Lines and angles are the basic shapes in geometry. Lines are figures made up of infinitely many points extending indefinitely in both directions; they are straight and have negligible depth or width. There are a variety of lines you will learn about, such as perpendicular lines, intersecting lines, and transversal lines. An angle is a figure in which two rays emerge from a common point. You may also come across alternate and corresponding angles in this field. Geometric shapes and their properties form one of the most practical branches of mathematics. This concept is mostly taught in Class 7 and Class 9.
Definition of Lines and Angles
As we have discussed, both lines and angles form the base for any shape in geometry. We cannot draw a two-dimensional or three-dimensional shape without using lines and angles, so it is very necessary to learn the definitions of both terms.
Here, the basic definitions and properties of both lines and angles are given. This will give students a basic knowledge of these geometrical terms.
What are Lines?
A line is a straight one-dimensional figure that extends infinitely in opposite directions. A line can be horizontal or vertical, and can be drawn from left to right or from top to bottom.
What are Angles?
An angle is the shape formed when two rays meet at a single common endpoint. Angles are measured in degrees (°) or radians; a complete rotation is equal to an angle of 360 degrees. An angle is represented by the symbol '∠'.
Types of Lines and Angles
There are various types of lines and angles in geometry based on the measurements and different scenarios. Let us learn here all those lines and angles along with their definitions.
Types of Lines
Lines are basically categorized as:
- Line segment
- Ray
Based on the relationships between lines, they are further classified as:
- Parallel Lines
- Perpendicular Lines
- Transversal Lines
A line segment is a part of a line with two end-points. It is the shortest distance between two points and has a fixed length.
A ray is a part of a line, which has a starting point and extends infinitely in one direction.
When two lines meet at a single point and form a right angle with each other, they are called perpendicular lines. In the figure, you can see that lines AB and CD are perpendicular to each other.
Two lines are said to be parallel when they do not meet or intersect at any point in a plane. In the figure, lines PQ and RS are parallel to each other.
When a line intersects two lines at distinct points, it is called a transversal. In the figure, a transversal l intersects two lines at points P and Q.
Types of Angles
Angles are basically classified as:
- Acute Angle (< 90°)
- Right Angle (= 90°)
- Obtuse Angle (> 90°)
- Straight Angle (= 180°)
Based on the relation between two angles, they are further classified as:
- Supplementary Angles
- Complementary Angles
- Adjacent Angles
- Vertically Opposite Angles
If the inclination between the arms is less than a right angle, it is called an acute angle.
If the inclination between the arms is more than a right angle, it is called an obtuse angle.
If the arms form an angle of 90 degrees between them, it is called a right angle.
If the arms form an angle of 180 degrees between them, it is called a straight angle.
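To make these cutoffs concrete, here is a minimal sketch in Python (our illustration, not part of the original lesson) that classifies an angle by its degree measure; the function name and the inclusion of reflex angles, which this article defines later, are our own choices:

```python
def classify_angle(degrees):
    """Classify an angle (in degrees) by its measure."""
    if 0 < degrees < 90:
        return "acute"
    elif degrees == 90:
        return "right"
    elif 90 < degrees < 180:
        return "obtuse"
    elif degrees == 180:
        return "straight"
    elif 180 < degrees < 360:
        return "reflex"  # covered under "Properties of Angles" below
    else:
        raise ValueError("expected a measure strictly between 0 and 360")

print(classify_angle(45))   # acute
print(classify_angle(135))  # obtuse
```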
Two angles which sum up to 90 degrees are called complementary angles.
Two angles which sum up to 180 degrees are called supplementary angles.
Two angles which have a common side and a common vertex are called adjacent angles. In the following figure, ∠α and ∠β are adjacent angles.
Vertically Opposite Angles
When two lines intersect at a common point (vertex), the two pairs of angles formed opposite each other are called vertically opposite angles. In the figure given below:
∠POR = ∠SOQ and ∠POS = ∠ROQ
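As a short worked example (ours, not from the original figure): suppose the two lines intersect so that ∠POR = 130°. Then, combining the supplementary and vertically opposite angle facts above:

```latex
\angle POR = \angle SOQ = 130^{\circ} \quad \text{(vertically opposite)} \\
\angle POR + \angle POS = 180^{\circ} \implies \angle POS = 50^{\circ} \quad \text{(angles on a straight line)} \\
\angle POS = \angle ROQ = 50^{\circ} \quad \text{(vertically opposite)}
```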
Properties of Lines and Angles
Similar to other shapes and figures in geometry, lines and angles have their own properties. Let us see what they are.
Properties of Lines
- Collinear points are a set of three or more points which lie on the same line.
- The points which do not lie on the same line are called non-collinear points.
Note: A given set of three points is either collinear or non-collinear; it cannot be both at the same time.
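One handy coordinate test, added here as an illustration (it is not part of the original list): points A(x₁, y₁), B(x₂, y₂), and C(x₃, y₃) are collinear exactly when the triangle they would form has zero area, that is, when

```latex
x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2) = 0
```

For example, (1, 1), (2, 2), and (3, 3) give 1(2 − 3) + 2(3 − 1) + 3(1 − 2) = −1 + 4 − 3 = 0, so the three points are collinear.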
Properties of Angles
- An angle is a figure in which two rays emerge from a common point. This point is called the vertex of the angle and the two rays forming the angle are called its arms or sides.
- An angle which is greater than 180 degrees but less than 360 degrees is called a reflex angle.
- If two adjacent angles add up to 180 degrees, they form a linear pair of angles. In the following figure, ∠a and ∠b form a linear pair.
- When two lines intersect each other, the two opposite pairs of angles formed are called vertically opposite angles. In the following figure, ∠A and ∠B are vertically opposite angles. Another pair is ∠C and ∠D.
Frequently Asked Questions on Lines And Angles
What are the five types of Angles?
The five types of angles are:
- Acute Angle
- Right Angle
- Obtuse Angle
- Straight Angle
- Reflex Angle
What are the properties of Lines and Angles?
If two parallel lines are intersected by a transversal, then (see the worked example below):
- The vertically opposite angles are equal
- The corresponding angles are equal
- The alternate exterior and alternate interior angles are equal
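For instance (a worked example of our own, not from the source): if the transversal makes an angle of 70° with one of the two parallel lines, then

```latex
\text{corresponding angle} = 70^{\circ}, \qquad \text{alternate interior angle} = 70^{\circ}, \qquad \text{co-interior angle} = 180^{\circ} - 70^{\circ} = 110^{\circ}
```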
What are Lines and their types?
Lines are figures that are made up of infinite points extending indefinitely in both directions. The types of lines are:
- Horizontal lines
- Vertical lines
- Parallel lines
- Perpendicular lines
---
Historically, most maps were hand-drawn, but with the advent of computer technology came more advanced maps created with the aid of satellite technology. Geographic information science (GIS), sometimes also referred to as geographic information systems, uses computers and satellite imagery to capture, store, manipulate, analyze, manage, and present spatial data. GIS primarily uses layers of information and is often used to make decisions in a wide variety of contexts. An urban planner might use GIS to determine the best location for a new fire station, while a biologist might use GIS to map the migratory paths of birds. We use GIS to get navigation directions from one place to another, layering place names, buildings, and roads.
One difficulty with map-making, even when using advanced technology, is that the Earth is roughly a sphere while maps are generally flat. When converting the spherical Earth to a flat map, some distortion always occurs. A map projection, or a representation of Earth's surface on a flat plane, always distorts at least one of these four properties: area, shape, distance, and direction. Some maps preserve three of these properties while significantly distorting another; other maps seek to minimize overall distortion but distort each property somewhat. So, which map projection is best? That depends on the purpose of the map. The Mercator projection, while significantly distorting the size of places near the poles, preserves angles and shapes, making it ideal for navigation.
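For the mathematically inclined, the standard spherical Mercator equations (added here for illustration; they are not part of the original text) map longitude λ and latitude φ to flat-map coordinates, where R is the Earth's radius and λ₀ is the central meridian:

```latex
x = R(\lambda - \lambda_0), \qquad y = R \ln\!\left[\tan\!\left(\frac{\pi}{4} + \frac{\varphi}{2}\right)\right]
```

Because y grows without bound as φ approaches the poles, polar regions are stretched enormously, which is exactly the size distortion described above, even as angles and local shapes are preserved.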
The Winkel Tripel projection is so-named because its creator, Oswald Winkel, sought to minimize three kinds of distortion: area, direction, and distance. The National Geographic Society has used it since 1998 as the standard projection of world maps.
All maps have a purpose, whether it is to guide sailing ships, help students create a more accurate mental map of the world, or tell a story. The map projection, color scheme, scale, and labels are all decisions made by the mapmaker. Some have argued that the widespread use of the Mercator projection, which makes Africa look smaller relative to North America and Eurasia, led people to minimize the importance of Africa's political and economic issues. Just as texts can be critiqued for their style, message, and purpose, so too can maps be critiqued for the information and message they present.
The spatial perspective, and answering the question of “where,” encompasses more than just static locations on a map. Often, answering the question of “where” relates to movement across space. Diffusion refers to the spreading of something from one place to another, and might relate to the physical movement of people or the spread of disease, or the diffusion of ideas, technology, or other intangible phenomena. Diffusion occurs for different reasons and at different rates. Just as static features of culture and the physical landscape can be mapped, geographers can also map the spread of various characteristics or ideas to study how they interact and change.
When we describe places, we can discuss their absolute and relative location and their relationship and interaction with other places. As regional geographers, we can dig deeper and explore both the physical and human characteristics that make a particular place unique. Geographers explore a wide variety of spatial phenomena, but the discipline can roughly be divided into two branches: physical geography and human geography. Physical geography focuses on natural features and processes, such as landforms, climate, and water features. Human geography is concerned with human activity, such as culture, language, and religion. However, these branches are not exclusive. You might be a physical geographer who studies hurricanes, but your research includes the human impact from these events. You might be a human geographer who studies food, but your investigations include the ecological impact of agricultural systems. Regional geography takes this holistic approach, exploring both the physical and human characteristics of the world’s regions.
Defining a Geographic Information System
Up to this point, the primary concern of this chapter was to introduce concepts essential to geography that are also relevant to geographic information systems (GIS). Furthermore, the introduction of these concepts was prefaced by an overview of how we think spatially and the nature of geographic inquiry. This final section is concerned with defining a GIS, describing its use, and exploring its future.
So what exactly is a GIS? Is it computer software? Is it a collection of computer hardware? Is it a service that is distributed and accessed via the Internet? Is it a tool? Is it a system? Is it a science? The answer to all these questions is, “GIS is all of the above – and more.”
From a software perspective, a GIS consists of a particular type of computer program capable of storing, editing, processing, and presenting geographic data and information as maps. There are several GIS software providers, such as Environmental Systems Research Institute Inc. (Esri), which distributes the ArcGIS platform. Though companies like Google provide online mapping services and interfaces, such as Google Earth and Google Maps, such services are not currently considered fully-fledged GIS platforms. There are also open-source GIS options, such as QGIS, which is freely distributed and maintained by the open-source community. All GIS software, regardless of vendor, consists of a database management system that is capable of handling and integrating two types of data: spatial data and attribute data.
Spatial data refer to the real-world geographic objects of interest, such as streets, buildings, lakes, and countries, and their respective locations. In addition to location, each of these objects also possesses certain traits of interest, or attributes, such as a name, number of stories, depth, or population. GIS software keeps track of both the spatial and attribute data and permits us to link the two types of data together to create information and facilitate analysis. One popular way to describe and to visualize a GIS is picturing it as a cake with many layers. Each layer of the cake represents a different geographic theme, such as water features, buildings, and roads, and each layer is stacked one on top of another.
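As a minimal illustrative sketch (ours, not from any particular GIS product), linking spatial data to attribute data and stacking thematic layers might look like this in Python; all feature names and values here are invented for illustration:

```python
# Each feature pairs spatial data (a location) with attribute data (traits).
# A layer groups features sharing one geographic theme, like the "layer cake".
buildings = [
    {"geometry": (40.7128, -74.0060), "attributes": {"name": "City Hall", "stories": 3}},
    {"geometry": (40.7580, -73.9855), "attributes": {"name": "Tower", "stories": 58}},
]
water_features = [
    {"geometry": (40.7794, -73.9632), "attributes": {"name": "Reservoir", "depth_m": 12}},
]

# Stack the layers by theme, one on top of another.
gis_layers = {"buildings": buildings, "water": water_features}

# Linking the two data types lets us ask attribute questions about locations:
tall = [f["attributes"]["name"]
        for f in gis_layers["buildings"]
        if f["attributes"]["stories"] > 10]
print(tall)  # ['Tower']
```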
As hardware, a GIS consists of a computer, memory, storage devices, scanners, printers, global positioning system (GPS) units, and other physical components. If the computer is situated on a network, the network can also be considered an integral component of the GIS because it enables us to share data and information that the GIS uses as inputs and creates as outputs.
As a tool, a GIS permits us to maintain, analyze, and share a wealth of data and information. From the relatively simple task of mapping the path of a hurricane to the more complex task of determining the most efficient garbage collection routes in a city, a GIS is used across the public and private sectors. Online and mobile mapping, navigation, and location-based services are also personalizing and democratizing GIS by bringing maps and mapping to the masses.
These are just a few definitions of a GIS. Like several of the geographic concepts discussed previously, there is no single or universally accepted definition of a GIS. There are probably just as many definitions of GIS as there are people who use GIS. In this regard, it is the people like you who are learning, applying, developing, and studying GIS in new and compelling ways that unify it.
Three Approaches to GIS
In addition to recognizing the many definitions of a GIS, it is also constructive to identify three general and overlapping approaches to understanding GIS – the application approach, the developer approach, and the scientific approach. Though most GIS users would probably identify with one approach more than another, they are not mutually exclusive. As GIS and, more generally, information technology advance, the following categories will be transformed and reshaped accordingly.
The application approach to GIS considers a GIS primarily to be a tool. This is also perhaps the most common view of a GIS. From this perspective, a GIS is used to answer questions, support decision making, maintain an inventory of geographic data and information, and, of course, make maps. As a tool, there are arguably certain skills that should be acquired and required in order to use and apply a GIS properly. The application approach to a GIS is more concerned with using and applying GIS to solve problems than the GIS itself.
For instance, suppose we want to determine the best location for a new supermarket. What factors are important behind making this decision? Information about neighborhood demographics, existing supermarkets, the location of suppliers, zoning regulations, and available real estate are all critical to this decision. A GIS platform can integrate such information that is obtained from the census bureau, realtors, the local zoning agency, and even the Internet. A suitability analysis can then be carried out with the GIS, the output of which will show the best locations for the supermarket given the various local geographic opportunities (e.g., demographics/consumers) and constraints (e.g., supply chain, zoning, and real estate limitations) that exist.
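A toy version of such a suitability analysis, written as a sketch under our own assumptions (the sites, scores, and weights are all made up, and a real GIS would work with spatial layers rather than a small table), might look like this:

```python
# Candidate supermarket sites scored 0-10 on each decision factor.
candidates = {
    "Site A": {"demographics": 8, "competition": 6, "supply_chain": 7, "zoning": 9},
    "Site B": {"demographics": 6, "competition": 9, "supply_chain": 5, "zoning": 8},
    "Site C": {"demographics": 9, "competition": 3, "supply_chain": 8, "zoning": 4},
}

# Weights reflect how much each factor matters to the decision (they sum to 1).
weights = {"demographics": 0.4, "competition": 0.2, "supply_chain": 0.2, "zoning": 0.2}

def suitability(scores):
    """Weighted-overlay score: higher means a more suitable location."""
    return sum(weights[factor] * value for factor, value in scores.items())

ranked = sorted(candidates, key=lambda site: suitability(candidates[site]), reverse=True)
for site in ranked:
    print(site, round(suitability(candidates[site]), 2))
# Site A 7.6 / Site B 6.8 / Site C 6.6
```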
There are several professional communities and organizations concerned with the use and application of a GIS, such as the Urban and Regional Information Systems Association (http://urisa.org) and the Global Spatial Data Infrastructure Association.
Unlike the previous example in which a GIS is applied to answer or solve a particular question, the developer approach to GIS is concerned with the development of the GIS as a software or technology platform. Rather than focusing on how a GIS is used and applied, the developer approach is concerned with improving, refining, and extending the tool and technology itself and is mainly in the realm of computer programmers and software developers.
The ongoing integration and evolution of GIS, maps, the Internet, and web-based mapping can be considered an outcome of the developer approach to GIS. In this regard, delivering maps, navigation tools, and user-friendly GIS to people via the Internet is the central challenge at hand. The underlying logic and computer code that permit us to ask questions about how to get from point A to point B on a navigation website, or to see where a new restaurant or open house is located on a web-based map is the domain of GIS programmers and developers. The Open Source Geospatial Foundation is another example of a community of GIS developers working to build and distribute open-source GIS software.
It is the developer approach to GIS that drives and introduces innovation and is informed and guided by the existing needs and future demands of the application approach. As such, it is indeed on the cutting edge, it is dynamic, and it represents an area for considerable growth in the future.
The scientific approach to GIS not only dovetails with the applications and developer approaches but also is more concerned with broader questions and how geography, cognition, map interpretation, and other geospatial issues such as accuracy and errors are relevant to GIS and vice versa. This particular approach is often referred to as geographic information science (GIScience), and it is also interested in the social consequences and implications of the use and diffusion of GIS technology. From exploring the propagation of error to examining how GIS and related technology are redefining privacy, GIScience is, at the same time, an agent of change as well as one of understanding.
In light of the rapid rate of technological and GIS innovation, in conjunction with the widespread application of GIS, new questions about GIS technology and its use are continually emerging. One of the most discussed topics concerns privacy, and in particular, what is referred to as locational privacy. In other words, who has the right to view or determine your geographic location at any given time? Your parents? Your school? Your employer? Your cell phone carrier? The government or police? When are you willing to divulge your location? Is there a time or place where you prefer to be “off the grid” or not locatable? Such questions concerning locational privacy were of relatively little concern a few years ago. However, with the advent of GPS and its integration into cars and other mobile devices, questions, debates, and even lawsuits concerning locational privacy and who has the right to such information are rapidly emerging.
Future of GIS
The definitions and approaches to GIS described previously illustrate the scope and breadth of this particular type of information technology. Furthermore, as GIS becomes more accessible and widely distributed, there will always be new questions to be answered, new applications to be developed, and innovative technologies to integrate.
One notable development is the emergence of what is called web-based GIS, or Web GIS. Web GIS refers to the integration of the vast amounts of content available on the Internet (e.g., text, photographs, video, and music) with geographic information, such as location. Adding geographic information to such content is called geotagging and is similar to geocoding. This integration opens up new ways to access, search, organize, share, and distribute information.
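A stripped-down sketch of geotagging (ours; the record fields and the bounding-box query are invented for illustration) shows the idea of attaching location to ordinary web content and then searching by place:

```python
# Geotagging: attach a latitude/longitude to ordinary content records.
photos = [
    {"title": "Beach sunset", "lat": 27.77, "lon": -82.64},
    {"title": "Museum visit", "lat": 40.78, "lon": -73.96},
    {"title": "Harbor ferry", "lat": 40.70, "lon": -74.01},
]

def within(box, item):
    """True if the item's geotag falls inside a (south, west, north, east) box."""
    south, west, north, east = box
    return south <= item["lat"] <= north and west <= item["lon"] <= east

# Search content geographically: everything tagged inside greater New York City.
nyc = (40.5, -74.3, 41.0, -73.7)
print([p["title"] for p in photos if within(nyc, p)])
# ['Museum visit', 'Harbor ferry']
```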
Mapping mashups, or web-based applications that combine data and information from one source and map it with online mapping applications, are an example of Web GIS at work. There are mashups for nearly everything that can be assigned a location, from restaurants and music festivals to your photographs and favorite hikes.
The diffusion of GIS and the emergence of Web GIS have increased geographic awareness by lowering the barriers of viewing, using, and even creating maps and related geographic data and information. Though there are several benefits to this democratization of GIS, and more generally, information and technology, it should also be recognized that there are also consequences and implications.
As with any other technology, great care must be taken in the use and application of GIS. For instance, when was the last time you questioned what appeared on a map? For better or worse, maps are among the most authoritative forms of information and are the subject of Chapter 2 "Map Anatomy." As tomorrow's GIS practitioners, you will have the ability to greatly influence how decisions are made and how others view and relate to the world with the maps that you create in a GIS environment. What and how you choose to map is, therefore, a nontrivial exercise. Becoming more aware of our biases, limitations, and preferences permits us to take full advantage of geographic information systems with confidence.
---
The term Lithic stage refers to the cultures of the post-glacial hunters and collectors in South America. The stage derived its name from the first appearance of flaked stone tools. Throughout South America there are stone tool traditions of the Lithic stage, such as the fluted fishtail points, that reflect localized adaptations to the diverse habitats of the continent. During the Lithic stage, people lived in small, mobile groups that survived on hunting and fishing. The intensive and continual use of plants and animals eventually led to genetic changes in some of those species. This lifestyle continued until around 5000 BC, when people started to use domesticated plants. One of the leading figures in the study of this stage is Alex Krieger, who has documented hundreds of sites that have yielded crude, percussion-flaked tools. The most convincing evidence for the stage is based upon data recovered from sites in South America where such crude tools have been found. Examples include the Clovis culture and Folsom tradition groups. The Lithic stage was followed by the Archaic stage.
For this reason, the alternative terms Precontact Americas, Pre-Colonial Americas, and Prehistoric Americas are in use; in areas of Latin America, the term used is Pre-Hispanic. Other civilizations were contemporary with the colonial period and were described in European historical accounts of the time. A few, such as the Maya civilization, had their own written records, but because many Christian Europeans of the time viewed such texts as heretical, men like Diego de Landa destroyed many texts in pyres, even while seeking to preserve native histories. Only a few documents have survived in their original languages, while others were transcribed or dictated into Spanish, giving modern historians glimpses of ancient culture. Indigenous American cultures continued to evolve after the pre-Columbian era; many of these peoples and their descendants continue traditional practices while adapting new cultural practices and technologies into their lives. Today, the study of pre-Columbian cultures is most often based on scientific methods.
Asian nomads are thought to have entered the Americas via the Bering Land Bridge, now the Bering Strait. Genetic evidence found in Amerindians' maternally inherited mitochondrial DNA supports the theory of multiple genetic populations migrating from Asia. Over the course of millennia, Paleo-Indians spread throughout North and South America. Exactly when the first group of people migrated into the Americas is the subject of much debate. One of the earliest identifiable cultures was the Clovis culture, with sites dating from some 13,000 years ago, though older sites dating back to 20,000 years ago have been claimed. Some genetic studies estimate the colonization of the Americas dates from between 40,000 and 13,000 years ago. The chronology of migration models is currently divided into two general approaches. The first is the short chronology theory, with the first movement beyond Alaska into the New World occurring no earlier than 14,000–17,000 years ago, followed by successive waves of immigrants. The second is the long chronology theory, which proposes that the first group of people entered the hemisphere at a much earlier date, possibly 50,000 years ago or more.
In that case, the Eskimo peoples would have arrived separately and at a much later date, probably no more than 2,000 years ago. The North American climate was unstable as the ice age receded; it finally stabilized by about 10,000 years ago, when climatic conditions became very similar to today's. Within this timeframe, roughly corresponding to the Archaic period, numerous archaeological cultures have been identified. The unstable climate led to widespread migration, with early Paleo-Indians soon spreading throughout the Americas and diversifying into many hundreds of culturally distinct tribes. The Paleo-Indians were hunter-gatherers, likely characterized by small, mobile bands consisting of approximately 20 to 50 members of an extended family; these groups moved from place to place as preferred resources were depleted and new supplies were sought. During much of the Paleo-Indian period, bands are thought to have subsisted primarily through hunting now-extinct giant land animals such as mastodon. Paleo-Indian groups carried a variety of tools.
Southwest Florida is the region along the southwest Gulf coast of the U.S. state of Florida; for some purposes, the counties of DeSoto and Hendry are included. The region includes several metropolitan areas, among them the North Port-Bradenton-Sarasota MSA, the Cape Coral-Fort Myers MSA, and the Naples-Marco Island MSA. The most populous county in the region is Lee County. Inland counties are notably rural, with the primary economic driver being agriculture; important products grown in the area include tomatoes and sugarcane. Agricultural harvesting in Southwest Florida employs approximately 16,000 seasonal workers, 90 percent of whom are thought to be migrants. Each county in the region has its own county government. Within each county there are self-governing cities; the remaining majority of land in each county is controlled directly by the county government. It is common for incorporated municipalities to contract county services in order to save costs. The region is designated as one of Florida's four districts for the Committee of Southern Historic Preservation; the district has been represented by Tommy Stolly since 2013.
Southwest Florida is served by major highways, including the Tamiami Trail. Long-term cooperative infrastructure planning is coordinated by the Southwest Florida Regional Planning Council and, in heavily populated Lee County, by county-level planning bodies. Greyhound Lines serves several locations in Southwest Florida, including Bradenton, Fort Myers, Port Charlotte, Punta Gorda, and Sarasota. The area's secondary airport, Sarasota-Bradenton International Airport, served 1.34 million passengers in 2009. Seminole Gulf Railway provides freight services throughout Southwest Florida. Tourism is an economic driver in the area, and many residents live in the area only during the winter months. Attractions include St. Armands Circle on St. Armands Key, the Edison and Ford Winter Estates in Fort Myers, Lake Okeechobee (renowned for fishing), Naples Botanical Garden, Naples Zoo at Caribbean Gardens, and the Brighton Seminole Indian Reservation, where the Seminole nation operates a sizable casino. Florida Gulf Coast University belongs to the 12-campus State University System of Florida; FGCU competes in the Atlantic Sun Conference in NCAA Division I sports.
Several minor league teams and major NCAA Division I teams play in Southwest Florida, and Florida is the home of Major League Baseball spring training.
Poverty Point is a prehistoric earthworks site of the Poverty Point culture, now a U.S. National Monument and World Heritage Site located in the Southern United States. It is 15.5 miles from the current Mississippi River. Poverty Point comprises several earthworks and mounds built between 1650 and 700 BC, during the Archaic period in the Americas, by a group of Native Americans of the Poverty Point culture. The culture extended 100 miles across the Mississippi Delta. The 910-acre site, which has been described as the largest and most complex Late Archaic earthwork occupation and ceremonial site yet found in North America, is a registered National Monument. The monument was brought to the attention of archaeologists in the early 20th century, and since then various excavations have taken place at the site. Scholars have advanced various theories regarding the purpose of the site, including religious purposes; other writers have proposed pseudo-archaeological and New Age associations. The complex attracts many tourists as a destination.
Poverty Point is constructed entirely of earthworks. The core of the site measures approximately 500 acres, although archaeological investigations have shown that the total occupation area extended for more than three miles along the river terrace. The monumental construction is a group of six concentric, crescent-shaped ridge earthworks, and the site has several mounds both outside and inside the ring earthworks. The name Poverty Point came from the plantation which once surrounded the site. In January 2013, the United States Department of the Interior nominated Poverty Point for inclusion on the UNESCO World Heritage List. State Senator Francis C. Thompson of Delhi in Richland Parish said the matter was not just a local or even state issue but one of international importance: the prestige of having a World Heritage Site in the region and state would be of great significance both culturally and economically. The designation makes Poverty Point the first World Heritage Site in Louisiana. The main part of the monument is the six concentric curving earthworks located in the center of the site.
Each ridge is separated from the next by a corridor of earth. Dividing the ridges into three sections are two ramps that slope inward, leading to Bayou Maçon. Each of the ridge earthworks is about three feet high; archaeologists believe they were originally five feet high but have been worn down by agricultural ploughing over the last few centuries. The approximate diameter of the outermost ridge is three-quarters of a mile, while the innermost ridge's diameter is about three-eighths of a mile. Alongside these ridges are other earthworks, primarily platform mounds. The largest of these, Mound A, is to the west of the ridges and is roughly T-shaped when viewed from above. Many have interpreted it as being in the shape of a bird and as an "Earth island." Researchers have learned that Mound A was constructed quickly, probably over a period of less than three months. Prior to construction, the vegetation covering the site was burned.
Mean sea level (MSL) is an average level of the surface of one or more of Earth's oceans, from which heights such as elevations may be measured. A common and relatively straightforward mean sea-level standard is the midpoint between mean low and mean high tide at a particular location. Sea levels can be affected by many factors and are known to have varied greatly over geological time scales. The careful measurement of variations in MSL can offer insights into ongoing climate change, and the term "above sea level" generally refers to above mean sea level. Precise determination of a mean sea level is a difficult problem because of the many factors that affect sea level. Sea level varies quite a lot on several scales of time because the sea is in constant motion, affected by the tides, atmospheric pressure, local gravitational differences, salinity, and so forth. The easiest way MSL may be calculated is by selecting a location and calculating the mean sea level at that point: for example, a period of 19 years of hourly level observations may be averaged and used to determine the mean sea level at some measurement point.
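As a minimal sketch of that averaging idea (ours; the readings are made up, and a real MSL computation would span roughly 19 years of hourly data while handling gaps and datum corrections):

```python
# Hourly tide-gauge readings in metres relative to the gauge's local datum.
# A real MSL computation would average ~19 years of such hourly observations.
hourly_readings = [1.82, 2.10, 2.45, 2.31, 1.95, 1.60, 1.42, 1.58, 1.90, 2.20]

mean_sea_level = sum(hourly_readings) / len(hourly_readings)
print(f"Mean sea level at this station: {mean_sea_level:.3f} m above local datum")
```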
One measures the values of MSL with respect to the land; hence a change in MSL can result from a real change in sea level, or from a change in the height of the land on which the tide gauge operates. In the UK, the Ordnance Datum is the sea level measured at Newlyn in Cornwall between 1915 and 1921; prior to 1921, the datum was MSL at the Victoria Dock. In Hong Kong, "mPD" is a surveying term meaning "metres above Principal Datum" and refers to heights above a datum 1.230 m below average sea level. In France, the Marégraphe in Marseilles has measured sea level continuously since 1883 and is used as the official sea level for part of continental Europe and much of Africa. Elsewhere in Europe, vertical elevation references are made to the Amsterdam Peil elevation. Satellite altimeters have been making precise measurements of sea level since the launch of TOPEX/Poseidon in 1992; a joint mission of NASA and CNES, TOPEX/Poseidon was followed by Jason-1 in 2001. Height above mean sea level is the elevation or altitude of an object relative to the average sea level datum.
It is used in aviation, where some heights are recorded and reported with respect to sea level, and in the atmospheric sciences. An alternative is to base height measurements on an ellipsoid of the entire Earth; in aviation, the ellipsoid known as World Geodetic System 84 is increasingly used to define heights, and differences of up to 100 metres exist between this ellipsoid height and mean tidal height. Another alternative is to use a vertical datum such as NAVD88. When referring to geographic features such as mountains on a topographic map, the elevation of a mountain denotes the highest point or summit and is typically illustrated as a small circle on the map with the AMSL height shown in metres, feet, or both. In the rare case that a location is below sea level, its elevation is negative; for one such case, see Amsterdam Airport Schiphol.
North America is a continent entirely within the Northern Hemisphere and almost all within the Western Hemisphere; it can be considered a subcontinent of the Americas. It is bordered to the north by the Arctic Ocean, to the east by the Atlantic Ocean, to the west and south by the Pacific Ocean, and to the southeast by South America and the Caribbean Sea. North America covers an area of about 24,709,000 square kilometres, about 16.5% of Earth's land area. It is the third largest continent by area, following Asia and Africa, and the fourth by population after Asia, Africa, and Europe. In 2013, its population was estimated at nearly 565 million people in 23 independent states, or about 7.5% of the world's population. North America was reached by its first human populations during the last glacial period, via crossing the Bering land bridge. The so-called Paleo-Indian period is taken to have lasted until about 10,000 years ago, and the Classic stage spans roughly the 6th to 13th centuries. The Pre-Columbian era ended with the migrations and arrival of European settlers during the Age of Discovery.
Present-day cultural and ethnic patterns reflect different kinds of interactions between European colonists, indigenous peoples, African slaves, and their descendants. European influences are strongest in the northern parts of the continent, while indigenous and African influences are relatively stronger in the south. Because of the history of colonialism, most North Americans speak English, Spanish, or French. The Americas are usually accepted as having been named after the Italian explorer Amerigo Vespucci by the German cartographers Martin Waldseemüller and Matthias Ringmann. Vespucci, who explored South America between 1497 and 1502, was the first European to suggest that the Americas were not the East Indies but a different landmass previously unknown by Europeans. In 1507, Waldseemüller produced a map on which he placed the word "America" on the continent of South America; he explained the rationale for the name in the accompanying book Cosmographiae Introductio. For Waldseemüller, no one should object to the naming of the land after its discoverer.
He used the Latinized version of Vespucci's name, but in its feminine form, "America," following the examples of "Europa" and "Africa." Later, other mapmakers extended the name America to the northern continent as well, beginning with Mercator's world map of 1538. Some argue that the convention is to use the surname for naming discoveries except in the case of royalty. A minutely explored belief that has been advanced is that America was named for a Spanish sailor bearing the ancient Visigothic name of Amairick; another is that the name is rooted in a Native American language. The term North America maintains various definitions in accordance with location and context. In Canadian English, North America may be used to refer to the United States and Canada together; alternatively, usage sometimes includes Greenland and Mexico, as well as offshore islands.
St. Lucie County, Florida
St. Lucie County is a county located in the state of Florida, in the United States. As of the 2010 census, the population was 277,789; the county seat is Fort Pierce. As of the 2015 census estimate, St. Lucie County had a population of 298,563. St. Lucie County is included in the Port St. Lucie, FL Metropolitan Statistical Area. The area was originally inhabited by the Ais tribe, a hunter-gatherer culture whose territory extended from south of the St. Johns River to the St. Lucie Inlet. Spanish explorers frequently encountered the tribe, as the Spanish treasure routes ran parallel to the coast in order to take advantage of the strong Gulf Stream current. The area was given several names by the Spanish, including Rio de Ays as well as Santa Lucia. The fabled 1715 Spanish treasure fleet sank off the area that is now St. Lucie County. During the early 19th century, the Spanish government issued several land grants in the area, one of which went to settler James Hutchinson. The grant contained 2,000 acres, and today the barrier island Hutchinson Island still retains his name. During the mid-1800s, Seminoles and runaway slaves sought refuge in the virtually uninhabited area.
By 1837 the Second Seminole War had broken out in Florida. In December 1837, a group of soldiers under the command of Lt. Colonel Benjamin K. Pierce sailed down the Indian River and established a fort, naming it after their commander; today the county seat of St. Lucie County is still known as Fort Pierce. In 1841, the United States government began issuing land grants under the Armed Occupation Act to Americans who were willing to settle the area, and several of these grants were within the boundaries of today's St. Lucie County. The Third Seminole War in 1851 saw the building of a second major American fort in the area, Fort Capron, located in what is today St. Lucie Village. From this point on, the area became more populated as settlers ventured down for health reasons. The Flagler railroad reached the area in the 1890s. Major industries in the area at the end of the 19th century included pineapple and seafood canning and cattle; citrus would not become a major crop until the early 1900s. The city of Fort Pierce was chartered in 1901. Up until 1905, the area had been part of Brevard County.
During the summer of 1905, St. Lucie County was created from part of Brevard County, with the county seat at Fort Pierce. Other settlements at the time within St. Lucie County's boundaries included Jensen, Anknona, Eldred, White City, Viking, St. Lucie, Vero, Quay, and others. Much of western St. Lucie County went in 1917 to form Okeechobee County. The 1920s saw increased land speculation and planned developments, such as Indrio and San Lucie, that never came to fruition due to the bust of 1929.
A basket is a container which is traditionally constructed from stiff fibers and can be made from a range of materials, including wood splints and cane. While most baskets are made from plant materials, other materials, such as horsehair and baleen, are also used. Baskets are generally woven by hand. Some baskets are fitted with a lid, while others are left open. Baskets serve utilitarian as well as aesthetic purposes, and some baskets are ceremonial, that is, religious, in nature. Prior to the invention of woven baskets, people used tree bark to make simple containers. These containers could be used to transport gathered food and other items, but would crumble after only a few uses. Weaving strips of bark or other plant material to support the bark containers would be the next step, and the last innovation appears to be baskets so tightly woven that they could hold water. Depending on soil conditions, baskets may or may not be preserved in the archaeological record. Sites in the Middle East show that weaving techniques were used to make mats and possibly baskets circa 8000 BCE.
Twined baskets date back to 7000 BCE in Oasisamerica, and baskets made with interwoven techniques were common by 3000 BCE. Baskets were originally designed as multi-purpose vessels to carry and store goods. The plant life available in a region affects the choice of material, which in turn influences the weaving technique. The practice of basket making has evolved into an art, and artistic freedom allows basket makers a wide choice of colors, sizes, and details. The carrying of a basket on the head, particularly by women, has long been practised; representations of this in Ancient Greek art are called Canephorae. The phrase "to hell in a handbasket" means to rapidly deteriorate, though the origin of this use is unclear. "Basket" is sometimes used as an adjective for a person born out of wedlock; this occurs more commonly in British English. "Basket" can also refer to a bulge in a man's crotch. Many materials have been used by basket makers, including wicker, straw, plastic, metal, bamboo, and palm.
A glacial period is an interval of time within an ice age that is marked by colder temperatures and glacier advances. Interglacials, on the other hand, are periods of warmer climate between glacial periods. The last glacial period ended about 15,000 years ago; the Holocene epoch is the current interglacial. A time when there are no glaciers on Earth is considered a greenhouse climate state. Within the Quaternary glaciation, there have been a number of glacials and interglacials. The last glacial period was the most recent glacial period within the current ice age, occurring in the Pleistocene epoch; the glacial advance reached its maximum extent about 18,000 BP. In Europe, the ice sheet reached northern Germany. Since orbital variations are predictable, computer models that relate orbital variations to climate can predict future climate possibilities, and work by Berger and Loutre suggests that the current warm climate may last another 50,000 years.
The term Woodland Period was introduced in the 1930s as a generic term for prehistoric sites falling between the Archaic hunter-gatherers and the agriculturalist Mississippian cultures. The Eastern Woodlands cultural region covers what is now eastern Canada south of the Subarctic region and the Eastern United States. This period is variously considered a developmental stage, a time period, a suite of technological adaptations or traits, and a family tree of cultures related to earlier Archaic cultures. Many Woodland peoples used spears and atlatls until the end of the period. The most cited technological distinction of this period was the widespread use of pottery, and the diversification of pottery forms and manufacturing practices. Intensive agriculture characterizes the Mississippian period, from ca. 1000–1400 CE, and may have continued up to European contact, around 500 years ago. Eastern Woodlands peoples lived in wigwams and longhouses. Clay for pottery was typically tempered with grit or limestone. Pots were usually conoidal or conical jars with rounded shoulders and slightly constricted necks; pots were coiled and paddled entirely by hand, without the use of fast rotation such as a pottery wheel.
Some pots were slipped or brushed with red ochre. Pottery, agriculture, and permanent settlements have often been thought of as the three defining characteristics of the Woodland period. Nevertheless, early Woodland sites were typical Archaic settlements, differing only in the use of basic ceramic technology. Most of these traits are evident in the Southeastern United States by 1000 BCE. In some areas, like South Carolina and coastal Georgia, Deptford culture pottery manufacture ceased after ca. 700 CE. In coastal regions, many settlements were near the coast, often near salt marshes, and people tended to settle along rivers and lakes in both coastal and interior regions for maximum access to food resources. Most groups relied heavily on white-tailed deer, but a variety of small and large mammals were hunted as well, including beaver and raccoon. Shellfish formed an important part of the diet, attested to by numerous shell middens along the coast. Seasonal foraging characterized the strategies of many interior populations, with groups moving strategically among dense resource areas.
Recently, evidence has accumulated of a reliance of Woodland peoples on cultivation in this period, at least in some localities; this is especially true for the middle period and perhaps beyond. C. Margaret Scarry states that in the Woodland periods, people diversified their use of plant foods and increased their consumption of starchy foods. They did so, however, by cultivating starchy seeds rather than by gathering more acorns. Researchers such as Yarnell refer to an indigenous crop complex as early as 3800 B.P. in parts of the region. The beginning of the Middle Woodland saw a shift of settlement to the interior, and as the Woodland period progressed, inter-regional trade of exotic materials greatly increased, to the point where a trade network covered most of the Eastern United States. Throughout the Southeast and north of the Ohio River, burial mounds of important people were very elaborate and contained a variety of mortuary gifts. The most archaeologically certifiable burial sites of this time were in Illinois and Ohio.
Deer are the ruminant mammals forming the family Cervidae. The two main groups are the Cervinae, including the muntjac, the red deer, the fallow deer, and the chital, and the Capreolinae, including the reindeer, the Western roe deer, and the moose (known as the elk in Eurasia). Female reindeer, and male deer of all species, grow antlers; in this they differ from permanently horned antelope, which are in the same order, Artiodactyla. The musk deer of Asia and the water chevrotain of tropical African and Asian forests are not usually regarded as true deer and form their own families, Moschidae and Tragulidae, respectively. Deer appear in art from Palaeolithic cave paintings onwards, and they have played a role in mythology. Their economic importance includes the use of their meat as venison, their skins as soft, strong buckskin, and their antlers as handles for knives. Deer hunting has been a sport since at least the Middle Ages. Deer live in a variety of biomes, ranging from tundra to tropical rainforest. While often associated with forests, many deer are ecotone species that live in transitional areas between forests and thickets and prairie and savanna.
The majority of deer species inhabit temperate mixed deciduous forest, mountain mixed coniferous forest, and tropical seasonal/dry forest. Clearing open areas within forests may to some extent actually benefit deer populations by exposing the understory and allowing the types of grasses and herbs that deer favor to grow; additionally, access to adjacent croplands may benefit deer. However, adequate forest or brush cover must still be provided for populations to grow. Fallow deer have been introduced to South Africa. There are species of deer that are highly specialized and live almost exclusively in mountains or swamps. Some deer have a distribution spanning both North America and Eurasia; examples include the caribou, which live in Arctic tundra and taiga, and moose, which inhabit taiga. Huemul deer of South America's Andes fill the ecological niches of the ibex and wild goat, with the fawns behaving more like goat kids. Mountain slope habitats vary from moist coniferous/mixed forested habitats to dry forests with alpine meadows higher up. The foothills and river valleys between the mountain ranges provide a mosaic of cropland and deciduous parklands.
The rare woodland caribou have the most restricted range, living at high altitudes in the subalpine meadows. Elk and mule deer both migrate between the alpine meadows and lower coniferous forests, and tend to be most common in this region. Elk also inhabit river valley bottomlands, which they share with white-tailed deer, and live in the aspen parklands north of Calgary and Edmonton. The adjacent Great Plains grassland habitats are left to herds of elk, American bison, and pronghorn antelope.
Shell rings are archaeological sites with curved shell middens completely or partially surrounding a clear space. The rings were sited next to estuaries that supported large populations of shellfish, and they have been reported in several countries, including Colombia, Peru, and the southeastern United States. Archaeologists continue to debate the origins and use of shell rings. Across what is now the southeastern United States, starting around 4000 BCE, people exploited wetland resources, creating large shell middens. Middens developed along rivers, but there is limited evidence of Archaic peoples along coastlines prior to 3000 BCE; Archaic sites on the coast may have been inundated by rising sea levels. Starting around 3000 BCE, evidence of the exploitation of oysters appears. During the period 3000 BCE to 1000 BCE, shell rings, large shell middens more or less surrounding open centers, began to appear. These shell rings are numerous in South Carolina and Georgia, but are also found scattered around the Florida peninsula.
Some sites have sand or sand-and-shell mounds associated with shell rings. Sites such as Horrs Island, in southwest Florida, supported sizable mound-building communities year-round; four shell and/or sand mounds on Horrs Island have been dated to between 4870 and 4270 Before Present. Groups living along the coast had become mostly sedentary by the Late Archaic period, living in permanent villages while making occasional foraging trips. Archaeologists have debated whether the shell rings resulted from the accumulation of middens in conjunction with circular villages or were deliberately constructed. Sites in Colombia and Japan, as well as in the southeastern United States, have been identified as containing shell rings. Residents of and visitors to the Sea Islands of South Carolina had long known of the shell rings; the first written accounts of shell rings in South Carolina and Georgia appeared early in the 19th century. Archaeologists surveyed some shell rings near the end of the 19th century, and scientific excavation of shell mounds in Japan began in the 1920s. About 60 shell rings had been identified in the southeastern United States by 2002. Most date from the Late Archaic period, but shell rings were also constructed during the Woodland and Mississippian periods.
Close to 100 circular and horseshoe-shaped shell mounds have been identified in the Kantō region of Japan; shell rings there have been dated from late in the Early Jōmon period until early in the Late Jōmon period. There are reports of a number of shell ring sites in Colombia, and archaeologists have continued to identify and investigate additional shell ring sites into the 21st century. Shell rings in the United States may form a complete ring or be open and C-shaped, and they may form a perfect circle or an oval. In almost all cases, the central area or plaza contains little or no shell or occupational debris.
---
Student Robotics: A Fourth-grade Student Explores Virtual Robots with VoxCAD
With open source software and guided directions from Science Buddies, students can explore the ways in which robotics engineers test designs before choosing which designs to prototype. This student put her own robots to the test—on her computer—and walked away with a blue ribbon at a local fair.

With many schools offering extracurricular or after-school robotics clubs and programs, more and more students are exploring robotics engineering. Hands-on projects like building an ArtBot or BristleBot make it easy for families to tackle a robotics building activity at home with fairly easy-to-come-by supplies like toothbrush heads, coin cell batteries, and plastic cups.
While making a cute bot that shuttles about on toothbrush bristles can be empowering and rewarding for kids, designing effective robots involves more than just the mechanics of assembly. Being able to test different approaches to a robot design or its materials before investing time and money in building offers many advantages for engineers. If the goal is to create a robot creature that can move quickly from Point A to Point B, which design will work best?
Building three different working models, each with different approaches to mobility, is not always a practical approach given issues of time, materials, and money. If, instead, an engineer can do some preliminary testing and gauge the benefits or drawbacks of various design options, she may be able to save time and money and invest energy working on the design that shows the most promise for a given challenge or need. One approach to evaluating designs involves using computer software, like VoxCAD, to simulate various designs and conditions. VoxCAD is an open source, cross-platform physics simulation tool originally developed by Jonathan Hiller in the Creative Machines Lab at Cornell University.
In the field, using simulation software can be an important pre-build and testing step for robotics engineers. In the classroom, simulation software allows students to explore robotics without prior engineering experience. With a suite of VoxCAD Project Ideas at Science Buddies, students can experiment with robotics engineering at the virtual level, no circuits, batteries, or soldering required.
"The neat thing about VoxCAD is that kids can jump straight in to the deep end," says Dr. Sandra Slutz, Lead Staff Scientist at Science Buddies. "It takes a lot of mechanical engineering, electronics, and even programming know-how to create robots with different mobility strategies, but using VoxCAD, a student whose curiosity is sparked can start designing those robots in just a few minutes without all the time it takes to develop those skills."
With robotics simulation, exploring robotics and comparing designs doesn't require building multiple robots. Instead, students can get started right at their computers. After mocking up, visualizing, and testing their three-dimensional ideas using VoxCAD, students who want to learn more about hands-on robotics engineering can explore circuit-based robot building projects in the robotics area at Science Buddies and move from virtual to real-world robotics design, building, and engineering.
Thinking 3D: Student Robotics
Laura was in 4th grade when her mom showed her a new VoxCAD project at Science Buddies. Laura, who wants to be a website developer in the future, was fascinated by the idea of designing three-dimensional robots and decided to give the introductory "Robot Race! Use a Computer to Design, Simulate, & Race Robots with VoxCAD" project a try.
"My mom showed me a VoxCAD video, and I became attached," says Laura. "I liked the way the creatures moved. I thought that was very interesting that the computer was able to bring them to life. And I wanted to learn how to do that."
When her mom told her about VoxCAD, Laura didn't have a science project assignment due. She chose to experiment with VoxCAD on her own. "I just thought it looked fun and wanted to try it," says the budding engineer, noting that robotics engineering wasn't an area of science she was already interested in or had explored before.
Laura enjoyed working with VoxCAD and trying different robot designs. In a video she created to accompany her project, Laura describes the movement of each design as the three-dimensional block-based robots move around on screen. She refers to the three models she created and tested as the "fastman snail," the "shimmier," and the "sidewinder," and her testing shows clear differences in the effectiveness of each. Using VoxCAD, she was able to bring the three robot designs to life on the screen and put them in motion to see how they would move and which would move farthest.
The best part of the experience, says Laura, was watching her creations move in the VoxCAD Physics Sandbox. "I learned to think in 3D," she adds. After finishing her project, Laura entered it in the science division of the Alameda County Fair where she won a first prize blue ribbon.
Congratulations to Laura!
To learn more about VoxCAD and to experiment with your own three-dimensional robot design and testing, see the following Project Ideas:
- Robot Race! Use a Computer to Design, Simulate, & Race Robots with VoxCAD
- Hard, Soft, or in Between? Changing Material Properties in Robot Design with VoxCAD *
- The World Is Your (Physics) Sandbox: Changing the Settings in a Robot Simulation with VoxCAD *
- Eco-Friendly Robots: Design the Most Energy-Efficient Racing Robot Using VoxCAD *
For suggestions about family robotics projects and activities and ways to engage your students with introductory robotics exploration, see: Bot Building for Kids and Their Parents: Celebrating Student Robotics, Create a Carnival of Robot Critters this Summer, Robot Engineering: Tapping the Artist within the Bot, and Family Robotics: Toothbrush Bots that Follow the Light.
Today, February 20, 2014, is Girl Day, part of Engineers Week. Don't miss the chance to make a difference in a student's life and future by taking the opportunity to introduce students to the world of engineering today and every day.
When the Irish immigrated to the United States around 1850, after the Great Potato Famine in Ireland, they arrived poor and without money, and they were subjected to prejudice and segregation. Because the Irish were classed as white upon entry to the United States, they were not excluded like African Americans and Asian immigrants, who were often denied entry into the United States because of their color and ethnic characteristics.
However, the Irish were poor and forced to live in the filthiest neighborhoods and alleys; most lived in basements or apartments that were poorly ventilated and damaged by sewage. Their low social status forced them to take jobs that were often dangerous, such as building railroads, because no employer would give an Irish man or woman a decent job. At this time in history, cities needed hard manual laborers, and because the Irish were unskilled and poor, they worked for lower wages than other ethnic groups would accept.
People felt threatened by the Irish because of their strong work ethic and their Catholic religion; employment signs would often say “Irish need not apply” (Kinsella, 1996-2010, para. 3). Catholic churches were often burnt down, and riots broke out protesting Irish immigrants. America in the 1850s regarded the Irish as poor, filthy criminals who would work for pennies, and many feared their upward movement in society, but eventually the Irish overcame the new world that showed them so much prejudice and discrimination.
After entering the country, the Irish were not only affected by poverty and prejudice; other events also plagued them, though some developments moved the Irish up in society. The dual labor market affected the Irish, because employers were not willing to give uneducated and unskilled people… During the 1800s the Irish began arriving in the United States. From the 1820s onwards, roughly 5 million Irish immigrants came to live in the United States. By the 1840s, almost half of all immigrants residing in the United States were Irish, and by the 1850s the share was still one-third (Kenny, 2008).
The reception of the Irish by native-born Americans was not one of warmth and acceptance. Fleeing Ireland was a matter of life and death for some, and the quest for a better life was hindered by the “unwelcome” mat placed before them when they arrived (The History Place). The period 1845-1849 was known as “The Great Famine” or “Great Hunger” in Ireland (University College Cork, Ireland). The potato, a staple on which more than one-third of the Irish population relied to survive, was overcome by a fungus known today as “potato blight.” Between 1846 and 1851, over 1 million Irish died of starvation and various hunger-related infectious diseases; many of those deaths were among the poor. It was believed that Ireland’s government had abandoned the people by not helping the hungry while continuing the exportation of food (University College Cork, Ireland). The Irish entered the United States through various routes: some took expensive US ships to Boston, and some gained access by walking over the border into New York from Canada (University College Cork, Ireland).
It was mostly poor refugees who were fleeing their famine-stricken homeland and the slums of Ireland to come to America, only to face prejudice, discrimination, and hostile American nativists (The History Place). Forced to live in basements, cellars, or one-room apartments, the Irish lived in their own section of each town, often referred to as Irish slums (The History Place). Landlords victimized the Irish settlers by charging $1.50 a week for a small room. Single-family homes were sub-divided into nine-by-eleven-foot rooms with no water,… “Remember, remember always, that all of us… are descended from immigrants and revolutionists.” [Franklin D. Roosevelt] Other factors that increased and reinforced this inflow were the decline in the birthrate as well as the growth of industry and urbanization in the United States. The United States in the 19th century remained a strong magnet to immigrants, with offers of jobs and land for farms. Earlier immigrants believed that in America the streets were “paved with gold,” and there were offerings of religious and political freedom besides. A German immigrant to Missouri wrote home about “[the] abundance of overbearing soldiers, haughty clergymen, and inquisitive tax collectors…” During the years 1890-1924 the reasons for immigration changed from past trends, and the kinds of immigrants changed as well. Jews came for religious freedom, Italians and Asians came for work, and Russians came to escape persecution from the powers in their home country. The draw of American jobs remained strong in this period, and America offered religious freedom to the many people facing tyrannical conditions in their respective countries. All these reasons were the cornerstone of America being called the “Land of Opportunities.”

The Immigrants to the U.S. during the 1870s-1920s

It was mainly the Irish and British who immigrated to America during this time period. The circumstances in which the Irish immigrated to America were quite different from those of the British, and the two groups also differed in their impact on the U.S. One of the reasons the Irish immigrated to the U.S. was the potato famine that killed over a million people. Apart from the famine conditions, the Irish were tired of British rule in their country. The ordinary Irishman was under the tyrannical control of the British landlords, and Ireland was a country of prolonged depression and social hardship during this period.
Ireland was so ravaged by economic collapse that in rural areas the average age of death was 19. In Journey of Hope, Miller and Mulholland show, through intimate letters, journals, and diaries of actual immigrants, how the Irish in America made their triumphant rise from adversity and prejudice to prosperity and prominence. The majority of the Irish immigrants came from the social class of tenant farmers. They had no expertise beyond farm work and were too poor to buy any land for themselves in America. They ranged in age from teenagers to young adults and were mainly Roman Catholic.
The second largest inflow into America was from the British, who immigrated for various reasons. Mostly professionals, independent farmers, and skilled workers, the British came simply to look for better work opportunities. Most immigrants from Britain were fairly young and Protestant. Cinel has noted that return migration from the United States to Italy took place between 1870 and 1929: a large number of Italians did not intend to settle permanently in the United States, but rather immigrated temporarily to make money in order to buy land in Italy.
After the Civil War began in 1861, immigration agents went to Europe to enlist recruits for the American industrial army. In 1864, Congress legalized contracts by which immigrants pledged the wages of their labor, for a term not to exceed twelve months, to repay the expenses of their journey to the U.S. This and other such moves were made to encourage immigration. But the year 1868 saw a repeal of the law. Even after the repeal, the American Emigrant Company still imported laborers until 1885, when Congress made the practice unlawful.
The immigrants who expected great work opportunities on American soil were also looked upon by politicians as potential voters. There were classes of immigrants who were considered “voting cattle,” herded by their bosses to work the machinery of politics. Although the immigrant vote did not seriously affect the outcome of elections, it gave rise to a serious debate over the rights and interests of the immigrants. Telushkin states that the Jews first arrived in New Amsterdam in 1654 and then in the Lower East Side in the early 20th century. It was the diversity of the immigrants that J.
Hector St. John de Crèvecoeur described when he said, “What, then, is this new man, the American? They are a mixture of English, Scotch, Irish, French, Dutch, Germans, and Swedes. From this promiscuous breed, that race, now called Americans, have arisen.”

Unique Characteristics of America for the Immigrants

The main reason most of the immigrants migrated was the tyrannical situation in their homelands. America was also attractive because young men were not forced to serve long years in the army. The immigrants had a genuine liking for the land of opportunities, the U.
S., since they could achieve what their parents could not. They also found in America a place where they could do whatever they wanted in matters of religion and politics.

Laws Restricting Immigration

Propaganda spread in favor of laws restricting immigration as a means of protecting the American wage earner. Restrictions on immigration started coming in from 1875; the first barred prostitutes and felons. In 1882 the government reacted to anti-immigrant feelings and added more restrictions barring the insane, the mentally disabled, and people likely to need public care.
In 1892 the U.S. further restricted the immigration of convicts, polygamists, prostitutes, people suffering from diseases, and people liable to become public charges. Another judgment, from more recent times, is pointed out by Patrick J. Buchanan: according to The Death of the West, the United States is no longer a healthy melting pot but instead a confused, tottering “conglomeration of peoples with almost nothing in common.”

BIBLIOGRAPHY

“America must be kept American.” Quote from President Coolidge on signing the Immigration Quota Law in 1924.
Quote on immigrants’ importance by Franklin D. Roosevelt.
The National Integration of Italian Return Migration, 1870-1929, by Dino Cinel.
The Death of the West: How Dying Populations and Immigrant Invasions Imperil Our Country and Civilization, by Patrick J. Buchanan.
The Golden Land: The Story of Jewish Immigration to America: An Interactive History With Removable Documents and Artifacts, by Joseph Telushkin.
Journey of Hope: The Story of Irish Immigration to America, by Kerby Miller and Patricia Mulholland Miller.
Quote by J. Hector St. John de Crèvecoeur.
I chose Italian ethnicity as the ethnic group I feel most related to. I researched and determined that the Italians immigrated to the United States. I would hear my great-aunts and uncles talking about our family “coming over on the boat,” and I was so young I never really understood that saying until I was in school and learned about those kinds of things. The Irish immigrated to the United States of America with promises of a better life. That was not the case upon arrival for the Irish settlers: they faced prejudice, segregation, and many other forms of discrimination.
Their treatment was very poor and unwelcoming, to say the least. The moment they stepped off the ships from Ireland, they were segregated into the most impoverished areas, seeking shelter in slums and attempting to fit their entire families into rooms no bigger than today’s average bedroom. As a group, the Irish were shunned and turned away from many job opportunities, confronted by signs stating “Irish need not apply.” Because the British still dominated the “New World,” the Irish were also persecuted for their Catholic religion.
The Irish Americans were subjected to a dual labor market. During the late 1800s, after the first large Irish immigration into America, Irish immigrants were considered the poorest of all the immigrants coming into the United States. Because of the constant prejudice against the Irish, they were kept at this poor standing by being offered only the lowest-paying, most backbreaking jobs available, leaving the higher-paying jobs for native-born American citizens. “During the 1850’s there was no group who seemed lower than the Irish.
Some of this was due to poverty but the Irish were also considered bad for the neighborhood. The term Redlining did not come into use until after the Fair Housing Act of 1934. During the 1800’s you could easily say the Irish were redlined. During the mid-1850’s there was the Know Nothing movement. This movement was designed to keep Irish Catholics from holding public office; the opposition was by Irish Protestants.” (Kinsella, 1996)

http://www.squidoo.com/irish-history-and-immigration-to-the-united-states

Irish History and Immigration to the United States
Explore Irish History and Cultural Values

This page is an exploration of Irish culture and history, particularly in relation to the immigration to the United States.
It includes documentaries about the motivating factors behind many Irish people immigrating to America during a relatively short period of time. This page focuses on some of the struggles that faced many immigrants, and how that has become a part of the American melting pot of cultures. The Irish people faced extreme difficulties moving across the ocean and setting up new lives in a new land. Despite many challenges, immigrants to the United States from Ireland and their descendants have made a rich and positive impact on US culture at large.
Using both academic resources in cultural anthropology and entertaining, informative documentaries and music videos, you will find this lens interesting if you are doing research about Irish history. This lens contains many links and resources of interest to anyone doing genealogy research about Irish Americans. It’s for Irish folks, college and high school students writing papers, and anyone else interested. If you have related information or links you would like me to consider adding, feel free to comment or send me a message! 🙂 The image of the crowned harp is an Irish symbol I found on Wikimedia Commons, attributed to Thomas Gun.

Important! Economic and religious factors were the primary reasons for mass immigration from Ireland to the United States, and the potato famine was an additional significant factor that helped trigger the sense of urgency to make the journey to America.

Videos About Immigration to the United States from Ireland

In only the decade that followed the famine in the mid-1800s, more than a quarter of the Irish population left their homelands and relocated to the US, and many more followed.
Despite hardships they made many great contributions to society in the U.S.

(Video: “Irish Immigration”: a brief documentary on Irish immigration to America, covering reasons for leaving, life upon arrival, cultural contributions, and current immigration.)

The Primary Factors that Motivated Irish Immigration to the United States

Desperate economic conditions in Ireland made employment opportunities in a new land look golden. Religious persecution toward the Roman Catholic majority of Irish citizens inspired a desire for religious freedom and acceptance.
Political unrest in Ireland made American democracy look attractive to Irish immigrants who hoped for a fairer political system in the US. The possibility for the common man to become a landowner also seemed more promising in the US.

The Impact of the Potato Famine

This single event triggered a mass immigration from Ireland to the United States. Poverty at a level of desperation and starvation grew worse and worse during the 19th century in Ireland, which began to motivate large waves of immigration from Ireland to the United States in the mid-1800s.
These economic conditions in Ireland were the result of a variety of factors, most importantly political domination by Britain and dependency on one significant crop, the potato. The potato had become increasingly popular and enabled significant population growth despite political unrest and religious persecution. Potatoes became the center of Irish agriculture because it was discovered that about twice as much food could be grown from potatoes as from other crops planted in the same-sized area.
This allowed the production of a healthy amount of food for the farmers, plus a surplus that could be used as an economic asset. By 1830, 35% of the Irish population depended on the potato harvest, both as their primary food source and as their source of work. This economic dependence on a single crop led to a collapse of Ireland's economy during the Potato Famine. The blight on the Irish potato crops was caused by an airborne fungus which caused the potatoes to become diseased. In September 1845 the potato crops were first discovered to be infected. By 1854, a quarter of the Irish population had emigrated to the United States.
This wave of mass immigration was given its sense of urgency by the potato famine, with underlying factors of poverty, religious persecution, and political unrest. America must have seemed to immigrants leaving Ireland like a true chance at a decent life. It was believed that they could find good work in the United States, although arriving immigrants found it difficult to gain employment in many fields due to cultural prejudices. Still, coming from a land with no jobs and no food, the possibility of hope in a new land seemed to many better than suffering the circumstances of life in Ireland at the time.
It was believed that the common man had better prospects not only in terms of gainful employment but also as a future landowner. The United States was also seen as a place of religious freedom, and many immigrants left hoping to create a better life for themselves by escaping religious intolerance and persecution. Democracy, freedom of speech, and religious tolerance were factors that went beyond the purely economic in motivating the search for a new home in a new land.

Scholarly Articles and Research about Irish Immigration

The American Wake:
Immigrants leaving Ireland for the United States knew that they would probably never see their families or homeland again.
The Irish Potato Famine: one of the most significant social conditions in Ireland, which increased the need for Irish citizens to seek a better future in another land.
An article on Irish immigration to the US on Associated Content.
Irish Famine: a lot of information.
The Journey to America: a scholarly discussion on Irish immigration.

The Severity of the Famine Was Devastating, and Could Have Been Entirely Avoided

The devastation of the potato famine is almost indescribable.
There had been eight million people in Ireland at the time farmers began to discover that all but ten percent of their food crops had been infected. Most of their primary food source was simply gone, and by 1847 more than half of the population was entirely reliant on this crop. Soon, about three million people became dependent on government-run soup kitchens for food, and people began to starve to death. Because poverty was so severe, many of the families of the deceased could not afford to bury their loved ones in coffins, and so they were laid to rest in shallow graves.
The situation was so severe that countries all around the world heard of the plight of the Irish and began to send aid. In a day and age without the means to communicate quickly over great distances, faraway places like Barbados, Jamaica, Italy, and France began to get word and tried to help the starving people. The gifts were many and generous, including over 200,000 pounds from the Quakers alone. With so much of the population affected by the potato famine, the donations were only able to go so far. People ate stale bread, and a little soup, if anything.
An eighth of the population actually slowly starved to death, and not all nations were as kind and generous. Britain had political domination over Ireland. Absentee landlords from England owned much of the land that the Irish people lived and worked on. Rents were high, wages were low, and a significant portion of the crops were ‘money crops’ that belonged to the absent landlords. The most terrible and ironic fact about the potato famine is that, during the blight, Ireland still grew, and was compelled to ship out, enough food to have covered the food needs of the whole country.
While the country was so desperately poor and without food, some ships came filled with supplies, but even more left carrying the meager good portions of the crops. This is one of the major factors in the animosity between the Irish and the English. Religious persecution had been a significant factor in the poverty and living conditions of the Irish, and was used as a form of political domination. Roman Catholics were forbidden by English law to do many things that might have made them more able to become self-sufficient and rise up against the Protestants who had allied themselves with the British.
To retain economic control, the British contrived laws meant to keep the majority of the Irish people, who were Roman Catholic, from improving their lot. Roman Catholics were forbidden to read and write, or to educate their children with any more skills than necessary to perform the laborious jobs the dominating overlords expected. Many of them chose to educate themselves in secret, at great risk. There are some letters from that time which survive, and those who took the risk to attempt to write and send them have created surviving historical documentation that presents a bleak picture.
The people were impoverished to begin with, and so the effects of the famine were disastrous. Many families had been struggling to pay their high rents and had to go without many things to continue to have shelter. They had to kill what livestock they had for food, or sell it to come up with money for rent. Their clothing was tattered and offered poor protection during the colder months. When their crops failed, they often looked for more laborious jobs in workhouses and on larger farms. In a weakened physical condition from lack of food, many became sick and unable to perform heavy labor.
This led many of the people to become homeless. They were often evicted when they became unable to pay their rent, and often under dramatic circumstances. Sometimes the landlord would pull them out of their homes and destroy the house in front of them. These things happened even at the height of the potato famine, when many of the people forcefully thrown out of their homes were already starving and sick. Not only did the majority of the Irish suffer starvation and sickness, but in desperation many were convicted of small crimes such as poaching or stealing food from storehouses.
This was treated as a very serious offense, without much leniency or understanding for the starving people who were being denied basic human rights. As a result, many of those found guilty of these “crimes” were sent forcefully to Australia to do hard labor in prison camps. Most of those who were separated and sent away on Australia-bound convict ships never saw Ireland or their families again. The English might have moved to send aid faster, as other countries did, but were reluctant. Not only were they importing food from a starving country, they declined to give much assistance to the people growing the food.
It was believed that if they gave the Irish money, they would use it to buy weapons and revolt. The idea of providing them free food out of soup kitchens was also not popular with the English, who were concerned that they would become accustomed to the free food and become lazy and overly dependent. All the while, the Irish peasant farmers carried carts full of potatoes to be collected for the British, pulling them by hand without the aid of livestock. They suffered hard labor with little or nothing to eat, and had to deliver food to others while watching their families and their animals slowly starve to death.
It was in this atmosphere that many chose to leave for other countries, knowing they would probably never see their families or their homeland again.

Videos About the Potato Famine in Ireland

Warning: this material may make you weep, particularly the third video with the letters from the young Irish girl about the famine.

(Video: “Irish Famine film”: a short film produced by Pathe News around 1905 that brought attention to famine in Ireland in that year.
The film has been altered and is used to draw similarities to the earlier famine of 1846-50.)

The American Wake

An unusual tradition known as “the American Wake” happened daily across Ireland and continued for about 75 years. It was a somber farewell among friends and loved ones before embarking on the journey of immigration across the ocean. Often more of a funeral than a celebration, the wake was held so that adult children who were leaving could mourn their parents, and be mourned by them, while all were still living.
The emigrant who was leaving would have visited friends and relatives prior to the wake to tell the news of their plan for departure. All who were close would come the night before the emigrant's departure to say final goodbyes, knowing that they would probably never see each other again. On the night of the wake, relatives and friends would spend time trying to impart their wisdom to the emigrant. They hoped life in the new land would be better for the person who was departing, but knew that the journey was risky and becoming established with few resources would be hard.
Elder relatives took this moment to advise the emigrant, many of whom were fairly young, on how to survive and make a life for themselves. In the most impoverished areas food sharing and refreshments were not offered, but a small amount of poteen might be brought and shared on rare occasions. In those areas worst affected there was generally no singing or dancing, and these gatherings were often filled with the wailing and lamenting of the women. Women were called upon to say a lament for the departing person and their families, much as one might speak of the departed in a modern funeral.
In a wailing kind of speech, a woman would acquaint the listener with the personal story of the virtues of the departing person, how sadly their skills and virtues would be missed, and how terrible the grief and suffering of the parents and relatives was because of this need to say goodbye. In areas that were less poverty-stricken, the American Wake included all of those elements, but was also a more festive occasion. There might be baking, cleaning, and preparation beforehand for a nice gathering. Visiting neighbors might also bring food, tea, stout, and other libations to share.
The lamentations continued, but were also sometimes mingled with dancing and singing to celebrate the life of a loved one and hope for the future. These festivities would continue late into the night, when older people would sit near the hearth and tell stories to the young seated on the floor around them. The next morning they would accompany the young emigrant to the docks for their departure. Travel by sea was risky, and known to be fraught with the potential of sickness or shipwreck. Traditionally, relatives left behind promised to pray for their safe passage and opportunities in America.
Those departing promised to pray for a good harvest, for restoration of health and better times for their families and loved ones, and that they would keep Ireland forever in their hearts. Some found ways to communicate by letter across the expansive ocean, but with a high rate of illiteracy and the distance involved, that happened only in the rarest and luckiest of cases.

After a Long Trip by Sea, the Irish Found Life in a New Land to Be Difficult

Life in a new land was not easy for the Irish immigrants who made it across the ocean. Many of them, sadly, did not make it.
The emigrants were already in poor health and had little money, and the conditions of travel were bad. The ships were overcrowded, lacked sufficient supplies, and sickness took many of the passengers during their three-month journey. So many of the people who left Ireland never made it to the United States that the vessels carrying the immigrants became known as coffin ships. During the years of the famine, boats constantly brought more refugees seeking a new home, the numbers totaling around a million within a decade. During the same period, around a million and a half died from starvation in Ireland.
Upon arrival, the new immigrants had to find places to live and work. This was challenging, because the American people were overwhelmed by the volume of very poor newcomers. The Irish immigrants were primarily farm workers who were not accustomed to or prepared for the industrialized cities they came to settle in. Many potential employers hung signs that said things like “No Irish Need Apply,” because of prejudices against the Irish people, whom they believed to be lazy and unskilled. The work these immigrants had done in Ireland had been primarily agricultural, while American culture was focused on industrialized production of goods.
The Irish were forced to take jobs that involved hard labor for low wages, usually in industries that were dangerous. After immigrating and setting up a meager home, many of the new immigrants died in job-related accidents while working in industries such as railroad building.

This Is a Very Beautiful and Deeply Inspiring Book: Anam Cara: A Book of Celtic Wisdom, by John O’Donohue

The Irish People Have Made Many Great Contributions to American Society

Music is one of the first things that may come to mind when you think of the impact of Irish culture in America.
There are many beautiful Irish songs, and the Irish people culturally are known to have produced many talented musicians and songwriters. One thing the Irish seem to value culturally more than some other peoples is music, and its ability to carry a story in a memorable and beautiful way. For Irish immigrants, preserving and performing music from their country of origin was both a way to feel at home and a way to share a sense of that with others. This tendency to appreciate and cultivate musical and artistic talents also helped to pass down historical lessons and cultural perspectives from one generation to the next.
Irish music is known for rich artistic imagery and storytelling in both serious and comic ways. The first video of ‘Oh Danny Boy’ shows some very beautiful photography of the Irish countryside. Can you imagine the newly immigrated people, remembering these places and knowing they would probably never see them again? Wanting to share these memories with their children in a new country was part of what motivated this sharing of oral history in song. The second version, chillingly beautiful and different, features Johnny Cash.

(Video: “Danny Boy Ireland”: the scenery of Ireland with the tenor voice of Michael Londra, http://www.lookaroundireland.com, http://www.michaellondra.com. Video: “Johnny Cash & Jimmie Rodgers – Danny Boy.”)

Books About Irish History
A Reading Book in Irish History, by P. W. Joyce
Irish History and the Irish Question, by Goldwin Smith and Hugh J. McCann
In Search of Ireland’s Heroes, by Carmel McCaffrey
The History of the Great Irish Famine, by John O’Rourke

Like Music, Dance Is an Art Form that Can Share a Story with Feeling

The Irish have made significant contributions to the arts not only in music, but in writing, theater, and dance.
As another art form, dance can be unique and expressive, and may tell a story and share feelings and ideas. Even now, the Irish influence on dance and storytelling can be strongly felt in America. The popularity of dance troupes that do Irish-style dancing has only increased over time. Modern groups like Riverdance sometimes retell moments in Irish history, such as in this clip, which shows a dance about the “American Wake” and immigration to the Americas as the potato famine affected Ireland.

(Video: Riverdance, “American Wake,” live from Geneva.)

Other Great Lenses about Irish History and Immigration to the U.S.

The Great Irish Famine: A Monument to the Great Famine. In the shadow of Ireland’s “holy” mountain, Croagh Patrick, stands a most unusual ship. It looks like a small 19th-century sailing…
Special thanks to my roommate Jean Marie Carrier for co-authoring this page and allowing me to include excerpts from a college research paper she did this semester for a history class.
EMOTIONAL INVOLVEMENT

NAME OF ACTIVITY: Role-Play
OBJECTIVE: Getting emotionally involved
SUBJECT/FIELD OF STUDY: Any; language lessons.
AGE GROUP OR LEVEL:
PROCEDURE:
- Materials needed: cards with different roles.
- Students get in pairs. A has a role and B has a different role.
- Ss play this role for 5 minutes, making the effort to use the right structures and vocabulary.
- Teacher stops the role-play and gives them expressions and vocab that might be useful to perform it.
- Ss get in different pairs and now change roles.
- They play out the conversation in class.
- Ex.: housemates discussion, parent-children discussion, etc.
REFERENCE:
NAME OF ACTIVITY: How do I see myself? How do the others see me?
OBJECTIVE: Emotionally involve students
SUBJECT/FIELD OF STUDY: Language
AGE GROUP OR LEVEL: Teenagers and adults
PROCEDURE:
Materials needed: a list of character adjectives (see Appendix 1).
1. Put Sts in pairs but ask them to work on their own for the first part of the activity: Sts choose 8 adjectives that best describe them and 8 adjectives that they think are especially applicable to their partner. Ask them to be honest.
2. Pairs get together to comment and discuss their choices: Student A talks about his own characteristics first and gives examples or reasons for his choice. Then B discusses his choices for A, telling him why he chose the adjectives. Afterwards B starts and then A presents his view of him, again justifying his choices.
EVALUATION: Excellent, 5/5. Students feel proud of themselves, as partners usually describe them in a positive way. Occasionally funny comments are made and are most welcome. Students get closer to each other as they realise their qualities are appreciated amongst their classmates. Mary Kritikou, Greece
OBJECTIVE: Emotionally involve students in the subject
SUBJECT/FIELD OF STUDY: Language, geography, social studies
AGE GROUP OR LEVEL: Teenagers and adults
PROCEDURE:
1. Put students in groups of 3/4 and ask them to make a list of positive and negative qualities about their nationality.
2. Students of different groups get together, compare and discuss the choices of their group, and try to make a single list.
3. Feedback and short class discussion.
Follow-up: If students are interested in national stereotypes, you can continue the activity by asking students how they see American people:
1. Ask students to tell you positive and negative qualities of American people and write them on the board in two lists.
2. Listen to different British people talking about Americans. Sts tick the adjectives from their lists and write any new ones they hear.
For this activity you need the video Americans (see Appendix 2).
REFERENCE:
NAME OF ACTIVITY: The 10 stress busters
OBJECTIVE: Emotionally involve students in the subject
SUBJECT/FIELD OF STUDY: Language, social studies...
AGE GROUP OR LEVEL: All
PROCEDURE:
Materials needed: PowerPoint slides (see text in Appendix 3).
1. Ask the class what stress is; show slide 1 to introduce the topic.
2. Ask the class, "What are stresses students face?" Write their responses on the board so they can see the list of stressors and recognize them in their own lives. (5 minutes)
3. Put students in groups of 3-4 and ask them to make a list of the physical and psychological effects of stress they can think of.
4. Show slides 2 and 3. Sts compare with their lists.
5. Now, in their groups, students think of healthy stress-reduction techniques to overcome stress. Ask them to write their ideas in two lists: new thoughts and new behaviours. Feed back and write students' ideas on the board.
6. Show slides 4 and 5 and ask if they see any ideas not on their list. Clarify any they may not understand. Invite them to add any to their list.
7. Hand out a 3x5 card to the students and ask them to write their personal top 10 Stress Busters.
8. Ask students to decide on one place they will post their top-ten list/menu card as a reminder. Have the class brainstorm all the places they could post their top-ten list. Suggest that they look at the card during the week and report back on the impact the card had.
9. After a week, offer an evaluation to follow up on the impact of the exercise. You may use slide 6 to comment on the activity.
NAME OF ACTIVITY: Me too
OBJECTIVE: Emotionally involve students in the subject. Group building
SUBJECT/FIELD OF STUDY: Language
AGE GROUP OR LEVEL: All
PROCEDURE:
1. It works best for small groups sitting together as a team (4-6 learners). Each learner gets 10 cards or pieces of paper.
2. The first student states something he/she has done which he/she thinks no one else has done. Everyone who has done the same thing puts one card in the middle of the table.
3. Then the second person states something. Everyone who has done it puts a card in the centre. Continue until someone has run out of cards.
NAME OF ACTIVITY: I’VE DONE SOMETHING YOU HAVEN’T
OBJECTIVE: Emotionally involve students in the subject. Group building
SUBJECT/FIELD OF STUDY: Language
AGE GROUP OR LEVEL: All
PROCEDURE:
Similar to the previous activity, “Me Too”. It can be used on the 1st day of class for students to introduce themselves.
1. Have each person introduce themselves and then state something they think no one else in the class has done. If someone else has also done it, the student states something else until he/she finds something that no one else has done.
NAME OF ACTIVITY: DAVID’S SPOOKY VERB MAZE OF TERROR
OBJECTIVE: Getting students emotionally involved
SUBJECT/FIELD OF STUDY:
AGE GROUP OR LEVEL:
PROCEDURE:
All the students have a photocopy of a maze with numbers like the one in the example. Instructions: Start at the box marked IN. If the sentence/theorem/concept/fact/anything you want to work on is correct/true, follow the white arrow to the next circle. If it is incorrect, follow the black arrow. To get out of the maze, you must pass through all 14 circles. In the example, each number has a sentence associated with it, and the students have to decide whether the sentence is correct or incorrect and then follow the appropriate arrows.
REFERENCE: British Council conference “Making Grammar Fun” by David Gatrell

Example sentences (the grammar errors are intentional; students must judge each one):
IN. My parents have met in Cambridge in 1972.
1. I went to Cologne in 2004 because I was flying to Barcelona from there.
2. My friend's having his 30th birthday party in a bar because he's only having a small flat.
3. I saw Happy Go Lucky last weekend, one of the best films Mike Leigh's ever made.
4. I've just started reading Kafka On The Shore by Haruki Murakami.
5. I've been to Germany several times, but I've never spent more than a few hours in Cologne.
6. I wasn't understanding the difference between Spanish and Catalan when I first moved here.
7. It's by far the nicest flat I've ever lived in, much better than where I used to live.
8. Since moving here, I've been giving around a dozen training sessions to Catalan teachers.
9. Nothing I'd ever seen had prepared me for teaching Spanish children!
10. I've had my hair cut a few days ago, and I think it looks pretty good. What do you think?
11. I'd spent the summer with a friend in Maastricht, you see, and Cologne was the nearest airport.
12. My friend Rob works on a play in Cologne at the moment and I'm going to visit him there.
13. I am living in the same flat for over a year now, and I love it.
14. I use to enjoy doing maths puzzles when I was a
Answer key (the path through the maze): IN: Incorrect; 14: Incorrect; 6: Incorrect; 8: Incorrect; 9: Correct; 4: Correct; 13: Incorrect; 7: Correct; 2: Incorrect; 10: Incorrect; 3: Correct; 12: Incorrect; 5: Correct; 1: Correct; 11: Correct; OUT. (A small scoring sketch follows.)
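As an illustration (not part of the original handout), here is a minimal Python sketch of how the answer key above could be encoded and used to score a student's judgments. The dictionary contents mirror the printed key; the function name and structure are assumptions for the example.

```python
# Hypothetical scoring helper for the verb maze; the key below mirrors
# the printed answer key (True = "Correct", False = "Incorrect").
ANSWER_KEY = {
    "IN": False, 14: False, 6: False, 8: False, 9: True,
    4: True, 13: False, 7: True, 2: False, 10: False,
    3: True, 12: False, 5: True, 1: True, 11: True,
}

def score(judgments):
    """Count how many of a student's Correct/Incorrect calls match the key."""
    return sum(judgments.get(k) == v for k, v in ANSWER_KEY.items())

# Example: a student who marks every sentence "Correct" gets 7 of 15 right.
print(score({k: True for k in ANSWER_KEY}))
```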
OBJECTIVE: Getting students emotionally involved
SUBJECT/FIELD OF STUDY: Any
AGE GROUP OR LEVEL:
PROCEDURE:
Time: 5-30 minutes.
Instructions: Divide the students into groups. Give each group a different-coloured sheet of card with a 5x4 grid cut up into cards. Tell them to shuffle the cards, turn them face down, and arrange them in a grid. Students take turns turning over two cards, trying to find matching pairs. The winning student in each group is the one with the most matching pairs. Examples: in the case of English this activity can be used to revise comparative-superlative, infinitive-past simple-past participle, make-do, adjectives-nouns, phrasal verbs-pictures/definitions/translations, etc. (A small code sketch of the card set-up follows.)
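As a quick illustration (not from the original activity), the sketch below shows one way to generate and shuffle such a 5x4 grid of matching pairs in Python; the word pairs are invented placeholders.

```python
# Build a shuffled 5x4 memory grid from matching pairs.
# The pairs here are illustrative infinitive/past-simple examples.
import random

PAIRS = [("go", "went"), ("see", "saw"), ("make", "made"),
         ("do", "did"), ("have", "had"), ("take", "took"),
         ("give", "gave"), ("come", "came"), ("find", "found"),
         ("buy", "bought")]  # 10 pairs -> 20 cards = one 5x4 grid

cards = [card for pair in PAIRS for card in pair]
random.shuffle(cards)
grid = [cards[row * 5:(row + 1) * 5] for row in range(4)]  # 4 rows of 5
for row in grid:
    print(row)
```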
APPENDIX TO ACTIVITIES

Appendix 1. CHARACTER AND PERSONALITY

How do I see myself? How do others see me?

A lot of people have only a few opportunities to get feedback about themselves. In this exercise you will have the opportunity to get some feedback and to discuss it with a partner. Try to be honest!

1. Self-assessment: Of the following characteristics, choose 8 that are particularly applicable to you personally. ADVENTUROUS
2. Partner assessment: Now choose 8 characteristic features which you think are especially applicable to your partner.
Appendix 2. Video: The Americans
Appendix 3. Text to be included in PowerPoint slides:

SLIDE 1--WHAT IS STRESS?
American Medical Association definition: "Any interference that disturbs a person's mental or physical well-being." Physiology: the release of chemicals called cortisol and epinephrine (adrenaline) increases heart rate, metabolism, breathing, muscle tension, and blood pressure (fight or flight). It triggers 1,400 chemical reactions in your body, some continuing for hours after the stressor that caused them has passed.

SLIDE 2--PHYSIOLOGICAL EFFECTS OF STRESS
Too much stress inhibits digestion, growth, tissue repair, and the response of your immune and inflammatory systems. Studies show that:
• People with high stress are twice as likely to develop colds as those with low stress.
• 70-80% of doctor visits are for stress-related illnesses. Ex: high blood pressure, headaches, backaches, indigestion, ulcers, diarrhoea, fatigue, insomnia, physical weakness.

SLIDE 3--EMOTIONAL EFFECTS OF STRESS
• Anger
• Hostility
• Irritability
• Anxiety
• Sadness
• Depression
• Powerlessness
• Total overwhelm

SLIDE 4--HEALTHY REDUCTION TECHNIQUES - CHOOSE NEW BEHAVIORS
• Separate from an external stressor
• Resolve incompletes - take care of it now!
• Keep your finances organized
• Delegate
• Say "no" - understand your boundaries
• Exercise
• Relax
• Breathe deeply
• Get a massage
• Do something (anything!) towards your goals
• Listen to uplifting music
• Laugh

SLIDE 5--HEALTHY REDUCTION TECHNIQUES - CHOOSE NEW THOUGHTS
• Visualize problems and troubles shrinking to a manageable size
• Take a mental vacation
• Challenge pessimistic beliefs
• Focus on the positive
• Find the opportunity in the problem
• Elevate - will this matter one year from now?
• Trust a positive outcome
• Detach
• Reframe
• Visualize success with safety
• Assume the best
• Face the fear
• Identify your hurt
• Forgive

SLIDE 6--EVALUATION
• What did you learn/get out of this activity?
• What did you like about this activity? Dislike?
• How would you improve this activity?
• Where did you post your card? Did you look at it?
• What impact, if any, did it have?
See also: Active galactic nucleus.
A quasar (also known as a quasi-stellar object, abbreviated QSO) is an extremely luminous active galactic nucleus (AGN), in which a supermassive black hole with mass ranging from millions to billions of times the mass of the Sun is surrounded by a gaseous accretion disk. As gas in the disk falls towards the black hole, energy is released in the form of electromagnetic radiation, which can be observed across the electromagnetic spectrum. The power radiated by quasars is enormous: the most powerful quasars have luminosities thousands of times greater than a galaxy such as the Milky Way.
The term originated as a contraction of quasi-stellar [star-like] radio source, because quasars were first identified during the 1950s as sources of radio-wave emission of unknown physical origin, and when identified in photographic images at visible wavelengths they resembled faint star-like points of light. High-resolution images of quasars, particularly from the Hubble Space Telescope, have demonstrated that quasars occur in the centers of galaxies, and that some host-galaxies are strongly interacting or merging galaxies. As with other categories of AGN, the observed properties of a quasar depend on many factors including the mass of the black hole, the rate of gas accretion, the orientation of the accretion disk relative to the observer, the presence or absence of a jet, and the degree of obscuration by gas and dust within the host galaxy.
Quasars are found over a very broad range of distances, and quasar discovery surveys have demonstrated that quasar activity was more common in the distant past. The peak epoch of quasar activity was approximately 10 billion years ago. As of 2017, the most distant known quasar is ULAS J1342+0928 at redshift z = 7.54; light observed from this quasar was emitted when the universe was only 690 million years old. The supermassive black hole in this quasar, estimated at 800 million solar masses, is the most distant black hole identified to date.
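As a quick cross-check of that age figure, one can evaluate the age of the universe at z = 7.54 under a standard cosmology. The short Python sketch below uses astropy's bundled Planck 2018 parameters, which is an assumption on our part rather than the cosmology used in the original measurement.

```python
# Age of the universe at the quasar's redshift, assuming the Planck 2018
# parameters shipped with astropy (the discovery paper used its own fit).
from astropy.cosmology import Planck18

z = 7.54                   # redshift of ULAS J1342+0928
age = Planck18.age(z)      # cosmic age when the observed light was emitted
print(age.to("Myr"))       # roughly 690 Myr, consistent with the text
```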
Between 1917 and 1922, it became clear from work by Heber Curtis, Ernst Öpik and others, that some objects ("nebulae") seen by astronomers were in fact distant galaxies like our own. But when radio astronomy commenced in the 1950s, astronomers detected, among the galaxies, a small number of anomalous objects with properties that defied explanation.
The objects emitted large amounts of radiation at many frequencies, but no source could be located optically; in some cases only a faint, point-like object somewhat like a distant star could be seen. The spectral lines of these objects, which identify the chemical elements of which the object is composed, were also extremely strange and defied explanation. Some of them changed their luminosity very rapidly in the optical range and even more rapidly in the X-ray range, suggesting an upper limit on their size, perhaps no larger than our own Solar System. This implies an extremely high power density. Considerable discussion took place over what these objects might be. They were described as "quasi-stellar [meaning: star-like] radio sources", or "quasi-stellar objects" (QSOs), a name which reflected their unknown nature, and this became shortened to "quasar".
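The size argument here is the standard light-travel-time limit: a source cannot vary coherently on timescales shorter than the time light takes to cross it, so rapid variability caps the size of the emitting region. A small illustrative calculation (the one-hour timescale is an assumed example, not a figure from the text):

```python
# Light-travel-time size limit: size < c * (variability timescale).
c_km_s = 299_792.458            # speed of light in km/s
dt_s = 3600.0                   # assume ~1 hour of X-ray variability
size_km = c_km_s * dt_s         # upper limit on the emitting region
au_km = 1.495978707e8           # one astronomical unit in km
print(f"size < {size_km / au_km:.1f} AU")  # ~7 AU, i.e. Solar-System scale
```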
The first quasars (3C 48 and 3C 273) were discovered in the late 1950s, as radio sources in all-sky radio surveys. They were first noted as radio sources with no corresponding visible object. Using small telescopes and the Lovell Telescope as an interferometer, they were shown to have a very small angular size. Hundreds of these objects were recorded by 1960 and published in the Third Cambridge Catalogue as astronomers scanned the skies for their optical counterparts. In 1963, a definite identification of the radio source 3C 48 with an optical object was published by Allan Sandage and Thomas A. Matthews. Astronomers had detected what appeared to be a faint blue star at the location of the radio source and obtained its spectrum, which contained many unknown broad emission lines. The anomalous spectrum defied interpretation.
British-Australian astronomer John Bolton made many early observations of quasars, including a breakthrough in 1962. Another radio source, 3C 273, was predicted to undergo five occultations by the Moon. Measurements taken by Cyril Hazard and John Bolton during one of the occultations using the Parkes Radio Telescope allowed Maarten Schmidt to find a visible counterpart to the radio source and obtain an optical spectrum using the 200-inch Hale Telescope on Mount Palomar. This spectrum revealed the same strange emission lines. Schmidt was able to demonstrate that these were likely to be the ordinary spectral lines of hydrogen redshifted by 15.8 percent—at the time, a high redshift (with only a handful of much fainter galaxies known with higher redshift). If this was due to the physical motion of the "star", then 3C 273 was receding at an enormous velocity, around 47,000 km/s, far beyond the speed of any known star and defying any obvious explanation. Nor would an extreme velocity help to explain 3C 273's huge radio emissions. If the redshift was cosmological (now known to be correct), the large distance implied that 3C 273 was far more luminous than any galaxy, but much more compact. Also, 3C 273 was bright enough to detect on archival photographs dating back to the 1900s; it was found to be variable on yearly timescales, implying that a substantial fraction of the light was emitted from a region less than 1 light-year in size, tiny compared to a galaxy.
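For readers who want the arithmetic behind the 47,000 km/s figure: it follows from the naive non-relativistic reading v = cz, while the full special-relativistic Doppler formula gives a somewhat smaller velocity. A short sketch:

```python
# Recession velocity implied by 3C 273's redshift z = 0.158.
c = 299_792.458                          # speed of light, km/s
z = 0.158

v_naive = c * z                          # non-relativistic: ~47,400 km/s
ratio = (1 + z) ** 2                     # relativistic Doppler factor squared
v_rel = c * (ratio - 1) / (ratio + 1)    # ~43,700 km/s

print(f"v = cz           : {v_naive:,.0f} km/s")
print(f"v (relativistic) : {v_rel:,.0f} km/s")
```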
Although it raised many questions, Schmidt's discovery quickly revolutionized quasar observation. The strange spectrum of 3C 48 was quickly identified by Schmidt, Greenstein and Oke as hydrogen and magnesium redshifted by 37%. Shortly afterwards, two more quasar spectra in 1964 and five more in 1965 were also confirmed as ordinary light that had been redshifted to an extreme degree.
Although the observations and redshifts themselves were not doubted, their correct interpretation was heavily debated, and Bolton's suggestion that the radiation detected from quasars consisted of ordinary spectral lines from distant, highly redshifted sources receding at extreme velocity was not widely accepted at the time.
An extreme redshift could imply great distance and velocity but could also be due to extreme mass or perhaps some other unknown laws of nature. Extreme velocity and distance would also imply immense power output, which lacked explanation and conflicted with the traditional and then-predominant steady state theory of the universe. The small sizes were confirmed by interferometry, by observing the speed with which the quasar as a whole varied in output, and by their inability to be seen in even the most powerful visible-light telescopes as anything more than faint, starlike points of light. But if they were small and far away in space, their power output would have to be immense, and difficult to explain. Equally, if they were very small and much closer to our galaxy, it would be easy to explain their apparent power output, but less easy to explain their redshifts and lack of detectable movement against the background of the universe.
Schmidt noted that redshift is also associated with the expansion of the universe, as codified in Hubble's law. If the measured redshift was due to expansion, then this would support an interpretation of very distant objects with extraordinarily high luminosity and power output, far beyond any object seen to date. This extreme luminosity would also explain the large radio signal. Schmidt concluded that 3C 273 could either be an individual star around 10 km wide within (or near to) our galaxy, or a distant active galactic nucleus. He stated that a distant and extremely powerful object seemed more likely to be correct.
Schmidt's explanation for the high redshift was not widely accepted at the time. A major concern was the enormous amount of energy these objects would have to be radiating, if they were distant. In the 1960s no commonly-accepted mechanism could account for this. The currently accepted explanation, that it is due to matter in an accretion disc falling into a supermassive black hole, was only suggested in 1964 by Edwin Salpeter and Yakov Zel'dovich, and even then it was rejected by many astronomers, because in the 1960s, the existence of black holes was still widely seen as theoretical and too exotic, and because it was not yet confirmed that many galaxies (including our own) have supermassive black holes at their center. The strange spectral lines in their radiation, and the speed of change seen in some quasars, also suggested to many astronomers and cosmologists that the objects were comparatively small and therefore perhaps bright, massive and not far away; accordingly that their redshifts were not due to distance or velocity, and must be due to some other reason or an unknown process, meaning that the quasars were not really powerful objects nor at extreme distances, as their redshifted light implied. A common alternative explanation was that the redshifts were caused by extreme mass (gravitational redshifting explained by general relativity) and not by extreme velocity (explained by special relativity).
Various explanations were proposed during the 1960s and 1970s, each with their own problems. It was suggested that quasars were nearby objects, and that their redshift was not due to the expansion of space (special relativity) but rather to light escaping a deep gravitational well (general relativity). This would require a massive object, which would also explain the high luminosities. However a star of sufficient mass to produce the measured redshift would be unstable and in excess of the Hayashi limit. Quasars also show forbidden spectral emission lines which were previously only seen in hot gaseous nebulae of low density, which would be too diffuse to both generate the observed power and fit within a deep gravitational well. There were also serious concerns regarding the idea of cosmologically distant quasars. One strong argument against them was that they implied energies that were far in excess of known energy conversion processes, including nuclear fusion. There were some suggestions that quasars were made of some hitherto unknown form of stable antimatter regions and that this might account for their brightness. Others speculated that quasars were a white hole end of a wormhole, or a chain reaction of numerous supernovae.
Eventually, starting from about the 1970s, many lines of evidence (including the first X-ray space observatories, knowledge of black holes and modern models of cosmology) gradually demonstrated that the quasar redshifts are genuine and due to the expansion of space, that quasars are in fact as powerful and as distant as Schmidt and some other astronomers had suggested, and that their energy source is matter from an accretion disc falling onto a supermassive black hole. This included crucial evidence from optical and X-ray viewing of quasar host galaxies, the finding of 'intervening' absorption lines which explained various spectral anomalies, observations from gravitational lensing, Peterson and Gunn's 1971 finding that galaxies containing quasars showed the same redshift as the quasars, and Kristian's 1973 finding that the "fuzzy" surroundings of many quasars were consistent with a less luminous host galaxy.
This model also fits well with other observations that suggest many or even most galaxies have a massive central black hole. It would also explain why quasars are more common in the early universe: as a quasar draws matter from its accretion disc, there comes a point when there is less matter nearby, and energy production falls off or ceases as the quasar becomes a more ordinary type of galaxy.
The accretion-disc energy-production mechanism was finally modeled in the 1970s, and black holes were also directly detected (including evidence showing that supermassive black holes could be found at the centers of our own and many other galaxies), which resolved the concerns that quasars were too luminous to be very distant objects and that no suitable energy mechanism could be confirmed to exist in nature. By 1987 it was "well accepted" that this was the correct explanation for quasars, and the cosmological distance and energy output of quasars was accepted by almost all researchers.
Later it was found that not all quasars have strong radio emission; in fact only about 10% are "radio-loud". Hence the name 'QSO' (quasi-stellar object) is used (in addition to "quasar") to refer to these objects, including the 'radio-loud' and the 'radio-quiet' classes. The discovery of the quasar had large implications for the field of astronomy in the 1960s, including drawing physics and astronomy closer together.
Quasars inhabit the center of active galaxies, and are among the most luminous, powerful, and energetic objects known in the universe, emitting up to a thousand times the energy output of the Milky Way, which contains 200 billion–400 billion stars. This radiation is emitted across the electromagnetic spectrum, almost uniformly, from X-rays to the far-infrared with a peak in the ultraviolet-optical bands, with some quasars also being strong sources of radio emission and of gamma-rays. With high-resolution imaging from ground-based telescopes and the Hubble Space Telescope, the "host galaxies" surrounding the quasars have been detected in some cases. These galaxies are normally too dim to be seen against the glare of the quasar, except with special techniques. Most quasars, with the exception of 3C 273 whose average apparent magnitude is 12.9, cannot be seen with small telescopes.
Quasars are believed—and in many cases confirmed—to be powered by accretion of material into supermassive black holes in the nuclei of distant galaxies, as suggested in 1964 by Edwin Salpeter and Yakov Zel'dovich. Light and other radiation cannot escape from within the event horizon of a black hole. The energy produced by a quasar is generated outside the black hole, by gravitational stresses and immense friction within the material nearest to the black hole, as it orbits and falls inward. The huge luminosity of quasars results from the accretion discs of central supermassive black holes, which can convert between 6% and 32% of the mass of an object into energy, compared to just 0.7% for the p-p chain nuclear fusion process that dominates the energy production in Sun-like stars. Central masses of 10^5 to 10^9 solar masses have been measured in quasars by using reverberation mapping. Several dozen nearby large galaxies, including our own Milky Way galaxy, that do not have an active center and do not show any activity similar to a quasar, are confirmed to contain a similar supermassive black hole in their nuclei (galactic center). Thus it is now thought that all large galaxies have a black hole of this kind, but only a small fraction have sufficient matter in the right kind of orbit at their center to become active and power radiation in such a way as to be seen as quasars.
This also explains why quasars were more common in the early universe, as this energy production ends when the supermassive black hole consumes all of the gas and dust near it. This means that it is possible that most galaxies, including the Milky Way, have gone through an active stage, appearing as a quasar or some other class of active galaxy that depended on the black hole mass and the accretion rate, and are now quiescent because they lack a supply of matter to feed into their central black holes to generate radiation.
The matter accreting onto the black hole is unlikely to fall directly in, but will have some angular momentum around the black hole that will cause the matter to collect into an accretion disc. Quasars may also be ignited or re-ignited when normal galaxies merge and the black hole is infused with a fresh source of matter. In fact, it has been suggested that a quasar could form when the Andromeda Galaxy collides with our own Milky Way galaxy in approximately 3–5 billion years.
In the 1980s, unified models were developed in which quasars were classified as a particular kind of active galaxy, and a consensus emerged that in many cases it is simply the viewing angle that distinguishes them from other active galaxies, such as blazars and radio galaxies.
The highest redshift quasar known is ULAS J1342+0928, with a redshift of 7.54, which corresponds to a comoving distance of approximately 29.36 billion light-years from Earth (these distances are much larger than the distance light could travel in the universe's 13.8 billion year history because space itself has also been expanding).
More than 200,000 quasars are known, most from the Sloan Digital Sky Survey. All observed quasar spectra have redshifts between 0.056 and 7.54 (as of 2017). Applying Hubble's law to these redshifts, it can be shown that they are between 600 million and 29.36 billion light-years away (in terms of comoving distance). Because of the great distances to the farthest quasars and the finite velocity of light, they and their surrounding space appear as they existed in the very early universe.
The power of quasars originates from supermassive black holes that are believed to exist at the core of most galaxies. The Doppler shifts of stars near the cores of galaxies indicate that they are rotating around tremendous masses with very steep gravity gradients, suggesting black holes.
Although quasars appear faint when viewed from Earth, they are visible from extreme distances, being the most luminous objects in the known universe. The brightest quasar in the sky is 3C 273 in the constellation of Virgo. It has an average apparent magnitude of 12.9 (bright enough to be seen through a medium-size amateur telescope), but it has an absolute magnitude of −26.7. From a distance of about 33 light-years, this object would shine in the sky about as brightly as our sun. This quasar's luminosity is, therefore, about 4 trillion (4 × 10^12) times that of the Sun, or about 100 times that of the total light of giant galaxies like the Milky Way. This assumes the quasar is radiating energy in all directions, but the active galactic nucleus is believed to be radiating preferentially in the direction of its jet. In a universe containing hundreds of billions of galaxies, most of which had active nuclei billions of years ago but are only seen as they were then, it is statistically certain that thousands of energy jets should be pointed toward the Earth, some more directly than others. In many cases it is likely that the brighter the quasar, the more directly its jet is aimed at the Earth. Such quasars are called blazars.
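As a rough cross-check of the luminosity figure quoted above (a back-of-envelope calculation using the standard magnitude relation and the Sun's absolute magnitude of about +4.8): L_quasar / L_Sun = 10^(0.4 × (4.8 − (−26.7))) = 10^12.6 ≈ 4 × 10^12, which is the "4 trillion times the Sun" value given in the text.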
The hyperluminous quasar APM 08279+5255 was, when discovered in 1998, given an absolute magnitude of −32.2. High resolution imaging with the Hubble Space Telescope and the 10 m Keck Telescope revealed that this system is gravitationally lensed. A study of the gravitational lensing of this system suggests that the light emitted has been magnified by a factor of ~10. It is still substantially more luminous than nearby quasars such as 3C 273.
Quasars were much more common in the early universe than they are today. This discovery by Maarten Schmidt in 1967 was early strong evidence against Steady State cosmology and in favor of the Big Bang cosmology. Quasars show the locations where massive black holes are growing rapidly (via accretion). These black holes grow in step with the mass of stars in their host galaxy in a way not understood at present. One idea is that jets, radiation and winds created by the quasars, shut down the formation of new stars in the host galaxy, a process called 'feedback'. The jets that produce strong radio emission in some quasars at the centers of clusters of galaxies are known to have enough power to prevent the hot gas in those clusters from cooling and falling onto the central galaxy.
Quasars' luminosities are variable, with time scales that range from months to hours. This means that quasars generate and emit their energy from a very small region, since each part of the quasar would have to be in contact with other parts on such a time scale as to allow the coordination of the luminosity variations. This would mean that a quasar varying on a time scale of a few weeks cannot be larger than a few light-weeks across. The emission of large amounts of power from a small region requires a power source far more efficient than the nuclear fusion that powers stars. The conversion of gravitational potential energy to radiation by matter falling into a black hole converts between 6% and 32% of the mass to energy, compared to 0.7% for the conversion of mass to energy in a star like our sun. It is the only process known that can produce such high power over a very long term. (Stellar explosions such as supernovae and gamma-ray bursts, and direct matter-antimatter annihilation, can also produce very high power output, but supernovae only last for days, and the universe does not appear to have had large amounts of antimatter at the relevant times.)
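The size limit invoked here is the light-crossing argument: a source cannot vary coherently on a time scale shorter than the time light takes to cross it, so its radius satisfies roughly R ≤ c × Δt. As a worked example, for a variation time scale Δt of two weeks (about 1.2 × 10^6 seconds), R ≤ (3 × 10^5 km/s) × (1.2 × 10^6 s) ≈ 3.6 × 10^11 km, that is, two light-weeks, or a few thousand times the Earth-Sun distance.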
Since quasars exhibit all the properties common to other active galaxies such as Seyfert galaxies, the emission from quasars can be readily compared to those of smaller active galaxies powered by smaller supermassive black holes. To create a luminosity of 10^40 watts (the typical brightness of a quasar), a supermassive black hole would have to consume the material equivalent of 10 stars per year. The brightest known quasars devour 1000 solar masses of material every year. The largest known is estimated to consume matter equivalent to 10 Earths per second. Quasar luminosities can vary considerably over time, depending on their surroundings. Since it is difficult to fuel quasars for many billions of years, after a quasar finishes accreting the surrounding gas and dust, it becomes an ordinary galaxy.
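The accretion figure can be checked against the efficiency numbers quoted earlier. Writing L = η × (dm/dt) × c² and solving for the accretion rate gives dm/dt = L / (η c²); for L = 10^40 watts and η = 0.1 this is about 1.1 × 10^24 kg/s, or roughly 17 solar masses per year, and with η closer to 0.2 the rate drops to about 10 solar masses per year, in line with the "10 stars per year" estimate above.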
Radiation from quasars is partially 'nonthermal' (i.e., not due to black body radiation), and approximately 10 percent are observed to also have jets and lobes like those of radio galaxies that also carry significant (but poorly understood) amounts of energy in the form of particles moving at relativistic speeds. Extremely high energies might be explained by several mechanisms (see Fermi acceleration and Centrifugal mechanism of acceleration). Quasars can be detected over the entire observable electromagnetic spectrum including radio, infrared, visible light, ultraviolet, X-ray and even gamma rays. Most quasars are brightest in their rest frame near the 121.6 nm Lyman-alpha emission line of hydrogen, but due to the tremendous redshifts of these sources, that peak luminosity has been observed as far to the red as 900.0 nm, in the near infrared. A minority of quasars show strong radio emission, which is generated by jets of matter moving close to the speed of light. When such a jet is viewed nearly head-on, the quasar appears as a blazar and often has regions that seem to move away from the center faster than the speed of light (superluminal expansion). This is an optical illusion due to the properties of special relativity.
Quasar redshifts are measured from the strong spectral lines that dominate their visible and ultraviolet emission spectra. These lines are brighter than the continuous spectrum. They exhibit Doppler broadening corresponding to mean speeds of several percent of the speed of light. Fast motions strongly indicate a large mass. Emission lines of hydrogen (mainly of the Lyman series and Balmer series), helium, carbon, magnesium, iron and oxygen are the brightest lines. The atoms emitting these lines range from neutral to highly ionized, i.e., stripped of many of their electrons. This wide range of ionization shows that the gas is highly irradiated by the quasar, not merely hot, and not by stars, which cannot produce such a wide range of ionization.
Like all (unobscured) active galaxies, quasars can be strong X-ray sources. Radio-loud quasars can also produce X-rays and gamma rays by inverse Compton scattering of lower-energy photons by the radio-emitting electrons in the jet.
Iron quasars show strong emission lines resulting from low ionization iron (FeII), such as IRAS 18508-7815.
Quasars also provide some clues as to the end of the Big Bang's reionization. The oldest known quasars (redshift ≈ 6) display a Gunn-Peterson trough and have absorption regions in front of them indicating that the intergalactic medium at that time was neutral gas. More recent quasars show no absorption region but rather their spectra contain a spiky area known as the Lyman-alpha forest; this indicates that the intergalactic medium has undergone reionization into plasma, and that neutral gas exists only in small clouds.
The intense production of ionizing ultraviolet radiation is also significant, as it would provide a mechanism for reionization to occur as galaxies form. Despite this, current theories suggest that quasars were not the primary source of reionization; the primary causes of reionization were probably the earliest generations of stars, known as Population III stars (possibly 70%), and dwarf galaxies (very early small high-energy galaxies) (possibly 30%).
Quasars show evidence of elements heavier than helium, indicating that galaxies underwent a massive phase of star formation, creating population III stars between the time of the Big Bang and the first observed quasars. Light from these stars may have been observed in 2005 using NASA's Spitzer Space Telescope, although this observation remains to be confirmed.
The taxonomy of quasars includes various subtypes representing subsets of the quasar population having distinct properties.
Because quasars are extremely distant, bright, and small in apparent size, they are useful reference points in establishing a measurement grid on the sky. The International Celestial Reference System (ICRS) is based on hundreds of extra-galactic radio sources, mostly quasars, distributed around the entire sky. Because they are so distant, they are apparently stationary to our current technology, yet their positions can be measured with the utmost accuracy by very-long-baseline interferometry (VLBI). The positions of most are known to 0.001 arcsecond or better, which is orders of magnitude more precise than the best optical measurements.
A grouping of two or more quasars on the sky can result from a chance alignment where the quasars are not physically associated, actual physical proximity, or the effects of gravity bending the light of a single quasar into two or more images via gravitational lensing.
When two quasars appear to be very close to each other as seen from Earth (separated by a few arcseconds or less), they are commonly referred to as a "double quasar". When the two are also close together in space (i.e. observed to have similar redshifts), they are termed a "quasar pair", or as a "binary quasar" if they are close enough that their host galaxies are likely to be physically interacting.
As quasars are overall rare objects in the universe, the probability of three or more separate quasars being found near the same physical location is very low, and determining whether the system is closely separated physically requires significant observational effort. The first true triple quasar was found in 2007 by observations at the W. M. Keck Observatory on Mauna Kea, Hawaii. LBQS 1429-008 (or QQQ J1432-0106) was first observed in 1989 and at the time was found to be a double quasar. When astronomers discovered the third member, they confirmed that the sources were separate and not the result of gravitational lensing. This triple quasar has a redshift of z = 2.076. The components are separated by an estimated 30–50 kpc, which is typical of interacting galaxies. In 2013, the second true triplet of quasars, QQQ J1519+0627, was found at redshift z = 1.51, the whole system fitting within a physical separation of 25 kpc. The first true quadruple quasar system was discovered in 2015 at a redshift z = 2.0412 and has an overall physical scale of about 200 kpc.
A multiple-image quasar is a quasar whose light undergoes gravitational lensing, resulting in double, triple or quadruple images of the same quasar. The first such gravitational lens to be discovered was the double-imaged quasar Q0957+561 (or Twin Quasar) in 1979. An example of a triply lensed quasar is PG 1115+08. Several quadruple-image quasars are known, including the Einstein Cross and the Cloverleaf Quasar, with the first such discoveries happening in the mid-1980s.
|
In an experiment modeled on the classic “Young’s double slit experiment” and published in the journal Nature Nanotechnology, researchers have powerfully reinforced the understanding that surface plasmon polaritons (SPPs) move as waves and follow analogous rules. The demonstration reminds researchers and electronics designers that although SPPs move along a metal surface, rather than inside a wire or an optical fiber, they cannot magically overcome the size limitations of conventional optics.
Touted as the next wave of electronics miniaturization, plasmonics describes the movement of SPPs -- a type of electromagnetic wave that is bound to a metal surface by its interaction with surface electrons. The emerging technology could provide a bridge between nanoscale electronics and photonics.
Conventional electronic devices, in which metal wires carry electrical signals, can be manufactured at the nanoscale but incur long time delays. Photonic – or fiber optic – devices transmit a signal at the speed of light but cannot be miniaturized below a size limit imposed by the wavelength of light that they carry.
Plasmonic devices seem to combine the best of both technologies. Because SPPs are electromagnetic waves they move at near light-speed, but because they ride the surface of wires, it seemed they might circumvent the diffraction limit, which restricts the size of fiber optics.
“We know that these are still essentially electromagnetic waves and therefore they must still obey a diffraction limit,” says Rashid Zia, assistant professor of engineering at Brown University. “The key is to define a set of solutions in a way that is analogous to other systems so that we can derive that limit.”
Zia and Mark Brongersma, an assistant professor of engineering at Stanford University, set out to find an experiment that could test the limits of plasmonic technology and also shed light on the principles that control this still-mysterious kind of wave.
Young’s double slit experiment is usually performed as a demonstration of optical diffraction, although recent variations have also been used to test the quantum behavior of electrons, atoms and even molecules.
In the classic double slit experiment, students shine a light onto a screen through an opaque barrier with two slits in it. When one slit is covered, the pattern of light is brightest directly in front of the slit. When light passes through both slits, a series of light and dark lines appear instead. The light forms a bright line between the slits, where the peaks of the waves reinforce one another and a distinct pattern of darker lines where the peaks and valleys cancel each other out. It’s an elegant demonstration of the wave side of light’s dual nature.
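For reference, the standard quantitative condition behind this pattern: for slits separated by a distance d, bright fringes appear where the path difference is a whole number of wavelengths, d sin θ = mλ (m = 0, ±1, ±2, ...), and dark fringes where d sin θ = (m + 1/2)λ. The analogous relation, with the SPP wavelength in place of the free-space wavelength, is the kind of prediction the researchers' optics-style framework yields for surface waves.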
In their experiment, Zia and Brongersma generated an SPP and passed it across two narrow bridges of gold film on a glass slide. As the waves exited onto a broad sheet of gold film, they diffracted to create interference patterns analogous to those seen in Young’s double slit experiment. Using a simple analytical model for the way SPPs are guided along individual metal stripes, the researchers predicted the pattern of diffraction they would see.
Because SPPs are not in the spectrum of visible light, they don’t just show up on a screen. Zia and Brongersma precisely measured the diffraction pattern using a photon scanning tunneling microscope. The pattern they saw closely matched what they predicted using their proposed framework, which is based on an analogy to conventional optics.
The results of this experiment may disappoint some researchers who have hoped that SPPs traveling along metal waveguides could allow circuit design to move seamlessly from electronics to photonics. Instead, Zia sees developing -- and challenging -- a comprehensive theory as the first step toward devising structures uniquely suited to controlling the movement of SPPs.
“You can couple stripes, you can make slits, you can make all sorts of other geometries that might work,” said Zia. “But to see that potential through, you have to have a clear analytical theory and a way to test it.”
Source: Brown University
|
A quick overview of microcontroller architectures, Wendy Ju & Michael Gurevich
A microcontroller is essentially a small computer on a chip. Like any computer, it has memory, and can be programmed to do calculations, receive input, and generate output. Unlike a PC, it incorporates memory, a CPU, peripherals and I/O interfaces into a single chip. Microcontrollers are ubiquitous, found in almost every electronic device from ovens to iPods.
The AVR has a clock that "ticks" at a regular rate. A variety of clock types and speeds are available, including a built-in oscillator circuit or external crystals. We use a 14.746 MHz external crystal oscillator, the maximum clock speed the AVR will run at. Note how much slower this is than a PC – and remember this when you write your code! Microcontrollers differ in the number of clock ticks it actually takes to execute an instruction. This is why we use the term MIPS, or million instructions per second, to better describe the "speed" of a CPU. One feature of the AVR is that most instructions are executed in one clock cycle, so it runs at around 1 MIPS per MHz. Other microcontrollers running at 14.746 MHz may achieve fewer MIPS.
Computer architecture is a huge topic in itself. We will just develop a general picture of how the AVR microcontroller works. It has a Harvard architecture. This means that the program and data are stored in separate memory spaces which are accessible simultaneously. Therefore, while one instruction is being executed, the next one can be fetched. This is partly how one instruction per clock cycle can be achieved. With other microcontroller architectures, there is only one path to memory, so data accesses and program instruction fetches must alternate.
The AVR's program is stored in nonvolatile (persistent on power-down) programmable Flash memory. It is divided into 2 sections. The first section is the Application Flash section. It is where the AVR's program is stored. The second section is called the Boot Flash section and can be set to execute immediately when the device is powered up. The Flash has a special property of being able to write over itself, which may seem like a stupid thing to do.
But the Boot Flash section can come in handy if it is programmed with a small program that takes data from the serial port and writes it into the Application Flash section. Such a program is called a bootloader, and it allows the device to be programmed from a regular serial port, rather than using a complicated or expensive programmer circuit. For commercial devices, it makes so-called “firmware upgrades” very easy. We will use a bootloader on our ATmega32. This program is already loaded on the chip. When the device is powered on, it simply waits for a few seconds to see if programming instructions are coming from the serial port. If they don't come, it branches to the top of the Application Flash section and whatever program resides there will run normally.
Figure 1 - AVR's Flash Program Memory (after ATmega32 datasheet p. 14)
All the code you write is linked, assembled and otherwise compiled into hex code (also known as byte code), which is a series of hexadecimal numbers that are interpreted as instructions by the microcontroller. The beauty of a high-level language like C is that you don't need to understand the details of the microcontroller architecture.
The AVR's data memory is volatile RAM. It is organized in 8-bit registers.
All information in the microcontroller, from the program memory, the timer information, to the state on any of input or output pins, is stored in registers. Registers are like shelves in the bookshelf of processor memory. In an 8-bit processor, like the ATmega32 we are using, the shelf can hold 8 books, where each book is a one bit binary number, a 0 or 1. Each shelf has an address so that the controller knows where to find it. Some registers, such as those in RAM, are for storing general data. Others have specific functions like controlling the A/D converters or timers, or setting and getting values on the I/O pins. Registers in RAM can be read or written. Other registers may be read-only or write-only. In fact, some specialized registers may have certain bits that are only to be read, and certain bits that are only to be written.
Bits and bytes
One byte is made up of 8 bits, and there are 256 unique possible values for each byte. All the information in the microcontroller is stored in byte-size chunks; since it would be tedious to write out all the 1's and 0's in binary format, we represent each byte of information as a two-digit hexadecimal number. For example, 11110011 in binary = 243 in decimal = F3 in hexadecimal. We usually write 0xF3 to clue people in that the numbers are in base 16. Memory address locations are normally given in hexadecimal, but with a preceding $ to indicate an address, as opposed to a value, e.g. $03DF. Note that with 2 KB of RAM, we need 2 bytes to specify all the addresses in the ATmega32.
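The same equivalences are easy to check in C; a minimal sketch (standard C, values taken from the example above):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t value = 0xF3;    /* 1111 0011 in binary, 243 in decimal */
        printf("hex 0x%02X = decimal %u\n", (unsigned)value, (unsigned)value);

        /* With 2 KB of RAM, one byte (256 values) cannot cover every
           location, so addresses such as $03DF need two bytes: */
        uint16_t address = 0x03DF;
        printf("address $%04X\n", (unsigned)address);
        return 0;
    }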
Figure 2 - Registers in RAM
Registers on the AVR Microcontroller
The data registers on the AVR, like the Flash program memory, are organized as a continuous set of addresses, but with different specialized sections. Unlike the Flash program memory, these different sections refer physically to completely different types of memory. The largest set of registers refer to the RAM. This is volatile memory for storing data such as variables or strings to be outputted.
Another set of registers are the general-purpose working registers. The ATmega32 has 32 of these, and they hold a small amount of data which is directly acted on by the Arithmetic Logic Unit (ALU) when it performs calculations. When the program gives an instruction to add 2 numbers, these must be fed into the ALU from the working registers. If the numbers are in RAM, they must first be moved into the ALU. A good compiler will make sure that transfers from RAM to the working registers are minimized, because RAM access is generally slow.
The final set of registers have special functions for doing things like I/O, timing, or analog-to-digital conversion. All of these registers share a Data Bus to transfer data between them. The output of the ALU is also on this Bus, so that the results of instructions can be used to store new values in RAM, create output, put new values in the working registers or start a timer, for example. The data bus can carry 8 bits of information at a time.
In order to read and write to the input and output pins on the microcontroller, however, you will need to know a little about the input and output architecture. The reason that the 32 IO pins of the ATmega32 are divided into 4 ports of 8 pins is that this allows the state of the pins to be represented by 4 bytes, named PORTA, PORTB, PORTC and PORTD. Each physical IO pin corresponds to a logical bit on the port. The value of pin3 on Port D lives in slot 3 on the PORTD bookshelf.
Setting and Clearing IO
The term for assigning a logical high value (1) to a pin is setting, and the term for assigning a logical low value (0) is clearing. You can write to the IO registers one byte at a time, for example, by assigning the value 0xBB to PORTB, or by assigning one bit at a time, clearing PB2 (portB bit 2) and PB6, and setting the rest.
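In C with avr-libc, the two styles look roughly like this (a sketch, assuming the standard <avr/io.h> register and bit names for the ATmega32):

    #include <avr/io.h>

    void io_write_examples(void)
    {
        /* Whole byte at once: 0xBB = 1011 1011, so bits 6 and 2 end up clear */
        PORTB = 0xBB;

        /* One bit at a time: set everything, then clear just PB2 and PB6 */
        PORTB = 0xFF;
        PORTB &= ~((1 << PB2) | (1 << PB6));
    }

Either way PORTB finishes holding 0xBB; the read-modify-write form is the usual idiom when you only want to touch particular bits.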
Not all of the I/O registers are physical pins. Since the IO pins are configurable to be either input or output, the controller needs some place to store the directionality of each bit. These are stored in the Data Direction Registers. Like all the other registers, the DDRs hold 1's and 0's, but here they indicate whether the corresponding port pin is an input (0) or output (1). This register acts as an I/O port librarian, controlling who is allowed to change the data on the shelves, bit by bit. So, if I set DDRA to 0xF0 (a.k.a. 1111 0000), this means that bits 7-4 on PORTA are set to output, and bits 3-0 are set to input. If PORTA was originally set to 0xFF (a.k.a. 1111 1111) and I subsequently write 0xAA (a.k.a. 1010 1010) to PORTA, I should read 0xAF back on the pins (on the AVR, pin states are read from the PINA register rather than PORTA).
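The same example as an avr-libc sketch (again assuming the standard register names):

    #include <avr/io.h>

    void ddr_example(void)
    {
        DDRA  = 0xF0;        /* bits 7-4 output, bits 3-0 input */
        PORTA = 0xAA;        /* drives 1010 on the output nibble; on the
                                input nibble it only configures pull-ups */
        uint8_t pins = PINA; /* what you read here depends on whatever is
                                actually wired to the four input pins */
        (void)pins;
    }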
The pins on the different ports also have different features, much as each of the Superfriends had different super powers. For instance, PORT A can be bi-directional IO, but it can also be used for analog input. This functionality can be very useful for reading sensor data. To enable the switching between analog and digital inputs, a special register called ADCSR (Analog to Digital Control & Status Register) is needed; each bit in ADCSR sets some aspect of the A/D operations. You do not need to set these bits explicitly, but you need to be aware that you should run the library commands (such as a2dInit()) in the appropriate library (a2d.h) to tell the processor to set these registers. A more complete description of all the ports and their special magical powers can be found in the ATmega32 summary.
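For the curious, here is a register-level sketch of a single analog read (names as in the ATmega32 datasheet and avr-libc, where the control register referred to above as ADCSR appears as ADCSRA; the a2d.h library wraps essentially these steps for you):

    #include <avr/io.h>

    uint16_t adc_read(uint8_t channel)
    {
        ADMUX  = (1 << REFS0) | (channel & 0x07);   /* AVcc reference, channel 0-7 */
        ADCSRA = (1 << ADEN)                        /* enable the ADC */
               | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0);
                               /* clock/128: ~115 kHz at 14.746 MHz, inside the
                                  recommended 50-200 kHz range */
        ADCSRA |= (1 << ADSC);                      /* start one conversion */
        while (ADCSRA & (1 << ADSC))
            ;                                       /* ADSC clears when done */
        return ADCW;                                /* 10-bit result */
    }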
Figure 3 - AVR Architecture (after ATmega32 datasheet p. 6)
Program Execution

Hex code is what is stored in the Flash program memory. When the program runs, the hex code is accessed by the Program Counter. It loads the next instruction into a special Instruction Register. The operands of the instruction are fed into the ALU, while the instruction itself is decoded and then executed by the ALU.
|
Damarrio C. Holloway
Loci of Centers
Problem #4: Discuss the loci of the centers of the tangent circles for all cases you construct.
The investigation begins with the problem of constructing a tangent circle (Blue) to two given circles (Green). We first discover that the center of the desired circle lies on the line formed by the center of the given circle and the specified point 'R' on it.
Figure 2 below shows the construction of a traced point, which is the center of the desired tangent circle. This trace (Red) displays the location or loci of all centers of the different circles (Blue) that are tangent to the two given circles (Green).
Figure 2 Figure 3
Figure 3 displays the elliptical shape, which is the locus of all of the centers. The green line in the picture traces an envelope of lines, always tangent to the ellipse formed by the locus of centers.
Now let's discover another set of circles tangent to the two given circles by using similar, but different, tactics.
In Figure 4, the small internal circle has the same radius as the dashed circle whose center is the designated point on the larger circle. The first tangent circle was formed by the perpendicular bisector of the segment connecting the center of the smaller given circle and the external point of the copied dashed circle that is collinear with the center of the large given circle and the designated point. The center of the above tangent circle (Blue) still lies on the line k containing the center and designated point of the larger given circle. Instead of using the external intersection point on the copied circle, we use the segment formed by the internal intersection of line k with the dashed circle c2 and the center of the smaller green circle c1. A perpendicular bisector was formed through the midpoint of this segment, creating the desired intersection with line k, which is the center of our tangent circle.
Figure 5 Figure 6
Figure 5 and Figure 6 show the path the centers will travel when the tangent circle approaches different tangent points on the given circles. This locus of centers again forms an ellipse.
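A short derivation shows why an ellipse must appear. Let the given circles have centers C1 and C2 and radii r1 > r2, with the small circle inside the large one, and let a tangent circle have center P and radius r. Internal tangency to the large circle gives PC1 = r1 - r, while external tangency to the small circle gives PC2 = r + r2; adding these, PC1 + PC2 = r1 + r2, a constant, which is exactly the defining property of an ellipse with foci C1 and C2. For the second family of tangent circles, the ones enclosing the smaller given circle, PC2 = r - r2 and the sum is r1 - r2, again constant, so the locus is again an ellipse with the same foci.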
We have shown tangent circles for two given circles, where a smaller circle lies within the interior of another circle, not sharing the same center. Now let's explore two different tangent circles for the case where the given circles intersect.
In Figure 6, above, the given circles are again in green, and they intersect at two different points. They have two different tangent circles: an interior tangent circle in Blue and an exterior tangent circle in Red. As in the earlier examples, the centers of the tangent circles lie on the line formed by the center of the large given circle and the designated point of intersection.
Let's look at the locus of the centers of the tangent circles.
In Figure 7, the locus of centers for the tangent circles is an ellipse. In Figure 8, the locus of centers is an open curve that looks parabolic; since here the difference of the distances to the two given centers is constant, it is in fact a branch of a hyperbola. Click on the picture titles to animate objects in GSP. The designated tangent point on the given circle should be the animated point, while tracing the centers of the tangent circles.
|
Wide Area Networks
The following paper is a report on the characteristics and construction of a Wide Area Network (WAN). It will start by giving a basic description of a Wide Area Network and then go on to investigate, in detail, the actual structure and workings of a WAN. The intended purpose of the report is to allow individuals with or without prior knowledge of the researched area to gain a better understanding of the subject.
Characteristics of a WAN
The term WAN (Wide Area Network) refers to a network which covers a large geographical area and uses communications circuits to connect the intermediate nodes. WANs are used across a city, a country or even around the globe. Typical transmission rates are 2 Mbps, 34 Mbps, 45 Mbps, 155 Mbps and 625 Mbps, but can be even higher. A WAN consists, basically, of two or more LANs (Local Area Networks) connected to each other using WAN technologies over a high-speed communication link.
The largest and probably the best example of a Wide Area Network is the Internet. WAN technologies generally function at the lower three layers of the OSI Model: the Physical Layer, the Data Link Layer and the Network Layer (all layers are shown in order below). Bridges, routers, multiplexers and other devices are used for correct data transmission. Some WANs are extremely widespread (global), but most do not provide true global coverage. Organizations supporting WANs using the Internet Protocol are known as Network Service Providers (NSPs). These form the core of the Internet.
The OSI (Open System Interconnection) Model:
Addressing and routing
On a Wide Area Network, the routing of data is done mainly using TCP/IP (Transmission Control Protocol/Internet Protocol).
TCP is the protocol, which "enables two hosts to establish a connection and exchange streams of data. TCP guarantees delivery of data and also guarantees that packets will be delivered in the same order in which they were sent."
Every device on a network, or indeed any device in general, has its own individual IP Address. An IP Address is a unique number in the format of four sets of digits, each ranging from 0 to 255, for example 18.104.22.168. Hardware used to assist the correct routing and flow of network traffic would consist of devices like hubs, bridges, routers, switches and multiplexers.
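A small C sketch of validating a dotted-quad string (using the standard POSIX inet_pton; the address literal is just the placeholder from the text):

    #include <stdio.h>
    #include <arpa/inet.h>

    int main(void)
    {
        const char *candidate = "18.104.22.168";
        struct in_addr addr;

        /* inet_pton returns 1 when the string parses as a valid IPv4 address */
        if (inet_pton(AF_INET, candidate, &addr) == 1)
            printf("%s is a well-formed IPv4 address\n", candidate);
        else
            printf("%s is not a valid IPv4 address\n", candidate);
        return 0;
    }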
A number of techniques are used to detect whether there is any other traffic on the network before data is sent. Devices that are communicating data across a network rely on congestion control to determine when to send or delay the transmission of data packets in order to prevent contention and collisions. The main technology used for doing this is CSMA/CD (Carrier Sense Multiple Access with Collision Detection). If a collision is detected, the sending device waits for a certain length of time and then attempts to re-send the message.
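The retry behaviour described above is conventionally implemented as truncated binary exponential backoff. A simplified C sketch (the carrier-sense and transmit calls are placeholders for whatever the network hardware provides):

    #include <stdlib.h>

    int  channel_busy(void);                         /* placeholder hooks */
    int  transmit_detecting_collision(const void *f, int len); /* 0 = ok, 1 = collision */
    void wait_slot_times(int slots);

    int csma_cd_send(const void *frame, int len)
    {
        for (int attempt = 1; attempt <= 16; attempt++) {
            while (channel_busy())
                ;                                    /* carrier sense: defer while busy */
            if (transmit_detecting_collision(frame, len) == 0)
                return 0;                            /* sent without collision */
            /* Collision: wait a random number of slot times in 0 .. 2^k - 1,
               where k grows with each attempt but is capped at 10 */
            int k = attempt < 10 ? attempt : 10;
            wait_slot_times(rand() % (1 << k));
        }
        return -1;                                   /* give up after 16 attempts */
    }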
PTO switched services
Circuit switched - A circuit-switched network is one in which a physical path is dedicated to a single connection between two end-points for the duration of the connection, as in an ordinary telephone service. The telephone service dedicates a specific physical path to the number you have called for the duration of your call, and during this time no one else can use those physical lines. Some examples of PTO switched services are described below:
PSDN - Packet Switched Data Network
PSDN is a type of network used in public telephone networks. Data is transformed into packets of a fixed length, which are labelled with the destination and source addresses. The packets are then transported across the network by routers, which read the addresses.
ISDN - Integrated Services Digital Network
ISDN is another international standard used for sending data including voice and video over telephone or digital lines. The two types of ISDN are:
- Basic Rate Interface (BRI) - Made up of two 64-Kbps B-channels and one D-channel for transmitting control information.
- Primary Rate Interface (PRI) - Made up of 23 B-channels and one D-channel (U.S.) or 30 B-channels and one D-channel (Europe).
A Leased Line is a high-speed, dedicated Internet connection which is rented (leased) from a telecommunications provider, e.g. NTL or BT. The speed of the connection can be chosen and upgraded as required to suit the needs of your business. A leased line will also allow the client to run their own email and web servers without the added cost of an ISP. Some advantages and disadvantages of leased lines are shown here:
Leased line advantages
- Upload and download speeds are the same.
- Choice of bandwidths.
- Readily available SLAs.
- Guaranteed bandwidth for critical business usage.
- Suitable for web hosting.
Leased line disadvantages
- Relatively expensive to install and rent.
- Not suitable for single or home workers.
Packet switching is when data to be transferred exceeding a network's maximum length is broken up into shorter units, known as packets; the packets, each with an associated header, are then transmitted individually through the network. "The fundamental difference in packet communication is that the data is formed into packets with a pre-defined header format (i.e. PCI), and well-known "idle" patterns which are used to occupy the link when there is no data to be communicated"
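As an illustration of a "pre-defined header format", here is a deliberately made-up minimal header in C (field names and sizes invented for the example; real protocols such as IP define their own layouts):

    #include <stdint.h>

    struct packet_header {
        uint32_t source_addr;   /* sender's address */
        uint32_t dest_addr;     /* receiver's address */
        uint16_t sequence;      /* position of this packet in the original message */
        uint16_t payload_len;   /* number of payload bytes that follow */
    };

    enum { MAX_PAYLOAD = 1024 };  /* example maximum unit for the network */

    struct packet {
        struct packet_header hdr;
        uint8_t payload[MAX_PAYLOAD];
    };

A message longer than MAX_PAYLOAD would be split across several such packets, each carrying its own header, and reassembled in sequence order at the destination.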
Some of the more commonly used packet switching methods are detailed below.
X.25 technology is connection-oriented and allows computers on different public networks to communicate through an intermediary computer at the Network Layer of the OSI Model. It is a packet-switching protocol used to exchange data between the DCE (Data Circuit-terminating Equipment) and the DTE (Data Terminal Equipment) in a Public Switched Telephone Network. The protocols used by X.25 operate in close relation to the Data Link and Physical Layer protocols defined in the OSI Model.
Frame relay operates at the Data Link Layer of the OSI Model. One way to look at Frame Relay is that it was originally developed as a "poor man's" version of the X.25 technology. It is often sold by telecommunications companies to businesses looking for a cheaper alternative to leased lines. Frame Relay is being displaced by ATM and native IP-based products, including IP virtual private networks. It uses no error-correcting mechanisms, which means it can transmit Layer 2 information more rapidly than other WAN protocols.
SMDS - Switched Multi-megabit Data Service
SMDS is a high-speed, packet-switched, datagram-based WAN networking technology used for communication over PDNs (public data networks).
SMDS data units are large enough to encapsulate entire 802.3, 802.5, and FDDI (Fibre Distributed Data Interface) frames.
Mobile and broadband services
Asynchronous Transfer Mode is a technology based around the concept of sending data in packets of a fixed size. These units are relatively small compared with those of older technologies, which in turn allows ATM equipment to transfer many types of data over one network (i.e. video, audio, JPEG images) without any one type of data using all the resources. Whereas TCP/IP packets can take different routes from source to destination, ATM creates a fixed route between two points at the outset of data transfer. This makes it easier to track data usage across a network.
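Concretely, ATM's fixed-size unit is the 53-byte cell: a 5-byte header plus 48 bytes of payload. A sketch of the layout in C (simplified; the real header packs multi-bit fields such as the VPI/VCI routing labels across byte boundaries):

    #include <stdint.h>

    /* An ATM cell is always exactly 53 bytes on the wire */
    struct atm_cell {
        uint8_t header[5];    /* VPI/VCI labels, payload type, cell-loss
                                 priority bit and header checksum */
        uint8_t payload[48];  /* fixed payload: never more, never less */
    };

Because every cell is the same size, switching hardware can forward cells at a constant, predictable rate, which is what makes mixing video, audio and data on one link practical.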
When you are purchasing an ATM service, you would normally choose between these four different types of service:
CBR Constant Bit Rate - Requires the user to determine a fixed bandwidth requirement at the time the connection is set up so that the data can be sent in a steady stream. CBR service is often used when transmitting fixed-rate uncompressed video.
VBR Variable Bit Rate - Allows users to specify a throughput capacity and a sustained rate but data is not sent evenly. VBR is often used when transmitting compressed voice and video data, such as videoconferencing.
ABR Available Bit Rate - Adjusts the amount of bandwidth based on the amount of traffic in the network. ABR service provides a guaranteed minimum bandwidth capacity but allows data to be burst at higher capacities when the network is free.
UBR Unspecified Bit Rate - UBR does not guarantee any throughput levels and uses only available bandwidth. UBR is often used when transmitting delay tolerable data.
xDSL (Digital Subscriber Line)
The "x" at the beginning of this description signifies the whole family of technologies encapsulated by DSL. This is a fairly new technology that is still in development and is aimed at home users. Bandwidth will vary depending on the distance of the user from the Telephone company exchange. This is because DSL is distance sensitive and generally only works within a certain distance (16000 feet). The rest of the family of technologies are listed below: -
HDSL high bit-rate DSL
VDSL very high bit-rate DSL
SDSL single-line DSL
ADSL asymmetric DSL
RADSL rate adaptive DSL
General Packet Radio Service is used as a data services upgrade to any GSM (Global System for Mobile Communication) network, allowing compatibility with the Internet. GPRS has two key benefits: optimised use of radio and network resources, and completely transparent IP support.
There are a number of different GPRS protocols, some of which are:
NS - Network Service
BSSGP - Base station system GPRS Protocol
GTP - GPRS Tunnelling Protocol
LLC - Logical Link Control layer protocol for GPRS
SNDCP - Sub-Network Dependant Convergence Protocol
Universal Mobile Telecommunication System (UMTS) is one of the Third Generation (3G) mobile systems being developed. New mobile telephone technologies, such as camera phones, are some of the first to feature UMTS. It is based on WCDMA (Wideband Code Division Multiple Access), which is the radio interface for UMTS.
"Today's cellular telephone systems are mainly circuit-switched, with connections always dependent on circuit availability. Packet-switched connection, using the Internet Protocol (IP), means that a virtual connection is always available to any other end point in the network. It will also make it possible to provide new services, such as alternative billing methods (pay-per-bit, pay-per-session, flat rate, asymmetric bandwidth, and others). The higher bandwidth of UMTS also promises new services, such as video conferencing and promises to realise the Virtual Home Environment (VHE) in which a roaming user can have the same services to which the user is accustomed when at home or in the office, through a combination of transparent terrestrial and satellite connections. "
H.323 is part of a family of ITU-T recommendations called H.32x that provides multimedia communication services over a variety of networks. H.323 is the international standard and the market leader for IP telephony, including IP-based video conferencing systems and IP-based long distance and toll-bypass services. It is a standard that specifies the components, protocols and procedures that provide multimedia communication services -- real-time audio, video and data communications -- over packet networks, including Internet Protocol (IP)-based networks.
It is also highly regarded by companies that include Microsoft, Cisco, Siemens and Lucent.
Voice over Internet Protocol
VoIP technology enables voice transfer over Next Generation networks using protocols such as H.32x, SIP (Session Initiation Protocol) and MGCP (Media Gateway Control Protocol). The market for VoIP is expected to escalate at a considerable rate over the next five years or so. It is, however, a relatively new technology, and there are many obstacles and hurdles in the way for the developers and equipment providers, who include Cisco and Netspeak, among them voice quality, packet loss and latency.
Dick, David (2002) The P.C. Support Handbook. Kirkintilloch:
|
September 24, 2010
New supercomputer simulations tracking the interactions of thousands of dust grains show what the solar system might look like to alien astronomers searching for planets. The models also provide a glimpse of how this view might have changed as our planetary system matured.
"The planets may be too dim to detect directly, but aliens studying the solar system could easily determine the presence of Neptune — its gravity carves a little gap in the dust," said Marc Kuchner from NASA's Goddard Space Flight Center in Greenbelt, Maryland. "We're hoping our models will help us spot Neptune-sized worlds around other stars."
The dust originates in the Kuiper Belt, a cold-storage zone beyond Neptune where millions of icy bodies, including Pluto, orbit the Sun. Scientists believe the region is an older, leaner version of the debris disks they've seen around stars like Vega and Fomalhaut.
"Our new simulations also allow us to see how dust from the Kuiper Belt might have looked when the solar system was much younger," said Christopher Stark from the Carnegie Institution for Science in Washington, D.C. "In effect, we can go back in time and see how the distant view of the solar system may have changed."
Kuiper Belt objects occasionally crash into each other, and this relentless bump-and-grind produces a flurry of icy grains. But tracking how this dust travels through the solar system isn't easy because small particles are subject to a variety of forces in addition to the gravitational pull of the Sun and planets.
The grains are affected by the solar wind, which works to bring dust closer to the Sun, and sunlight, which can either pull dust inward or push it outward. Exactly what happens depends on the size of the grain. The particles also run into each other, and these collisions can destroy the fragile grains.
"People felt that the collision calculation couldn't be done because there are just too many of these tiny grains too keep track of," Kuchner said. "We found a way to do it, and that has opened up a whole new landscape."
With the help of NASA's Discover supercomputer, the researchers kept tabs on 75,000 dust particles as they interacted with the outer planets, sunlight, the solar wind, and each other.
The size of the model dust ranged from about the width of a needle's eye (0.05 inch or 1.2 millimeters) to more than a thousand times smaller, similar in size to the particles in smoke. During the simulation, the grains were placed into one of three types of orbits found in today's Kuiper Belt at a rate based on current ideas of how quickly dust is produced.
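A heavily simplified sketch of the kind of per-grain force law such a simulation integrates (not the team's actual code; the standard parameterization folds radiation pressure into a size-dependent factor beta, so that small grains feel a weaker effective solar gravity):

    #define GM_SUN 1.32712440018e20   /* Sun's gravitational parameter, m^3/s^2 */

    /* Radial acceleration on a grain at distance r (meters): solar gravity
       reduced by radiation pressure. beta grows as grain size shrinks, so
       the smallest grains can even be pushed outward (beta > 1). */
    static double effective_accel(double r, double beta)
    {
        return -(1.0 - beta) * GM_SUN / (r * r);   /* negative = inward */
    }

    /* One leapfrog step for a grain on a radial track (illustrative only:
       the real models are 3-D and add planetary gravity, solar wind drag
       and grain-grain collisions). */
    static void step(double *r, double *v, double beta, double dt)
    {
        *v += 0.5 * dt * effective_accel(*r, beta);
        *r += dt * (*v);
        *v += 0.5 * dt * effective_accel(*r, beta);
    }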
From the resulting data, the researchers created synthetic images representing infrared views of the solar system seen from afar.
Through gravitational effects called resonances, Neptune wrangles nearby particles into preferred orbits. This is what creates the clear zone near the planet as well as dust enhancements that precede and follow it around the Sun.
"One thing we've learned is that, even in the present-day solar system, collisions play an important role in the Kuiper Belt's structure," Stark said. That's because collisions tend to destroy large particles before they can drift too far from where they're made. This results in a relatively dense dust ring that straddles Neptune's orbit.
To get a sense of what younger, heftier versions of the Kuiper Belt might have looked like, the team sped up the dust production rate. In the past, the Kuiper Belt contained many more objects that crashed together more frequently, generating dust at a faster pace. With more dust particles came more frequent grain collisions.
Using separate models that employed progressively higher collision rates, the team produced images roughly corresponding to dust generation that was 10, 100, and 1,000 times more intense than in the original model. The scientists estimate the increased dust reflects conditions when the Kuiper Belt was, respectively, 700 million, 100 million, and 15 million years old.
"We were just astounded by what we saw," Kuchner said.
As collisions become increasingly important, the likelihood that large dust grains will survive to drift out of the Kuiper Belt drops sharply. Stepping back through time, today's broad dusty disk collapses into a dense, bright ring that bears more than a passing resemblance to rings seen around other stars, especially Fomalhaut.
"The amazing thing is that we've already seen these narrow rings around other stars," Stark said. "One of our next steps will be to simulate the debris disks around Fomalhaut and other stars to see what the dust distribution tells us about the presence of planets."
The researchers also plan to develop a more complete picture of the solar system's dusty disk by modeling additional sources closer to the Sun, including the main asteroid belt and the thousands of so-called Trojan asteroids corralled by Jupiter's gravity.
To learn more about Kuchner and Stark's research, check out "How to find planets hidden by dust" in our August 2010 issue of Astronomy.
Locke’s Political Philosophy
John Locke (1632–1704) is among the most influential political philosophers of the modern period. In the Two Treatises of Government, he defended the claim that men are by nature free and equal against claims that God had made all people naturally subject to a monarch. He argued that people have rights, such as the right to life, liberty, and property, that have a foundation independent of the laws of any particular society. Locke used the claim that men are naturally free and equal as part of the justification for understanding legitimate political government as the result of a social contract where people in the state of nature conditionally transfer some of their rights to the government in order to better ensure the stable, comfortable enjoyment of their lives, liberty, and property. Since governments exist by the consent of the people in order to protect the rights of the people and promote the public good, governments that fail to do so can be resisted and replaced with new governments. Locke is thus also important for his defense of the right of revolution. Locke also defends the principle of majority rule and the separation of legislative and executive powers. In the Letter Concerning Toleration, Locke denied that coercion should be used to bring people to (what the ruler believes is) the true religion and also denied that churches should have any coercive power over their members. Locke elaborated on these themes in his later political writings, such as the Second Letter on Toleration and Third Letter on Toleration.
For a more general introduction to Locke’s history and background, the argument of the Two Treatises, and the Letter Concerning Toleration, see Section 1, Section 4, and Section 5, respectively, of the main entry on John Locke in this encyclopedia. The present entry focuses on seven central concepts in Locke’s political philosophy.
- 1. Natural Law and Natural Right
- 2. State of Nature
- 3. Property
- 4. Consent, Political Obligation, and the Ends of Government
- 5. Locke and Punishment
- 6. Separation of Powers and the Dissolution of Government
- 7. Toleration
1. Natural Law and Natural Right

Perhaps the most central concept in Locke’s political philosophy is his theory of natural law and natural rights. The natural law concept existed long before Locke as a way of expressing the idea that there were certain moral truths that applied to all people, regardless of the particular place where they lived or the agreements they had made. The most important early contrast was between laws that were by nature, and thus generally applicable, and those that were conventional and operated only in those places where the particular convention had been established. This distinction is sometimes formulated as the difference between natural law and positive law.
Natural law is also distinct from divine law in that the latter, in the Christian tradition, normally referred to those laws that God had directly revealed through prophets and other inspired writers. Natural law can be discovered by reason alone and applies to all people, while divine law can be discovered only through God’s special revelation and applies only to those to whom it is revealed and who God specifically indicates are to be bound. Thus some seventeenth-century commentators, Locke included, held that not all of the 10 commandments, much less the rest of the Old Testament law, were binding on all people. The 10 commandments begin “Hear O Israel” and thus are only binding on the people to whom they were addressed (Works 6:37). As we will see below, even though Locke thought natural law could be known apart from special revelation, he saw no contradiction in God playing a part in the argument, so long as the relevant aspects of God’s character could be discovered by reason alone. In Locke’s theory, divine law and natural law are consistent and can overlap in content, but they are not coextensive. Thus there is no problem for Locke if the Bible commands a moral code that is stricter than the one that can be derived from natural law, but there is a real problem if the Bible teaches what is contrary to natural law. In practice, Locke avoided this problem because consistency with natural law was one of the criteria he used when deciding the proper interpretation of Biblical passages.
In the century before Locke, the language of natural rights also gained prominence through the writings of such thinkers as Grotius, Hobbes, and Pufendorf. Whereas natural law emphasized duties, natural rights normally emphasized privileges or claims to which an individual was entitled. There is considerable disagreement as to how these factors are to be understood in relation to each other in Locke’s theory. Leo Strauss, and many of his followers, take rights to be paramount, going so far as to portray Locke’s position as essentially similar to that of Hobbes. They point out that Locke defended a hedonist theory of human motivation (Essay 2.20) and claim that he must agree with Hobbes about the essentially self-interested nature of human beings. Locke, they claim, recognizes natural law obligations only in those situations where our own preservation is not in conflict, further emphasizing that our right to preserve ourselves trumps any duties we may have.
On the other end of the spectrum, more scholars have adopted the view of Dunn, Tully, and Ashcraft that it is natural law, not natural rights, that is primary. They hold that when Locke emphasized the right to life, liberty, and property he was primarily making a point about the duties we have toward other people: duties not to kill, enslave, or steal. Most scholars also argue that Locke recognized a general duty to assist with the preservation of mankind, including a duty of charity to those who have no other way to procure their subsistence (Two Treatises 1.42). These scholars regard duties as primary in Locke because rights exist to ensure that we are able to fulfill our duties. Simmons takes a position similar to the latter group, but claims that rights are not just the flip side of duties in Locke, nor merely a means to performing our duties. Instead, rights and duties are equally fundamental because Locke believes in a “robust zone of indifference” in which rights protect our ability to make choices. While these choices cannot violate natural law, they are not a mere means to fulfilling natural law either. Brian Tierney questions whether one needs to prioritize natural law or natural right since both typically function as corollaries. He argues that modern natural rights theories are a development from medieval conceptions of natural law that included permissions to act or not act in certain ways.
There have been some attempts to find a compromise between these positions. Michael Zuckert’s version of the Straussian position acknowledges more differences between Hobbes and Locke. Zuckert still questions the sincerity of Locke’s theism, but thinks that Locke does develop a position that grounds property rights in the fact that human beings own themselves, something Hobbes denied. Adam Seagrave has gone a step further. He argues that the contradiction between Locke’s claim that human beings are owned by God and the claim that human beings own themselves is only apparent. Based on passages from Locke’s other writings (especially the Essay Concerning Human Understanding), Seagrave argues that in the passages about divine ownership Locke is speaking about humanity as a whole, while in the passages about self-ownership he is talking about individual human beings with the capacity for property ownership. God created human beings who are capable of having property rights with respect to one another on the basis of owning their labor. Both Zuckert and Seagrave emphasize differences between Locke’s use of natural rights and the earlier tradition of natural law.
Another point of contestation has to do with the extent to which Locke thought natural law could, in fact, be known by reason. Both Strauss and Peter Laslett, though very different in their interpretations of Locke generally, see Locke’s theory of natural law as filled with contradictions. In the Essay Concerning Human Understanding, Locke defends a theory of moral knowledge that negates the possibility of innate ideas (Essay Book 1) and claims that morality is capable of demonstration in the same way that Mathematics is (Essay 3.11.16, 4.3.18–20). Yet nowhere in any of his works does Locke make a full deduction of natural law from first premises. More than that, Locke at times seems to appeal to innate ideas in the Second Treatise (2.11), and in The Reasonableness of Christianity (Works 7:139) he admits that no one has ever worked out all of natural law from reason alone. Strauss infers from this that the contradictions exist to show the attentive reader that Locke does not really believe in natural law at all. Laslett, more conservatively, simply says that Locke the philosopher and Locke the political writer should be kept very separate.
Many scholars reject this position. Yolton, Colman, Ashcraft, Grant, Simmons, Tuckness and others all argue that there is nothing strictly inconsistent in Locke’s admission in The Reasonableness of Christianity. That no one has deduced all of natural law from first principles does not mean that none of it has been deduced. The supposedly contradictory passages in the Two Treatises are far from decisive. While it is true that Locke does not provide a deduction in the Essay, it is not clear that he was trying to. Section 4.10.1–19 of that work seems more concerned to show how reasoning with moral terms is possible, not to actually provide a full account of natural law. Nonetheless, it must be admitted that Locke did not treat the topic of natural law as systematically as one might like. Attempts to work out his theory in more detail with respect to its ground and its content must try to reconstruct it from scattered passages in many different texts.
To understand Locke’s position on the ground of natural law it must be situated within a larger debate in natural law theory that predates Locke, the so-called “voluntarism-intellectualism,” or “voluntarist-rationalist” debate. At its simplest, the voluntarist declares that right and wrong are determined by God’s will and that we are obliged to obey the will of God simply because it is the will of God. Unless these positions are maintained, the voluntarist argues, God becomes superfluous to morality since both the content and the binding force of morality can be explained without reference to God. The intellectualist replies that this understanding makes morality arbitrary and fails to explain why we have an obligation to obey God.
With respect to the grounds and content of natural law, Locke is not completely clear. On the one hand, there are many instances where he makes statements that sound voluntarist to the effect that law requires a law giver with authority (Essay 1.3.6, 4.10.7). Locke also repeatedly insists in the Essays on the Law of Nature that created beings have an obligation to obey their creator (ELN 6). On the other hand there are statements that seem to imply an external moral standard to which God must conform (Two Treatises 2.195; Works 7:6). Locke clearly wants to avoid the implication that the content of natural law is arbitrary. Several solutions have been proposed. One solution suggested by Herzog makes Locke an intellectualist by grounding our obligation to obey God on a prior duty of gratitude that exists independent of God. A second option, suggested by Simmons, is simply to take Locke as a voluntarist since that is where the preponderance of his statements point. A third option, suggested by Tuckness (and implied by Grant), is to treat the question of voluntarism as having two different parts, grounds and content. On this view, Locke was indeed a voluntarist with respect to the question “why should we obey the law of nature?” Locke thought that reason, apart from the will of a superior, could only be advisory. With respect to content, divine reason and human reason must be sufficiently analogous that human beings can reason about what God likely wills. Locke takes it for granted that since God created us with reason in order to follow God’s will, human reason and divine reason are sufficiently similar that natural law will not seem arbitrary to us.
Those interested in the contemporary relevance of Locke’s political theory must confront its theological aspects. Straussians make Locke’s theory relevant by claiming that the theological dimensions of his thought are primarily rhetorical; they are “cover” to keep him from being persecuted by the religious authorities of his day. Others, such as Dunn, take Locke to be of only limited relevance to contemporary politics precisely because so many of his arguments depend on religious assumptions that are no longer widely shared. More recently a number of authors, such as Simmons and Vernon, have tried to separate the foundations of Locke’s argument from other aspects of it. Simmons, for example, argues that Locke’s thought is over-determined, containing both religious and secular arguments. He claims that for Locke the fundamental law of nature is that “as much as possible mankind is to be preserved” (Two Treatises 2.135). At times, he claims, Locke presents this principle in rule-consequentialist terms: it is the principle we use to determine the more specific rights and duties that all have. At other times, Locke hints at a more Kantian justification that emphasizes the impropriety of treating our equals as if they were mere means to our ends. Waldron, in his most recent work on Locke, explores the opposite claim: that Locke’s theology actually provides a more solid basis for his premise of political equality than do contemporary secular approaches that tend to simply assert equality.
With respect to the specific content of natural law, Locke never provides a comprehensive statement of what it requires. In the Two Treatises, Locke frequently states that the fundamental law of nature is that as much as possible mankind is to be preserved. Simmons argues that in Two Treatises 2.6 Locke presents 1) a duty to preserve one’s self, 2) a duty to preserve others when self-preservation does not conflict, 3) a duty not to take away the life of another, and 4) a duty not to act in a way that “tends to destroy” others. Libertarian interpreters of Locke tend to downplay duties of type 1 and 2. Locke presents a more extensive list in his earlier, and unpublished in his lifetime, Essays on the Law of Nature. Interestingly, Locke here includes praise and honor of the deity as required by natural law as well as what we might call good character qualities.
2. State of Nature

Locke’s concept of the state of nature has been interpreted by commentators in a variety of ways. At first glance it seems quite simple. Locke writes “want [lack] of a common judge, with authority, puts all persons in a state of nature” and again, “Men living according to reason, without a common superior on earth, to judge between them, is properly the state of nature.” (Two Treatises 2.19) Many commentators have taken this as Locke’s definition, concluding that the state of nature exists wherever there is no legitimate political authority able to judge disputes and where people live according to the law of reason. On this account the state of nature is distinct from political society, where a legitimate government exists, and from a state of war where men fail to abide by the law of reason.
Simmons presents an important challenge to this view. Simmons points out that the above statement is worded as a sufficient rather than necessary condition. Two individuals might be able, in the state of nature, to authorize a third to settle disputes between them without leaving the state of nature, since the third party would not have, for example, the power to legislate for the public good. Simmons also claims that other interpretations often fail to account for the fact that there are some people who live in states with legitimate governments who are nonetheless in the state of nature: visiting aliens (2.9), children below the age of majority (2.15, 118), and those with a “defect” of reason (2.60). He claims that the state of nature is a relational concept describing a particular set of moral relations that exist between particular people, rather than a description of a particular geographical territory. The state of nature is just the way of describing the moral rights and responsibilities that exist between people who have not consented to the adjudication of their disputes by the same legitimate government. The groups just mentioned either have not or cannot give consent, so they remain in the state of nature. Thus A may be in the state of nature with respect to B, but not with C.
Simmons’ account stands in sharp contrast to that of Strauss. According to Strauss, Locke presents the state of nature as a factual description of what the earliest society is like, an account that when read closely reveals Locke’s departure from Christian teachings. State of nature theories, he and his followers argue, are contrary to the Biblical account in Genesis and evidence that Locke’s teaching is similar to that of Hobbes. As noted above, on the Straussian account Locke’s apparently Christian statements are only a façade designed to conceal his essentially anti-Christian views. According to Simmons, since the state of nature is a moral account, it is compatible with a wide variety of social accounts without contradiction. If we know only that a group of people are in a state of nature, we know only the rights and responsibilities they have toward one another; we know nothing about whether they are rich or poor, peaceful or warlike.
A complementary interpretation is made by John Dunn with respect to the relationship between Locke’s state of nature and his Christian beliefs. Dunn claimed that Locke’s state of nature is less an exercise in historical anthropology than a theological reflection on the condition of man. On Dunn’s interpretation, Locke’s state of nature thinking is an expression of his theological position, that man exists in a world created by God for God’s purposes but that governments are created by men in order to further those purposes.
Locke’s theory of the state of nature will thus be tied closely to his theory of natural law, since the latter defines the rights of persons and their status as free and equal persons. The stronger the grounds for accepting Locke’s characterization of people as free, equal, and independent, the more helpful the state of nature becomes as a device for representing people. Still, it is important to remember that none of these interpretations claims that Locke’s state of nature is only a thought experiment, in the way Kant and Rawls are normally thought to use the concept. Locke did not respond to the argument “where have there ever been people in such a state” by saying it did not matter since it was only a thought experiment. Instead, he argued that there are and have been people in the state of nature. (Two Treatises 2.14) It seems important to him that at least some governments have actually been formed in the way he suggests. How much it matters whether they have been or not will be discussed below under the topic of consent, since the central question is whether a good government can be legitimate even if it does not have the actual consent of the people who live under it; hypothetical contract and actual contract theories will tend to answer this question differently.
3. Property

Locke’s treatment of property is generally thought to be among his most important contributions in political thought, but it is also one of the aspects of his thought that has been most heavily criticized. There are important debates over what exactly Locke was trying to accomplish with his theory. One interpretation, advanced by C.B. Macpherson, sees Locke as a defender of unrestricted capitalist accumulation. On Macpherson’s interpretation, Locke is thought to have set three restrictions on the accumulation of property in the state of nature: 1) one may only appropriate as much as one can use before it spoils (Two Treatises 2.31), 2) one must leave “enough and as good” for others (the sufficiency restriction) (2.27), and 3) one may (supposedly) only appropriate property through one’s own labor (2.27). Macpherson claims that as the argument progresses, each of these restrictions is transcended. The spoilage restriction ceases to be a meaningful restriction with the invention of money because value can be stored in a medium that does not decay (2.46–47). The sufficiency restriction is transcended because the creation of private property so increases productivity that even those who no longer have the opportunity to acquire land will have more opportunity to acquire what is necessary for life (2.37). According to Macpherson’s view, the “enough and as good” requirement is itself merely a derivative of a prior principle guaranteeing the opportunity to acquire, through labor, the necessities of life. The third restriction, Macpherson argues, was not one Locke actually held at all. Though Locke appears to suggest that one can only have property in what one has personally labored on when he makes labor the source of property rights, Locke clearly recognized that even in the state of nature, “the Turfs my Servant has cut” (2.28) can become my property. Locke, according to Macpherson, thus clearly recognized that labor can be alienated. As one would guess, Macpherson is critical of the “possessive individualism” that Locke’s theory of property represents. He argues that its coherence depends upon the assumption of differential rationality between capitalists and wage-laborers and on the division of society into distinct classes. Because Locke was bound by these constraints, we are to understand him as including only property owners as voting members of society.
Macpherson’s understanding of Locke has been criticized from several different directions. Alan Ryan argued that since property for Locke includes life and liberty as well as estate (Two Treatises 2.87), even those without land could still be members of political society. The dispute between the two would then turn on whether Locke was using property in the more expansive sense in some of the crucial passages. James Tully attacked Macpherson’s interpretation by pointing out that the First Treatise specifically includes a duty of charity toward those who have no other means of subsistence (1.42). While this duty is consistent with requiring the poor to work for low wages, it does undermine the claim that those who have wealth have no social duties to others.
Tully also argued for a fundamental reinterpretation of Locke’s theory. Previous accounts had focused on the claim that since persons own their own labor, when they mix their labor with that which is unowned it becomes their property. Robert Nozick criticized this argument with his famous example of mixing tomato juice one rightfully owns with the sea. When we mix what we own with what we do not, why should we think we gain property instead of losing it? On Tully’s account, focus on the mixing metaphor misses Locke’s emphasis on what he calls the “workmanship model.” Locke believed that makers have property rights with respect to what they make just as God has property rights with respect to human beings because he is their maker. Human beings are created in the image of God and share with God, though to a much lesser extent, the ability to shape and mold the physical environment in accordance with a rational pattern or plan. Waldron has criticized this interpretation on the grounds that it would make the rights of human makers absolute in the same way that God’s right over his creation is absolute. Sreenivasan has defended Tully’s argument against Waldron’s response by claiming a distinction between creating and making. Only creating generates an absolute property right, and only God can create, but making is analogous to creating and creates an analogous, though weaker, right.
Another controversial aspect of Tully’s interpretation of Locke is his interpretation of the sufficiency condition and its implications. On his analysis, the sufficiency argument is crucial for Locke’s argument to be plausible. Since Locke begins with the assumption that the world is owned by all, individual property is only justified if it can be shown that no one is made worse off by the appropriation. In conditions where the good taken is not scarce, where there is much water or land available, an individual’s taking some portion of it does no harm to others. Where this condition is not met, those who are denied access to the good do have a legitimate objection to appropriation. According to Tully, Locke realized that as soon as land became scarce, previous rights acquired by labor no longer held since “enough and as good” was no longer available for others. Once land became scarce, property could only be legitimated by the creation of political society.
Waldron claims that, contrary to Macpherson, Tully, and others, Locke did not recognize a sufficiency condition at all. He notes that, strictly speaking, Locke makes sufficiency a sufficient rather than necessary condition when he says that labor generates a title to property “at least where there is enough, and as good left in common for others” (Two Treatises 2.27). Waldron takes Locke to be making a descriptive statement, not a normative one, about the condition that happens to have initially existed. Waldron also argues that in the text “enough and as good” is not presented as a restriction and is not grouped with other restrictions. Waldron thinks that the condition would lead Locke to the absurd conclusion that in circumstances of scarcity everyone must starve to death since no one would be able to obtain universal consent and any appropriation would make others worse off.
One of the strongest defenses of Tully’s position is presented by Sreenivasan. He argues that Locke’s repetitious use of “enough and as good” indicates that the phrase is doing some real work in the argument. In particular, it is the only way Locke can be thought to have provided some solution to the fact that the consent of all is needed to justify appropriation in the state of nature. If others are not harmed, they have no grounds to object and can be thought to consent, whereas if they are harmed, it is implausible to think of them as consenting. Sreenivasan does depart from Tully in some important respects. He takes “enough and as good” to mean “enough and as good opportunity for securing one’s preservation,” not “enough and as good of the same commodity (such as land).” This has the advantage of making Locke’s account of property less radical since it does not claim that Locke thought the point of his theory was to show that all original property rights were invalid at the point where political communities were created. The disadvantage of this interpretation, as Sreenivasan admits, is that it saddles Locke with a flawed argument. Those who merely have the opportunity to labor for others at subsistence wages no longer have the liberty that individuals had before scarcity to benefit from the full surplus of value they create. Moreover poor laborers no longer enjoy equality of access to the materials from which products can be made. Sreenivasan thinks that Locke’s theory is thus unable to solve the problem of how individuals can obtain individual property rights in what is initially owned by all people without consent.
Simmons presents a still different synthesis. He sides with Waldron and against Tully and Sreenivasan in rejecting the workmanship model. He claims that the references to “making” in chapter five of the Two Treatises are not making in the right sense of the word for the workmanship model to be correct. Locke thinks we have property in our own persons even though we do not make or create ourselves. Simmons claims that while Locke did believe that God had rights as creator, human beings have a different limited right as trustees, not as makers. Simmons bases this in part on his reading of two distinct arguments he takes Locke to make: the first justifies property based on God’s will and basic human needs, the second based on “mixing” labor. According to the former argument, at least some property rights can be justified by showing that a scheme allowing appropriation of property without consent has beneficial consequences for the preservation of mankind. This argument is overdetermined, according to Simmons, in that it can be interpreted either theologically or as a simple rule-consequentialist argument. With respect to the latter argument, Simmons takes labor not to be a substance that is literally “mixed” but rather as a purposive activity aimed at satisfying needs and conveniences of life. Like Sreenivasan, Simmons sees this as flowing from a prior right of people to secure their subsistence, but Simmons also adds a prior right to self-government. Labor can generate claims to private property because private property makes individuals more independent and able to direct their own actions. Simmons thinks Locke’s argument is ultimately flawed because he underestimated the extent to which wage labor would make the poor dependent on the rich, undermining self-government. He also joins the chorus of those who find Locke’s appeal to consent to the introduction of money inadequate to justify the very unequal property holdings that now exist.
Some authors have suggested that Locke may have had an additional concern in mind in writing the chapter on property. Tully (1993) and Barbara Arneil point out that Locke was interested in and involved in the affairs of the American colonies and that Locke’s theory of labor led to the convenient conclusion that the labor of Native Americans generated property rights only over the animals they caught, not the land on which they hunted, which Locke regarded as vacant and available for the taking. Armitage even argues that there is evidence that Locke was actively involved in revising the Fundamental Constitutions of Carolina at the same time he was drafting the chapter on property for the Second Treatise. Mark Goldie, however, cautions that we should not miss the fact that political events in England were still Locke’s primary focus in writing the Second Treatise.
A final question concerns the status of those property rights acquired in the state of nature after civil society has come into being. It seems clear that at the very least Locke allows taxation to take place by the consent of the majority rather than requiring unanimous consent (2.140). Nozick takes Locke to be a libertarian, with the government having no right to take property to use for the common good without the consent of the property owner. On his interpretation, the majority may only tax at the rate needed to allow the government to successfully protect property rights. At the other extreme, Tully thinks that, by the time government is formed, land is already scarce and so the initial holdings of the state of nature are no longer valid and thus are no constraint on governmental action. Waldron’s view is in between these, acknowledging that property rights are among the rights from the state of nature that continue to constrain the government, but seeing the legislature as having the power to interpret what natural law requires in this matter in a fairly substantial way.
4. Consent, Political Obligation, and the Ends of Government

The most direct reading of Locke’s political philosophy finds the concept of consent playing a central role. His analysis begins with individuals in a state of nature where they are not subject to a common legitimate authority with the power to legislate or adjudicate disputes. From this natural state of freedom and independence, Locke stresses individual consent as the mechanism by which political societies are created and individuals join those societies. While there are of course some general obligations and rights that all people have from the law of nature, special obligations come about only when we voluntarily undertake them. Locke clearly states that one can only become a full member of society by an act of express consent (Two Treatises 2.122). The literature on Locke’s theory of consent tends to focus on how Locke does or does not successfully answer the following objection: few people have actually consented to their governments so no, or almost no, governments are actually legitimate. This conclusion is problematic since it is clearly contrary to Locke’s intention.
Locke’s most obvious solution to this problem is his doctrine of tacit consent. Simply by walking along the highways of a country a person gives tacit consent to the government and agrees to obey it while living in its territory. This, Locke thinks, explains why resident aliens have an obligation to obey the laws of the state where they reside, though only while they live there. Inheriting property creates an even stronger bond, since the original owner of the property permanently put the property under the jurisdiction of the commonwealth. Children, when they accept the property of their parents, consent to the jurisdiction of the commonwealth over that property (Two Treatises 2.120). There is debate over whether the inheritance of property should be regarded as tacit or express consent. On one interpretation, by accepting the property, Locke thinks a person becomes a full member of society, which implies that he must regard this as an act of express consent. Grant suggests that Locke’s ideal would have been an explicit mechanism of society whereupon adults would give express consent and this would be a precondition of inheriting property. On the other interpretation, Locke recognized that people inheriting property did not in the process of doing so make any explicit declaration about their political obligation.
However this debate is resolved, there will be in any current or previously existing society many people who have never given express consent, and thus some version of tacit consent seems needed to explain how governments could still be legitimate. Simmons finds it difficult to see how merely walking on a street or inheriting land can be thought of as an example of a “deliberate, voluntary alienating of rights” (69). It is one thing, he argues, for a person to consent by actions rather than words; it is quite another to claim a person has consented without being aware that they have done so. To require a person to leave behind all of their property and emigrate in order to avoid giving tacit consent is to create a situation where continued residence is not a free and voluntary choice. Simmons’ approach is to agree with Locke that real consent is necessary for political obligation but disagree about whether most people in fact have given that kind of consent. Simmons claims that Locke’s arguments push toward “philosophical anarchism,” the position that most people do not have a moral obligation to obey the government, even though Locke himself would not have made this claim.
Hannah Pitkin takes a very different approach. She claims that the logic of Locke’s argument makes consent far less important in practice than it might appear. Tacit consent is indeed a watering down of the concept of consent, but Locke can do this because the basic content of what governments are to be like is set by natural law and not by consent. If consent were truly foundational in Locke’s scheme, we would discover the legitimate powers of any given government by finding out what contract the original founders signed. Pitkin, however, thinks that for Locke the form and powers of government are determined by natural law. What really matters, therefore, is not previous acts of consent but the quality of the present government, whether it corresponds to what natural law requires. Locke does not think, for example, that walking the streets or inheriting property in a tyrannical regime means we have consented to that regime. It is thus the quality of the government, not acts of actual consent, that determine whether a government is legitimate. Simmons objects to this interpretation, saying that it fails to account for the many places where Locke does indeed say a person acquires political obligations only by his own consent.
John Dunn takes a still different approach. He claims that it is anachronistic to read into Locke a modern conception of what counts as “consent.” While modern theories do insist that consent is truly consent only if it is deliberate and voluntary, Locke’s concept of consent was far more broad. For Locke, it was enough that people be “not unwilling.” Voluntary acquiescence, on Dunn’s interpretation, is all that is needed. As evidence Dunn can point to the fact that many of the instances of consent Locke uses, such as “consenting” to the use of money, make more sense on this broad interpretation. Simmons objects that this ignores the instances where Locke does talk about consent as a deliberate choice and that, in any case, it would only make Locke consistent at the price of making him unconvincing.
Recent scholarship has continued to probe these issues. Davis closely examines Locke’s terminology and argues that we must distinguish between political society and legitimate government. Only those who have expressly consented are members of political society, while the government exercises legitimate authority over various types of people who have not so consented. The government is supreme in some respects, but there is no sovereign. Van der Vossen makes a related argument, claiming that the initial consent of property owners is not the mechanism by which governments come to rule over a particular territory. Rather, Locke thinks that people (probably fathers initially) simply begin exercising political authority and people tacitly consent. This is sufficient to justify a state in ruling over those people and treaties between governments fix the territorial borders. Hoff goes still further, arguing that we need not even think of specific acts of tacit consent (such as deciding not to emigrate). Instead, consent is implied if the government itself functions in ways that show it is answerable to the people.
A related question has to do with the extent of our obligation once consent has been given. The interpretive school influenced by Strauss emphasizes the primacy of preservation. Since the duties of natural law apply only when our preservation is not threatened (2.6), then our obligations cease in cases where our preservation is directly threatened. This has important implications if we consider a soldier who is being sent on a mission where death is extremely likely. Grant points out that Locke believes a soldier who deserts from such a mission (Two Treatises 2.139) is justly sentenced to death. Grant takes Locke to be claiming not only that desertion laws are legitimate in the sense that they can be blamelessly enforced (something Hobbes would grant) but that they also imply a moral obligation on the part of the soldier to give up his life for the common good (something Hobbes would deny). According to Grant, Locke thinks that our acts of consent can in fact extend to cases where living up to our commitments will risk our lives. The decision to enter political society is a permanent one for precisely this reason: the society will have to be defended and if people can revoke their consent to help protect it when attacked, the act of consent made when entering political society would be pointless since the political community would fail at the very point where it is most needed. People make a calculated decision when they enter society, and the risk of dying in combat is part of that calculation. Grant also thinks Locke recognizes a duty based on reciprocity since others risk their lives as well.
Most of these approaches focus on Locke’s doctrine of consent as a solution to the problem of political obligation. A different approach asks what role consent plays in determining, here and now, the legitimate ends that governments can pursue. One part of this debate is captured by the debate between Seliger and Kendall, the former viewing Locke as a constitutionalist and the latter viewing him as giving almost untrammeled power to majorities. On the former interpretation, a constitution is created by the consent of the people as part of the creation of the commonwealth. On the latter interpretation, the people create a legislature which rules by majority vote. A third view, advanced by Tuckness, holds that Locke was flexible at this point and gave people considerable flexibility in constitutional drafting.
A second part of the debate focuses on ends rather than institutions. Locke states in the Two Treatises that the power of the Government is limited to the public good. It is a power that hath “no other end but preservation” and therefore cannot justify killing, enslaving, or plundering the citizens (2.135). Libertarians like Nozick read this as stating that governments exist only to protect people from infringements on their rights. An alternate interpretation, advanced in different ways by Tuckness, draws attention to the fact that in the following sentences the formulation of natural law that Locke focuses on is a positive one, that “as much as possible” mankind is to be preserved. On this second reading, government is limited to fulfilling the purposes of natural law, but these include positive goals as well as negative rights. On this view, the power to promote the common good extends to actions designed to increase population, improve the military, strengthen the economy and infrastructure, and so on, provided these steps are indirectly useful to the goal of preserving the society. This would explain why Locke, in the Letter, describes government promotion of “arms, riches, and multitude of citizens” as the proper remedy for the danger of foreign attack (Works 6:42).
5. Locke and Punishment

John Locke defined political power as “a Right of making Laws with Penalties of Death, and consequently all less Penalties” (Two Treatises 2.3). Locke’s theory of punishment is thus central to his view of politics and part of what he considered innovative about his political philosophy. But he also referred to his account of punishment as a “very strange doctrine” (2.9), presumably because it ran against the assumption that only political sovereigns could punish. Locke believed that punishment requires that there be a law, and since the state of nature has the law of nature to govern it, it is permissible to describe one individual as “punishing” another in that state. Locke’s rationale is that since the fundamental law of nature is that mankind be preserved and since that law would “be in vain” with no human power to enforce it, it must therefore be legitimate for individuals to punish each other even before government exists. In arguing this, Locke was disagreeing with Samuel Pufendorf, who had argued strongly that the concept of punishment made no sense apart from an established positive legal structure.
Locke realized that the crucial objection to allowing people to act as judges with power to punish in the state of nature was that such people would end up being judges in their own cases. Locke readily admitted that this was a serious inconvenience and a primary reason for leaving the state of nature (Two Treatises 2.13). Locke insisted on this point because it helped explain the transition into civil society. Locke thought that in the state of nature men had a liberty to engage in “innocent delights” (actions that are not a violation of any applicable laws), to seek their own preservation within the limits of natural law, and to punish violations of natural law. The power to seek one’s preservation is limited in civil society by the law and the power to punish is transferred to the government (2.128–130). The power to punish in the state of nature is thus the foundation for the right of governments to use coercive force.
The situation becomes more complex, however, if we look at the principles which are to guide punishment. Rationales for punishment are often divided into those that are forward-looking and backward-looking. Forward-looking rationales include deterring crime, protecting society from dangerous persons, and rehabilitation of criminals. Backward-looking rationales normally focus on retribution, inflicting on the criminal harm comparable to the crime. Locke may seem to conflate these two rationales in passages like the following:
And thus in the State of Nature, one Man comes by a Power over another; but yet no Absolute or Arbitrary Power, to use a Criminal when he has got him in his hands, according to the passionate heats, or boundless extravagancy of his own Will, but only to retribute to him, so far as calm reason and conscience dictates, what is proportionate to his Transgression, which is so much as may serve for Reparation and Restraint. For these two are the only reasons, why one Man may lawfully do harm to another, which is that [which] we call punishment. (Two Treatises 2.8)
Locke talks both of retribution and of punishing only for reparation and restraint. Simmons argues that this is evidence that Locke is combining both rationales for punishment in his theory. A survey of other seventeenth-century natural rights justifications for punishment, however, indicates that it was common to use words like “retribute” in theories that reject what we would today call retributive punishment. In the passage quoted above, Locke is saying that the proper amount of punishment is the amount that will provide restitution to injured parties, protect the public, and deter future crime. Locke’s attitude toward punishment in his other writings on toleration, education, and religion consistently follows this path toward justifying punishment on grounds other than retribution. Tuckness claims that Locke’s emphasis on restitution is interesting because restitution is backward looking in a sense (it seeks to restore an earlier state of affairs) but also forward looking in that it provides tangible benefits to those who receive the restitution. There is a link here between Locke’s understanding of natural punishment and his understanding of legitimate state punishment. Even in the state of nature, a primary justification for punishment is that it helps further the positive goal of preserving human life and human property. The emphasis on deterrence, public safety, and restitution in punishments administered by the government mirrors this emphasis.
A second puzzle regarding punishment is the permissibility of punishing internationally. Locke describes international relations as a state of nature, and so in principle, states should have the same power to punish breaches of the natural law in the international community that individuals have in the state of nature. This would legitimize, for example, punishment of individuals for war crimes or crimes against humanity even in cases where neither the laws of the particular state nor international law authorize punishment. Thus in World War II, even if “crimes of aggression” was not at the time recognized as a crime for which individual punishment was justified, if the actions violated the natural law principle that one should not deprive another of life, liberty, or property, the guilty parties could still be liable to criminal punishment. The most common interpretation has thus been that the power to punish internationally is symmetrical with the power to punish in the state of nature.
Tuckness, however, has argued that there is an asymmetry between the two cases because Locke also talks about states being limited in the goals that they can pursue. Locke often says that the power of the government is to be used for the protection of the rights of its own citizens, not for the rights of all people everywhere (Two Treatises 1.92, 2.88, 2.95, 2.131, 2.147). Locke argues that in the state of nature a person is to use the power to punish to preserve his society, mankind as a whole. After states are formed, however, the power to punish is to be used for the benefit of his own particular society. In the state of nature, a person is not required to risk his life for another (Two Treatises 2.6) and this presumably would also mean a person is not required to punish in the state of nature when attempting to punish would risk the life of the punisher. Locke may therefore be objecting to the idea that soldiers can be compelled to risk their lives for altruistic reasons. In the state of nature, a person could refuse to attempt to punish others if doing so would risk his life and so Locke reasons that individuals may not have consented to allow the state to risk their lives for altruistic punishment of international crimes.
6. Separation of Powers and the Dissolution of Government

Locke claims that legitimate government is based on the idea of separation of powers. First and foremost of these is the legislative power. Locke describes the legislative power as supreme (Two Treatises 2.149) in having ultimate authority over “how the force for the commonwealth shall be employed” (2.143). The legislature is still bound by the law of nature and much of what it does is set down laws that further the goals of natural law and specify appropriate punishments for them (2.135). The executive power is then charged with enforcing the law as it is applied in specific cases. Interestingly, Locke’s third power is called the “federative power” and it consists of the right to act internationally according to the law of nature. Since countries are still in the state of nature with respect to each other, they must follow the dictates of natural law and can punish one another for violations of that law in order to protect the rights of their citizens.
The fact that Locke does not mention the judicial power as a separate power becomes clearer if we distinguish powers from institutions. Powers relate to functions. To have a power means that there is a function (such as making the laws or enforcing the laws) that one may legitimately perform. When Locke says that the legislative is supreme over the executive, he is not saying that parliament is supreme over the king. Locke is simply affirming that “what can give laws to another, must needs be superior to him” (Two Treatises 2.150). Moreover, Locke thinks that it is possible for multiple institutions to share the same power; for example, the legislative power in his day was shared by the House of Commons, the House of Lords, and the King. Since all three needed to agree for something to become law, all three are part of the legislative power (2.151). He also thinks that the federative power and the executive power are normally placed in the hands of the executive, so it is possible for the same person to exercise more than one power (or function). There is, therefore, no one-to-one correspondence between powers and institutions.
Locke is not opposed to having distinct institutions called courts, but he does not see interpretation as a distinct function or power. For Locke, legislation is primarily about announcing a general rule stipulating what types of actions should receive what types of punishments. The executive power is the power to make the judgments necessary to apply those rules to specific cases and administer force as directed by the rule (Two Treatises 2.88–89). Both of these actions involve interpretation. Locke states that positive laws “are only so far right, as they are founded on the Law of Nature, by which they are to be regulated and interpreted” (2.12). In other words, the executive must interpret the laws in light of its understanding of natural law. Similarly, legislation involves making the laws of nature more specific and determining how to apply them to particular circumstances (2.135), which also calls for interpreting natural law. Locke did not think of interpreting law as a distinct function because he thought it was a part of both the legislative and executive functions (Tuckness 2002a).
If we compare Locke’s formulation of separation of powers to the later ideas of Montesquieu, we see that they are not so different as they may initially appear. Although Montesquieu gives the better-known division of legislative, executive, and judicial powers, as he explains what he means by these terms he reaffirms the superiority of the legislative power and describes the executive power as having to do with international affairs (Locke’s federative power) and the judicial power as concerned with the domestic execution of the laws (Locke’s executive power). It is more the terminology than the concepts that has changed. Locke considered arresting a person, trying a person, and punishing a person as all part of the function of executing the law rather than as a distinct function.
Locke believed that it was important that the legislative power contain an assembly of elected representatives, but as we have seen the legislative power could contain monarchical and aristocratic elements as well. Locke believed the people had the freedom to create “mixed” constitutions that utilize all of these. For that reason, Locke’s theory of separation of powers does not dictate one particular type of constitution and does not preclude unelected officials from having part of the legislative power. Locke was more concerned that the people have representatives with sufficient power to block attacks on their liberty and attempts to tax them without justification. This is important because Locke also affirms that the community remains the real supreme power throughout. The people retain the right to “remove or alter” the legislative power (Two Treatises 2.149). This can happen for a variety of reasons. The entire society can be dissolved by a successful foreign invasion (2.211), but Locke is more interested in describing the occasions when the people take power back from the government to which they have entrusted it. If the rule of law is ignored, if the representatives of the people are prevented from assembling, if the mechanisms of election are altered without popular consent, or if the people are handed over to a foreign power, then they can take back their original authority and overthrow the government (2.212–17). They can also rebel if the government attempts to take away their rights (2.222). Locke thinks this is justifiable since oppressed people will likely rebel anyway and those who are not oppressed will be unlikely to rebel. Moreover, the threat of possible rebellion makes tyranny less likely to start with (2.224–6). For all these reasons, while there are a variety of legitimate constitutional forms, the delegation of power under any constitution is understood to be conditional.
Locke’s understanding of separation of powers is complicated by the doctrine of prerogative. Prerogative is the right of the executive to act without explicit authorization from the law, or even contrary to the law, in order to better fulfill the laws that seek the preservation of human life. A king might, for example, order that a house be torn down in order to stop a fire from spreading throughout a city (Two Treatises 2.159). Locke defines it more broadly as “the power of doing public good without a rule” (2.167). This poses a challenge to Locke’s doctrine of legislative supremacy. Locke handles it by explaining that the rationale for this power is that general rules cannot cover all possible cases, that inflexible adherence to the rules would be detrimental to the public good, and that the legislature is not always in session to render a judgment (2.160). The relationship between the executive and the legislature depends on the specific constitution. If the chief executive has no part in the supreme legislative power, then the legislature could overrule the executive’s decisions based on prerogative when it reconvenes. If, however, the chief executive has a veto, the result would be a stalemate between them. Locke describes a similar stalemate in the case where the chief executive has the power to call parliament and can thus prevent it from meeting by refusing to call it into session. In such a case, Locke says, there is no judge on earth between them as to whether the executive has misused prerogative, and both sides have the right to “appeal to heaven” in the same way that the people can appeal to heaven against a tyrannical government (2.168).
The “appeal to heaven” is an important concept in Locke’s thought. Locke assumes that people, when they leave the state of nature, create a government with some sort of constitution that specifies which entities are entitled to exercise which powers. Locke also assumes that these powers will be used to protect the rights of the people and to promote the public good. In cases where there is a dispute between the people and the government about whether the government is fulfilling its obligations, there is no higher human authority to which one can appeal. The only appeal left, for Locke, is the appeal to God. The “appeal to heaven,” therefore, involves taking up arms against your opponent and letting God judge who is in the right.
In his Letter Concerning Toleration, Locke develops several lines of argument intended to establish the proper spheres for religion and politics. His central claims are that government should not use force to try to bring people to the true religion and that religious societies are voluntary organizations that have no right to use coercive power over their own members or those outside their group. One recurring line of argument that Locke uses is explicitly religious. Locke argues that neither the example of Jesus nor the teaching of the New Testament gives any indication that force is a proper way to bring people to salvation. He also frequently points out what he takes to be clear evidence of hypocrisy: those who are so quick to persecute others for small differences in worship or doctrine are relatively unconcerned with much more obvious moral sins that pose an even greater threat to their eternal state.
In addition to these and similar religious arguments, Locke gives three reasons that are more philosophical in nature for barring governments from using force to encourage people to adopt religious beliefs (Works 6:10–12). First, he argues that the care of men’s souls has not been committed to the magistrate by either God or the consent of men. This argument resonates with the structure of argument used so often in the Two Treatises to establish the natural freedom and equality of mankind. There is no command in the Bible telling magistrates to bring people to the true faith and people could not consent to such a goal for government because it is not possible for people, at will, to believe what the magistrate tells them to believe. Their beliefs are a function of what they think is true, not what they will. Locke’s second argument is that since the power of the government is only force, while true religion consists of genuine inward persuasion of the mind, force is incapable of bringing people to the true religion. Locke’s third argument is that even if the magistrate could change people’s minds, a situation where everyone accepted the magistrate’s religion would not bring more people to the true religion. Many of the magistrates of the world believe religions that are false.
Locke’s contemporary, Jonas Proast, responded by saying that Locke’s three arguments really amount to just two, that true faith cannot be forced and that we have no more reason to think that we are right than anyone else has. Proast argued that force can be helpful in bringing people to the truth “indirectly, and at a distance.” His idea was that although force cannot directly bring about a change of mind or heart, it can cause people to consider arguments that they would otherwise ignore or prevent them from hearing or reading things that would lead them astray. If force is indirectly useful in bringing people to the true faith, then Locke has not provided a persuasive argument. As for Locke’s argument about the harm of a magistrate whose religion is false using force to promote it, Proast claimed that this was irrelevant since there is a morally relevant difference between affirming that the magistrate may promote the religion he thinks true and affirming that he may promote the religion that actually is true. Proast thought that unless one was a complete skeptic, one must believe that the reasons for one’s own position are objectively better than those for other positions.
Jeremy Waldron, in an influential article, restated the substance of Proast’s objection for a contemporary audience. He argued that, leaving aside Locke’s Christian arguments, his main position was that it was instrumentally irrational, from the perspective of the persecutor, to use force in matters of religion because force acts only on the will and belief is not something that we change at will. Waldron pointed out that this argument blocks only one particular reason for persecution, not all reasons. Thus it would not stop someone who used religious persecution for some end other than religious conversion, such as preserving the peace. Even in cases where persecution does have a religious goal, Waldron agrees with Proast that force may be indirectly effective in changing people’s beliefs. Much of the current discussion about Locke’s contribution to contemporary political philosophy in the area of toleration centers on whether Locke has a good reply to these objections from Proast and Waldron.
Some contemporary commentators try to rescue Locke’s argument by redefining the religious goal that the magistrate is presumed to seek. Susan Mendus, for example, notes that successful brainwashing might cause a person to sincerely utter a set of beliefs, but that those beliefs might still not count as genuine. Beliefs induced by coercion might be similarly problematic. Paul Bou-Habib argues that what Locke is really after is sincere inquiry and that Locke thinks inquiry undertaken only because of duress is necessarily insincere. These approaches thus try to save Locke’s argument by showing that force really is incapable of bringing about the desired religious goal.
Other commentators focus on Locke’s first argument about proper authority, and particularly on the idea that authorization must be by consent. David Wootton argues that even if force occasionally works at changing a person’s belief, it does not work often enough to make it rational for persons to consent to the government exercising that power. A person who has good reason to think he will not change his beliefs even when persecuted has good reason to prevent the persecution scenario from ever happening. Richard Vernon argues that we want not only to hold right beliefs, but also to hold them for the right reasons. Since the balance of reasons rather than the balance of force should determine our beliefs, we would not consent to a system in which irrelevant reasons for belief might influence us.
Other commentators focus on the third argument, that the magistrate might be wrong. Here the question is whether Locke’s argument is question-begging or not. The two most promising lines of argument are the following. Wootton argues that there are very good reasons, from the standpoint of a given individual, for thinking that governments will be wrong about which religion is true. Governments are motivated by the quest for power, not truth, and are unlikely to be good guides in religious matters. Since rulers hold so many different religions, if only one is true then my own ruler’s views are likely not true. Wootton thus takes Locke to be showing that it is irrational, from the perspective of the individual, to consent to government promotion of religion. A different interpretation of the third argument is presented by Tuckness. He argues that the likelihood that the magistrate may be wrong generates a principle of toleration based on what is rational from the perspective of a legislator, not the perspective of an individual citizen. Drawing on Locke’s later writings on toleration, he argues that Locke’s theory of natural law assumes that God, as author of natural law, takes into account the fallibility of those magistrates who will carry out the commands of natural law. If “use force to promote the true religion” were a command of natural law addressed to all magistrates, it would not promote the true religion in practice because so many magistrates wrongly believe that their religion is the true one. Tuckness claims that in Locke’s later writings on toleration he moved away from arguments based on what it is instrumentally rational for an individual to consent to. Instead, he emphasized testing proposed principles based on whether they would still fulfill their goal if universally applied by fallible human beings.
- Filmer, Robert, Patriarcha and Other Writings, Johann P. Sommerville (ed.), Cambridge: Cambridge University Press, 1991.
- Hooker, Richard, 1594, Of the Laws of Ecclesiastical Polity, A. S. McGrade (ed.), Cambridge: Cambridge University Press, 1975.
- Locke, John, Works, 10 volumes, London, 1823; reprinted, Aalen: Scientia Verlag, 1963.
- –––, 1690, An Essay Concerning Human Understanding, Peter H. Nidditch (ed.), Oxford: Clarendon Press, 1975.
- –––, 1689, Letter Concerning Toleration, James Tully (ed.), Indianapolis: Hackett Publishing Company, 1983.
- –––, 1689, Two Treatises of Government, P. Laslett (ed.), Cambridge: Cambridge University Press, 1988.
- –––, 1693, Some Thoughts Concerning Education; and On the Conduct of the Understanding, Ruth Grant and Nathan Tarcov (eds.), Indianapolis: Hackett, 1996.
- –––, Political Essays, Mark Goldie (ed.), Cambridge: Cambridge University Press, 1997.
- –––, An Essay Concerning Toleration and Other Writings on Law and Politics, 1667–1683, J.R. Milton and Phillip Milton (eds.), Oxford: Clarendon Press, 2006.
- Montesquieu, 1748, The Spirit of the Laws, Anne Cohler, Basia Miller, and Harold Stone (trans. and eds.), Cambridge: Cambridge University Press, 1989.
- Proast, Jonas, 1690, The Argument of the Letter concerning Toleration Briefly Consider’d and Answered, in The Reception of Locke’s Politics, vol. 5, Mark Goldie (ed.), London: Pickering & Chatto, 1999.
- –––, 1691, A Third Letter to the Author of …, in The Reception of Locke’s Politics, vol. 5, Mark Goldie (ed.), London: Pickering & Chatto, 1999.
- Pufendorf, Samuel, 1672, De Jure Naturae et Gentium (Volume 2), Oxford: Clarendon Press, 1934.
- Aaron, Richard, 1937, John Locke, Oxford: Oxford University Press.
- Armitage, David, 2004, “John Locke, Carolina, and the Two Treatises of Government”, Political Theory, 32: 602–627.
- Arneil, Barbara, 1996, John Locke and America, Oxford: Clarendon Press.
- Ashcraft, Richard, 1986, Revolutionary Politics and Locke’s Two Treatises of Government, Princeton: Princeton University Press.
- –––, 1987, Locke’s Two Treatises of Government, London: Unwin Hyman Ltd.
- Butler, M.A., 1978, “Early Liberal Roots of Feminism: John Locke and the Attack on Patriarchy”, American Political Science Review, 72: 135–150.
- Casson, Douglas, 2011, Liberating Judgment: Fanatics, Skeptics, and John Locke’s Politics of Probability, Princeton: Princeton University Press.
- Chappell, Vere, 1994, The Cambridge Companion to Locke, Cambridge: Cambridge University Press.
- Creppell, Ingrid, 1996, “Locke on Toleration: The Transformation of Constraint”, Political Theory, 24: 200–240.
- Colman, John, 1983, John Locke’s Moral Philosophy, Edinburgh: Edinburgh University Press.
- Cranston, Maurice, 1957, John Locke, A Biography, London: Longmans, Green.
- Davis, Michael, 2014, “Locke’s Political Society: Some Problems of Terminology in Two Treatises of Government”, Journal of Moral Philosophy, 11: 209–231.
- Dunn, John, 1969, The Political Thought of John Locke, Cambridge: Cambridge University Press.
- –––, 1980, “Consent in the Political Theory of John Locke”, in Political Obligation in its Historical Context, Cambridge: Cambridge University Press.
- –––, 1990, “What Is Living and What Is Dead in the Political Theory of John Locke?”, in Interpreting Political Responsibility, Princeton: Princeton University Press.
- –––, 1991, “The Claim to Freedom of Conscience: Freedom of Speech, Freedom of Thought, Freedom of Worship?”, in From Persecution to Toleration: the Glorious Revolution and Religion in England, Ole Peter Grell, Jonathan Israel, and Nicholas Tyacke (eds.), Oxford: Clarendon Press.
- Farr, J., 2008, “Locke, Natural Law, and New World Slavery”, Political Theory, 36: 495–522.
- Franklin, Julian, 1978, John Locke and the Theory of Sovereignty, Cambridge: Cambridge University Press.
- Forde, Steven, 2001, “Natural Law, Theology, and Morality in Locke”, American Journal of Political Science, 45: 396–409.
- –––, 2011, “‘Mixed Modes’ in John Locke’s Moral and Political Philosophy”, Review of Politics, 73: 581–608.
- Forster, Greg, 2005, John Locke’s Politics of Moral Consensus, Cambridge: Cambridge University Press.
- Goldie, Mark, 1983, “John Locke and Anglican Royalism”, Political Studies, 31: 61–85.
- –––, 2015, “Locke and America”, in A Companion to Locke, Matthew Stuart (ed.), London: Wiley Blackwell.
- Grant, Ruth, 1987, John Locke’s Liberalism: A Study of Political Thought in its Intellectual Setting, Chicago: University of Chicago Press.
- –––, 2012, “John Locke on Custom’s Power and Reason’s Authority”, Review of Politics, 74: 607–629.
- Hoff, Shannon, 2015, “Locke and the Nature of Political Authority”, Review of Politics, 77: 1–22.
- Harris, Ian, 1994, The Mind of John Locke, Cambridge: Cambridge University Press.
- Hirschmann, Nancy J. and Kirstie Morna McClure (eds.), 2007, Feminist Interpretations of John Locke, University Park, PA: Penn State University Press.
- Macpherson, C.B., 1962, The Political Theory of Possessive Individualism: Hobbes to Locke, Oxford: Clarendon Press.
- Marshall, John, 1994, John Locke: Resistance, Religion, and Responsibility, Cambridge: Cambridge University Press.
- –––, 2006, John Locke, Toleration, and Early Enlightenment Culture, Cambridge: Cambridge University Press.
- Numao, J.K., 2013, “Locke on Atheism”, History of Political Thought, 34: 252–272.
- Herzog, Don, 1985, Without Foundations, Ithaca: Cornell University Press.
- Horton, John and Susan Mendus (eds.), 1991, John Locke: A Letter Concerning Toleration in Focus, New York: Routledge.
- Kendall, Willmoore, 1959, John Locke and the Doctrine of Majority Rule, Urbana: University of Illinois Press.
- Nozick, Robert, 1974, Anarchy, State, and Utopia, New York: Basic Books.
- Pangle, Thomas, 1988, The Spirit of Modern Republicanism, Chicago: University of Chicago Press.
- Parker, Kim Ian, 2004, The Biblical Politics of John Locke, Waterloo, ON: Wilfrid Laurier University Press.
- Pasquino, Pasquale, 1998, “Locke on King’s Prerogative”, Political Theory, 26: 198–208.
- Pitkin, Hanna, 1965, “Obligation and Consent I”, American Political Science Review, 59: 991–999.
- Roover, Jakob De and S. N. Balagangadhara, 2008, “John Locke, Christian Liberty, and the Predicament of Liberal Toleration”, Political Theory, 36: 523–549.
- Ryan, Alan, 1965, “John Locke and the Dictatorship of the Proletariat”, Political Studies, 13: 219–230.
- Seagrave, Adam, 2014, The Foundations of Natural Morality: On the Compatibility of Natural Law and Natural Right, Chicago: University of Chicago Press.
- Seliger, Martin, 1968, The Liberal Politics of John Locke, London: Allen & Unwin.
- Simmons, A. John, 1992, The Lockean Theory of Rights, Princeton: Princeton University Press.
- –––, 1993, On The Edge of Anarchy: Locke, Consent, and the Limits of Society, Princeton: Princeton University Press.
- Sreenivasan, Gopal, 1995, The Limits of Lockean Rights in Property, Oxford: Oxford University Press.
- Stanton, Timothy, 2011, “Authority and Freedom in Locke”, Political Theory, 39: 6–30.
- Strauss, Leo, 1953, Natural Right and History, Chicago: University of Chicago Press.
- Tarcov, Nathan, 1984, Locke’s Education for Liberty, Chicago: University of Chicago Press.
- Tate, John William, 2013, “‘We Cannot Give One Millimetre’? Liberalism, Enlightenment, and Diversity”, Political Studies, 61: 816–833.
- Tierney, Brian, 2014, Liberty and Law: Permissive Natural Law, 1100–1800, Washington, DC: Catholic University of America Press.
- Tuckness, Alex, 1999, “The Coherence of a Mind: John Locke and the Law of Nature”, Journal of the History of Philosophy, 37: 73–90.
- –––, 2002a, Locke and the Legislative Point of View: Toleration, Contested Principles, and Law, Princeton: Princeton University Press.
- –––, 2002b, “Rethinking the Intolerant Locke”, American Journal of Political Science, 46: 288–298.
- –––, 2008, “Punishment, Property, and the Limits of Altruism: Locke’s International Asymmetry”, American Political Science Review, 102: 467–480.
- –––, 2010, “Retribution and Restitution in Locke’s Theory of Punishment”, Journal of Politics, 72: 720–732.
- Tully, James, 1980, A Discourse on Property, John Locke and his adversaries, Cambridge: Cambridge University Press.
- –––, 1993, An Approach to Political Philosophy: Locke in Contexts, Cambridge: Cambridge University Press.
- Tunick, Mark, 2014, “John Locke and the Right to Bear Arms”, History of Political Thought, 35: 50–69.
- Udi, Juliana, 2015, “Locke and the Fundamental Right to Preservation: on the Convergence of Charity and Property Rights”, Review of Politics, 77: 191–215.
- Van der Vossen, Bas, 2015, “Locke on Territorial Rights”, Political Studies, 63: 713–728.
- Vernon, Richard, 1997, The Career of Toleration: John Locke, Jonas Proast, and After, Montreal and Kingston: McGill-Queens University Press.
- –––, 2013, “Lockean Toleration: Dialogical Not Theological”, Political Studies, 61: 215–230.
- Waldron, Jeremy, 1988, The Right to Private Property, Oxford: Clarendon Press.
- –––, 1993, “Locke, Toleration, and the Rationality of Persecution” in Liberal Rights: Collected Papers 1981–1991, Cambridge: Cambridge University Press, pp. 88–114.
- –––, 2002, God, Locke, and Equality: Christian Foundations of Locke’s Political Thought, Cambridge: Cambridge University Press.
- Wolfson, Adam, 2010, Persecution or Toleration: An Explication of the Locke-Proast Quarrel, Plymouth: Lexington Books.
- Wood, Neal, 1983, The Politics of Locke’s Philosophy, Berkeley: University of California Press.
- –––, 1984, John Locke and Agrarian Capitalism, Berkeley: University of California Press.
- Woolhouse, R.S., 2007, John Locke: A Biography, Cambridge: Cambridge University Press.
- Wootton, David, 1993, “Introduction” to Political Writings by John Locke, London: Penguin Books.
- Yolton, John, 1958, “Locke on the Law of Nature”, Philosophical Review, 67: 477–498.
- –––, 1969, John Locke: Problems and Perspectives, Cambridge: Cambridge University Press.
- Zuckert, Michael, 1994, Natural Rights and the New Republicanism, Princeton: Princeton University Press.
- The Works of John Locke, 1824 edition; several volumes, including the Essay Concerning Human Understanding, Two Treatises of Government, all four Letters on Toleration, and his writings on money.
- John Locke’s Political Philosophy, entry by Alexander Moseley, in the Internet Encyclopedia of Philosophy
- John Locke Bibliography, maintained by John Attig (Pennsylvania State University).
- Images of Locke, at the National Portrait Gallery, Great Britain.
Play, Sport and Arts
Children and even adults train their bodies and brains for real-life situations through playing. Through the act of playing, children acquire many new skills that contribute to their growth and development, such as cooperation, decision-making, and an improved ability to think and act creatively. According to a report by Kenneth R. Ginsburg, “play is important to healthy brain development.” Patterns and connections made between nerve cells and neurons in the brain are stimulated and influenced by the activities children engage in, such as play. Children should be encouraged to play because it can be extremely constructive to the overall development of their brains, as well as effective in forming new connections in their brains. This development influences “fine and gross motor skills, language, socialization, personal awareness, emotional well-being, creativity, problem-solving and learning ability,” all key building blocks for children’s futures. Children should therefore be encouraged to play, and to continue playing throughout their lives.
Playing also prompts children to use their brains in creative and imaginative ways. This not only develops and strengthens connections in their brains but also allows them to experience aspects of the world they might not otherwise encounter. These “other-worldly” experiences, so to speak, can be accomplished through children’s creative and imaginative processes, in which they often create fictitious or “make-believe” worlds in games. These games allow children to play and think creatively together. Psychologist Dr. Sandra Shiner says this about fantasy games: “we should encourage this in our children because creative thinkers must first fantasize about ideas before they can make these ideas reality.”
Games that children have created usually have sets of rules that the players are expected to follow. These rule-making collaborations through play not only teach children how to come up with ideas and rules logically, but also teach them how to interact with each other, communicate, and learn to socialize and work in a group. Studies have also shown that “while in free play children tended to sort themselves into groupings by sex and color”. For many years, most anthropologists paid little attention to the significance of human play. Only recently have anthropologists recognized that human play has a massive impact on human behavior and therefore needs to be studied. The act of playing is now viewed by many in the field of anthropology as a universal practice and one that is significant to the understanding of human cultures.
Play Among Children in the United States
Play is demonstrated and encouraged in the United States preschool system. In the U.S., it is common for parents to send their children to preschools, where they interact with other children of the same age and learn important social skills. Parents are encouraged to send their children to preschool so that they can learn ways of play and interaction that will be important skills as they grow older and begin to integrate into society. Preschool, and the idea of play in this context, is beneficial to young children because it teaches the life skill of sharing, as well as many others such as friendship, patience, and acceptance of others. Not only does preschool teach necessary life skills to children, it can also be good for their health. For example, children with special needs can go to preschool for therapeutic benefits, like the development of fine motor skills, relationship practice, creative thinking, and above all an opportunity for fun. Many schools devoted to special-needs children use a technique called floor-time, which, at its core, is play-time. This one-on-one play time with an adult is a great way for special-needs children to explore specific areas of interest and develop a sense of self-worth they might not otherwise have.
Gender Differences in Social Play in Early Childhood
Gender differences within children’s play are not consistent over time. Studies of preschool children found that girls typically develop social and structured forms of play at a younger age than boys, while boys display more solitary play. “During solitary play, children (ranging from ages three to 18 months) are very busy with play and they may not seem to notice other children sitting or playing nearby. They are exploring their world by watching, grabbing and rattling objects, and often spend much of their time playing on their own. Solitary play begins in infancy and is common in toddlers. This is because of toddlers’ limited social, cognitive, and physical skills. However, it is important for all age groups to have some time to play by themselves”. Boys typically catch up to girls at the next developmental stage, when associative and cooperative play is the primary focus. There are a number of reasons female children have an advantage when it comes to social play. Play involves communication, role taking, and cooperation. Socio-cognitive skills, such as language and theory of mind, are acquired at an earlier age by females. Within the first year, females show stronger social orientation responses, stronger facial recognition, and more eye contact. These skills translate into social competence with peers. Another reason females may appear to have a higher quality of play may be related to gendered toys. A study showed that both male and female children displayed the greatest play complexity when they played with stereotypically female toys, compared to neutral or stereotypically male toys.
Activities in Adulthood
Throughout childhood, play is essential for children’s enculturation. As humans mature into adults, the idea of playing seems to fade. Leisure activities of intrinsic value are vital for physical and mental health, for attaining a sense of fulfillment in life, and for overall happiness. The importance of play and leisure is constantly overlooked when combating stress. Stress has been shown to have negative effects on areas ranging from national health to the economy. A Canadian study estimated that 12 billion dollars are lost every year due to stress, and 43% of Americans report suffering from job-related burnout. These problems are often attributed to the lack of vacation time in America, or in other words, a lack of leisure and play.
When adults are given the time to engage in play such as sports, hobbies, dancing, or various other recreational activities, there are distinct benefits to their quality of life. In his article “Why Play”, Jim Rice writes about how adults often feel like victims of time, weighed down by the obligation to spend all of their time productively. What most adults don’t realize is that play and leisure are productive in the sense that they are important for overall wellbeing and reduce stress, which in the long run increases productivity in other areas. Adults can play by doing outdoor activities like hiking or boating, interacting with friends, or going out for drinks and dancing.
A sport is a type of play that is governed by a set of rules. In most cases it is considered to be physically exertive and competitive. In almost all forms of sport, the competition determines a winner and a loser. Physical exertion can vary dramatically across sports, as between golf and football. Sports tend to combine play, work, and leisure. Less physically exertive sports tend to constitute play, while more exertive and athletically demanding sports often serve as work for athletes and owners of sports teams. Sports are generally defined by conflict, where the goal is for one opponent or team to win; in some cultures, however, conflict resolution is the goal. This type of play, because it is defined by set rules, creates a virtual world where participants can create heroes and enemies, and suffer and celebrate, all without real-world consequence. Athletes and teams exist not only to oppose each other, but to represent themselves as players and their team.
Sport in Culture
Sports hold a variety of different meanings across cultures. Soccer originated in Europe and has been around for thousands of years; some of the earliest forms are documented as an after-war ritual in which the head of an enemy was used instead of a ball. In a study of soccer in Brazil, Dr. Janet Lever finds that organized sports aid political unity and allegiance to the nation-state. In Brazil, every city is home to at least one professional soccer team. Interestingly, different teams tend to represent different cultural groups, such as different economic levels and ethnicities. This creates allegiances at a local level, but the team that represents a city in the national championships will have the support of all the people of that city, thus building political unity on a greater level. Having this firm support for the representation of teams gives people something to identify with, and support for a team can be taken as support for the nation. This is even more so in World Cup championships, when the entire country of Brazil unites to support the national team. Brazilian fans like to boast about the ‘Penta’, since Brazil is the only country to have won the World Cup five times: 1958, 1962, 1970, 1994, and 2002. Soccer unifies the country of Brazil, but it is important to note that sports do not always create unity. Sports bring out an aggressive and competitive side in all athletes. They also highlight inequalities, such as gender segregation between men and women. Brazilian women have been described as far less interested in soccer and, as a result, as remaining separate from men in that respect, though this claim is contested: many Brazilian women are among the most passionate soccer fans in the world, and Brazil’s women’s national team is among the most successful in the sport.
In the Republic of Serbia, playing soccer is thought to enhance certain qualities: aggressiveness, competitiveness, physical strength, coordination, teamwork, discipline, and speed. These are all qualities traditionally attributed to the male gender. It is common practice for men to watch games together in their homes, in front of local stores, and so on. Women are not welcome at these gatherings and are often asked to leave before the game starts or not to come until the game is over. This male-dominated aspect of Serbian culture parallels the gender segregation between men and women found in Brazil. Another inequality that soccer highlights is the difference between upper-class and lower-class society. In Brazil, soccer was practiced especially by the poor throughout the 20th century. Many poor boys dream of becoming the next Pelé or Ronaldo, and in doing so they promote the national soccer culture even more. Dreaming about soccer is a motivation for millions of poor children who want to escape their poverty. In Brazilian life, it is not uncommon for soccer culture to have a bigger influence than politics or economics.
American football has many widely televised games that draw a large audience every year. These include the Super Bowl, which draws millions of television viewers each year in early February, and college football’s multiple BCS (Bowl Championship Series) bowl games, which occur on and around New Year’s Day.
The National Football League (NFL) is the organization of 32 professional teams across the United States, and it is becoming more popular globally. In the 2008–09 NFL season, the New Orleans Saints and the San Diego Chargers played a regular-season game in London, and the league has scheduled at least one internationally located game each year since. This was done to help make the NFL more global and expand the culture of the game. Football is a violent game, with hits at the professional level often characterized by two outstanding athletes running at full speed into one another, the sense of danger neutralized by the pads and helmets they wear for protection. The aggressive nature of football is a major contributor to its popularity, with toughness and perseverance as its chief virtues. However, scientific research revealing the health issues suffered by players later in life, including CTE and dementia, has led to concern about whether the negative impact of playing the game outweighs the positives.
This universal sport has been the center of cultural life in the Dominican Republic, connecting Dominicans to one another, as well as to the rest of the Caribbean, for over 100 years. This small Caribbean island has been home to many of the best players in Major League Baseball in the United States, where the major league is run and the World Series is played. Major League heroes such as Sammy Sosa, Pedro Martinez, and David Ortiz all excelled at the sport in the Dominican Republic in order to reach their ultimate goal of playing professionally in the United States. Since the Dominican Republic is an economically poor country, little boys and teenagers alike work their entire lives to try to become the best baseball players they can be; players such as David Ortiz return home to promote the game, helping children live their dreams and showing that baseball can be a way to see other cultures while playing the game they love. This constant competition is a great source of entertainment, which is why baseball games are a huge part of Dominican culture. Most women are forbidden to partake in this sport. This rule is not so much sexism as an attempt to keep women safe, as most Dominicans believe that baseball is dangerous for women because of the hard ball that can be hit anywhere at any given moment. Although it is not a law that women cannot play baseball, they traditionally do not partake in this cultural pastime; for women, a similar sport called softball was created, played with a bigger, softer ball. For men in the Dominican Republic, baseball is not only a great hobby and way to relate to each other, but also an opportunity to strive to become the best athletes they can possibly be. Baseball has also been a great part of American culture and has helped shape the history of sport, and as the game spread to Asia, American influence drove its growth there.
While symbols and language are used in a wide variety of sports, they are absolutely essential to the game of baseball. In a full nine-inning game of baseball, there is almost never a moment of complete silence on the field. In American culture, certain gestures and hand motions are used by the third-base coach to communicate a specific action for the batter to perform (swing, bunt, take, etc.). Hand gestures and voice commands are used by players on the field to communicate position changes, the number of outs, and tips about where the batter typically hits the ball. The most important use of symbols and hand gestures in baseball comes from the catcher and is directed towards the pitcher. These gestures are an essential aspect of the game because they tell the pitcher what pitch to throw next (curveball, fastball, slider, etc.). Commands in baseball come from different members of the team (third-base coach, first-base coach, head coach, players, etc.) depending on the culture and the country in which the game is being played. For example, in American culture hitting signs come from the third-base coach, and catching signs come from the head coach. Just as baseball cannot be played without a ball or a bat, it cannot be played without the use of communication, symbols, and gestures. In addition, baseball is a mainstream sport in the United States that, unlike others such as American football or soccer, is played without a clock. This allows players to showcase their skill without having to worry about time management, making for tense displays of skill.
Equally popular in the United States is basketball, which also has a growing global following. Basketball is played with five players on each team, with the main goal being to score points by successfully throwing the ball through a hoop. Basketball is played widely throughout the United States and is popular with both men and women. It is also one of the most popular and widely viewed sports in the world.
The battle for equality in women’s sports has been an ongoing struggle for many years. The WNBA wasn’t started until 1997, but stars such as Sheryl Swoopes, Cynthia Cooper, Lisa Leslie, Diana Taurasi, and Candace Parker made for a rise in popularity. Sheryl Swoopes and Cynthia Cooper led the Houston Comets to wins in the first four WNBA championships, making them the first WNBA dynasty. The WNBA has become so popular that its viewership has topped that of both the NHL and MLB. Title IX had a huge impact on the WNBA because it enabled college basketball players to receive scholarships. Beyond the United States, basketball is also extremely popular in many other countries and has been a huge part of the globalization of nations. The United States has had the largest impact on globalization within the basketball world because it has the largest and most popular professional leagues. The NBA is the largest professional basketball league in the U.S. and makes a continuous effort to interact with basketball leagues around the world. This year the NBA hopes to continue its globalization efforts with 12 teams set to play 10 games in 10 different international cities. The NBA hopes that these games will encourage more international cities to form basketball teams and leagues.
A sport with old roots in combat, boxing is prevalent in most parts of the world, including the Americas, Europe, and Asia. The origins of boxing are prehistoric, and the sport has evolved over many years with waxing and waning popularity. Within the United States, there are currently four major sanctioning bodies for the sport of boxing: the World Boxing Organization (WBO), the International Boxing Federation (IBF), the World Boxing Association (WBA), and the World Boxing Council (WBC). The popularity of boxing varies across countries due to its ties with the culture of each area. Examples of countries with a strong cultural connection to boxing include Mexico, Russia, and the United States, with a good majority of famous champions coming from these regions. However, many famous stars of professional boxing have become international icons across cultures, such as Mike Tyson, Muhammad Ali, and Roberto Duran, to name a few. While the viewership of professional boxing has dwindled since the 2000s, amateur boxing remains very popular across cultures, seeing as it is an Olympic sport.
Gaming and eSports
Not traditionally seen as sports due to their lack of physical exertion, video games are becoming increasingly popular in the mainstream. Known as eSports, they are a form of competition facilitated by an online device, usually played in the comfort of a home, but recently also hosted in arenas. Most commonly, eSports take the form of organized, multiplayer video game competitions, particularly between professional players. eSports have major events and competitions in developed countries, in which gamers meet to contest their abilities against one another for a cash prize. eSports have become most popular in the Americas, Asia, and Europe, and most notably in South Korea. eSports prizes can exceed $10,000 for team play and around $6,000 for individual play. Many of these tournaments are even covered by sports networks such as ESPN. Alongside cash rewards, players and teams are sponsored by companies such as Razer, Red Bull, Logitech, Geico, and Monster Energy, in the same manner as NASCAR drivers, pro snowboarders, and American soccer teams. Games such as Counter-Strike, League of Legends, Dota 2, Overwatch, PUBG, and StarCraft are the leading games played. More games are catching on in the competitive scene, such as fighting games like Smash and Tekken.
“Multiplayer Online Battle Arena” games (MOBAs), such as League of Legends and Dota 2, have lent themselves especially well to practices resembling the traditional image of sports. The 2016 International tournament had a prize pool of $20,770,640. The same year, the League of Legends World Championship was tuned in to by more viewers than that year’s Super Bowl. Communities around these types of games have coined the term ‘eSports’ (electronic sports) and have earned the right for professional players to be granted sports visas by the US government. It is commonplace for players within MOBA communities to self-identify their game as an eSport; however, this opinion is not shared by the general public.
This is, however, an industry with its fair share of hardships. In South Korea, it is not uncommon for aspiring professional gamers to train fourteen to sixteen hours a day, and the handful of people who succeed earn substantially lower salaries than the national average. In Korea, top professional players can make $35,000–$40,000 a year, while players on smaller teams average between $9,000 and $10,000 a year. That said, it is not uncommon for eSports players to flourish and have long, successful careers, whether on a team or individually, depending on age group and game type.
Positive Effects of Getting Involved in Sports
Becoming involved with sports is beneficial in numerous ways. It promotes a healthy lifestyle, team-building opportunities, strength, perseverance, leadership, and discipline. It can also increase confidence on and off the field. These are all important characteristics that will help children grow into independent, driven individuals. There is research supporting the theory that teenage girls in particular who are involved in sports may lead safer and more productive lifestyles.
Studies have found that athletes get better grades and perform better on standardized tests. For example, swimming is one of the top academically performing sports, along with tennis and track and field. The habits of these sports carry over into school performance. Girls set goals that help them stay focused and in line with their physical and emotional health. Coaches and parents develop subconscious expectations for the athletes that keep them from getting involved with activities they shouldn’t be involved in.
A study by Russell R. Pate, PhD, Stewart G. Trost, PhD, Sarah Levin, PhD, and Marsha Dowda, DrPH, found that approximately 70% of male students and 53% of female students reported participating in one or more sports teams in school and/or nonschool settings; rates varied substantially by age, sex, and ethnicity. Male sports participants were more likely than male nonparticipants to report fruit and vegetable consumption on the previous day and less likely to report cigarette smoking, cocaine and other illegal drug use, and trying to lose weight. Compared with female nonparticipants, female sports participants were more likely to report consumption of vegetables on the previous day and less likely to report having had sexual intercourse in the past three months.
Participation in sports has been linked to success in math and science, subjects traditionally dominated by men. One explanation may be that sports help girls resist traditional gender scripts that limit persistence and competition in these areas. To explore this, Crissey, Pearson, and Riegle-Crumb contrast the effects of sports on boys and girls in academic domains stereotyped as masculine (physics) and feminine (foreign language). They further differentiate sports characterized as masculine or feminine to identify activities that may reinforce or challenge traditional gender norms. Overall, participation in sports had positive effects: compared to non-participants of the same sex, girls were more likely to take physics and a foreign language, while boys were more likely to take a foreign language. The sports categories reveal divergent patterns for boys and girls, where masculine sports are associated with physics for girls and foreign language for boys, while feminine sports are associated only with foreign language for girls. These findings confirm prior research that sports improve academics, but suggest that sports do not have uniform effects. While some sports may counteract traditional femininity and help girls persist in masculine domains, other sports may not provide the same benefits (Crissey, S. R., Pearson, J., and Riegle-Crumb, C., “Gender Differences in the Effects of Sports Participation on Academic Outcomes”).
When one is highly involved in sports, overall health becomes a top priority as well. Learning time-management skills is key when every day consists of six hours of school, sports, family time, and homework; otherwise the body would be exhausted and worn down and would not be able to perform at its best. When people are in better physical shape, it is much more motivating to develop healthy eating habits that will last a lifetime. Developing healthy eating habits gives people more energy to perform well in sports and exercise, and also helps prevent diseases such as heart disease and diabetes. Therefore, exercise through sports must be accompanied by a healthy overall lifestyle.
Benefits of Team Sports
Working with other athletes on a team creates a tight-knit community, and one learns to trust the other players and to rely on the help of others in order to obtain a common goal. The environment in a functioning team is collaborative and non-threatening, allowing for more open and focused learning. Skills such as combined effort and compromise are learned far more quickly in competition. These sorts of connections can last beyond the field of play and carry into athletes’ social and business lives. For example, how one plays and communicates on the field can reflect how one communicates at a business meeting and works to obtain one’s goals. Working in teams can help a group overcome difficult challenges because the minds and work of a group can be more powerful and successful than those of one person. Teams allow for diversity of thought on how to approach a challenge and sustain the group through constant support. Sports can make athletes more health-conscious, motivated, focused, and energetic. Being part of a team can enable athletes to communicate much better with others, consider others’ needs, solve critical-thinking problems, and become leaders.
There is currently an epidemic in America regarding overeating and unhealthy lifestyles. One major concern is the rising obesity rate in young children. Children are growing up without knowledge of correct diet and exercise, and by the time they mature, they have become involved in an unhealthy lifestyle. In comparison to other countries, America is falling behind in the movement towards a healthier world. Other reasons for this recent spike in obesity in America include the rapid development of technology over the past century, which has almost completely removed physical exercise from our daily routines unless one makes a purposeful effort to exercise. Some examples of technology that are blamed are the invention of the automobile, which has taken away the need to walk from one place to another, and the invention of the assembly line in factories, which makes, packs, and ships food in a faster and more efficient way. These developments have also made it possible to stock grocery store shelves with inexpensive, high-calorie, good-tasting food produced in bulk. Such technological developments have allowed America as a society to grow in population while at the same time damaging the health of its own citizens.
Healthy living and physical fitness are very important aspects of our daily lives. Being physically fit not only helps people live healthy lives, it also helps people live longer. If you are able to keep up an active lifestyle throughout your life, you will be able to slow the onset of osteoporosis as well as reduce chronic disease risk. Also, people who make physical activity and exercise a part of their daily lives when they are young are more likely to keep it in their lives as they grow older and to benefit from it throughout their lifespan. Physical activity is defined as any movement that expends energy. Exercise is a subset of physical activity that is structured and planned. While many children engage in physical activity, usually by playing with their friends and in team sports, the amount of physical activity they get usually declines as they grow into adolescents. In America today, over 20% of children are overweight or obese. On top of that, inactivity and poor diet contribute to 300,000 deaths per year in America. Significant health benefits can be obtained from 30 minutes of moderate physical activity performed at least three days per week; this can even be split into three 10-minute sessions, which yield the same results as one 30-minute session. More frequent exercise, however, will certainly lead to more rapid improvements.
There are numerous positive effects of participating in sports. First of all, being involved in sports instills many values and disciplines, both in the sport you are playing and in life. Playing sports helps you develop teamwork with your teammates: everyone on the team is striving for a common goal (to win), and it takes unselfish team play to have success. Success doesn’t come easy, and in order to succeed in sports and in life, you will need to work your hardest to achieve your goals. When you practice dedication and hard work in a sport, you realize how much work it takes to succeed, making it more likely that you will succeed later in life. In short, sports are a very positive influence.
Sport and Globalization
It can be observed that over the decades, sport has become a vehicle for driving the effects of globalization, the process by which businesses or other organizations develop international influence or start operating on an international scale. This process affects the environment, culture, political systems, economic development and prosperity, and human physical well-being in societies around the world.
International teams and leagues and the participation in mega-sporting events fuel a cornered market that strays away from the small-community ideology of sport and turns it into an industry. Some of the largest and most easily recognizable examples are the Olympics and the World Cup. These events have become so incredibly massive by following marketing and business strategies rather than merely investing in the thrilling splendor of professional competition.
“Many of the accumulation strategies utilized by sports managers around the world were generally conceived in the United States” — this supports the perspective that the globalization of sport is really an Americanization of the international industry. Sport as a market means that several large corporate entities have a share in the process of creating the global product. These include a small group of mass global telecommunication networks, world-renowned sports brands, transnational corporations, and international sports management firms. These groups determine the scheduling and production of large global sporting events, take advantage of cheap overseas labor to produce sports equipment and apparel, promote certain leagues and teams internationally to sell merchandise and franchises, and control athletes’ careers by deciding when and where they compete.
Lucie Thibault of Brock University mentions the diverse athlete origins that can be traced in professional leagues worldwide, the new participation in international sports events by countries that had not participated before, and the increase in the number of athletes competing in sports that break barriers of gender, religion, and climate, all as positive implications of sports globalization. However, she also touches on the solidly negative truths of globalizing the sports industry: the luring of athletes out of their homelands to compete for foreign countries, the overseas exploitation of third-world peoples in the production of sportswear and equipment, and the ecological footprint of mega-sports events.
In today’s market, “Media have the expertise and technical equipment to produce sport into a package that can easily be consumed by spectators”, and cultures around the world take part. The direction of international mega-sport and its effect on global economies, culture, and environment may or may not be taking a turn for the worse. Some suggest that it creates more harm than good. These effects are certainly not massively advertised, but that does not mean they do not exist. The Olympics, the World Cups, the Paralympics, and the Commonwealth Games are only a few examples of major events that fuel this industry and will continue to be produced by TNCs, global telecommunications networks, and major sportswear and equipment companies. Athletes, teams, and leagues will be controlled, showcased, and used to promote events and brands in an effort to fuel the perpetually massive profits created by this form of globalization.
Culture Sharing Through International Competition
International competition provides a unique platform for social statements to be made. Radio, television, and streaming technology allow athletes on a world stage to communicate values directly to people all around the world. Like federation or league competitions, international competitions attract a large, sustained viewer base; international competitions, however, have a larger global viewership. ‘Mega-events’, such as the Olympic Games, are “...important points of reference for processes of change and modernisation within and between nation-states...”.
The Olympic Games in Mexico City in 1968 provided a platform for Black athletes from the United States to draw attention to continuing racism in the country. Tommie Smith and John Carlos, gold and bronze medalists respectively in the 200m, stood on the podium shoeless, in black socks, Smith wearing a black scarf. Each raised a black-gloved fist into the air, a symbol of both black power and black unity. The white silver medalist in the 200m, Peter Norman of Australia, showed solidarity with the cause by wearing an OPHR (Olympic Project for Human Rights) pin. Smith and Carlos were condemned by the International Olympic Committee and received death threats. On returning home, they were praised by the African-American community.
Art stems from playful creativity, something that all human beings possess. Keep in mind that those activities described as “art” are different from free play because they abide by certain rules. Art includes sports, dancing, theater arts, etc. Artistic rules direct particular attention to, and provide standards for evaluating, the form of the activities or objects that artists/players produce. Ultimately, though, art is subjective and governed by the culture within which, and for which, it is produced.
Anthropologist Alexander Alland defines art as “play with form producing some aesthetically successful transformation-representation” (1977, 39). In Alland’s definition, form is the appropriate restriction(s) put upon the type of play being organized. For example, a painting is a two-dimensional form. “Aesthetically successful” means the creator of the piece of art and/or the audience “experiences a positive or negative response” to the art piece. Something aesthetically poor in quality will elicit an unsuccessful response, an emotion of indifference towards the art piece from an audience or even from the author. The simplest way to understand the term transformation-representation is to notice that the symbolic meaning of anything runs deeper than its surface appearance, and that culture guides what is appropriate and valued. Since Alland suggests that transformation and representation depend on one another, the two should be referred to together. Transformation-representation is another way of talking about metaphor: a drawing is a metaphoric transformation of experience into visible marks on a two-dimensional surface, and a poem metaphorically transforms experience into concentrated and tightened language.
Art by intention includes objects that were made to be art, such as Impressionist paintings. Art by appropriation, however, consists of all the other objects that “became art” because at a certain moment certain people decided that they belonged to a category of art. Most often the category was formed by Western society and the objects or activities may not necessarily fit in that same category in another society’s culture.
Anthropologist Shelly Errington argues that in order to transform an object into art, someone must be willing to display it. When Western society sees an item that fits its definition of art, the item is placed on the “art” market. Errington also noted that the Western view of art tends to select objects that are ‘portable, durable, useless for practical purposes in the secular West, and representational.’ A problem arises when the Western definition of art begins to exploit certain cultures for objects that offer ‘exotic’ allure. The demand for ‘exotic’ art in Western society, for example, is strong, yet such art is typically fashionable decoration one moment and out of fashion the next. This come-and-go fashion can threaten international economic policies and resource extraction projects involving the artifact-bearing society. Like play, art challenges its contributors by providing alternative realities and the opportunity to comment on or change worldly views.
Impressionism was a term used to describe paintings that looked unfinished because they showed visible brushstrokes. The paintings depicted everyday life. In 1874, impressionist painters organized an exhibition in Paris that launched the impressionist movement. They called themselves the Anonymous Society of Artists. The most notable members were Claude Monet, Edgar Degas, and Camille Pissarro.
Post-Impressionism started in the late 1880s. Post-Impressionist artists painted in a style similar to that of the Impressionists, but they added new ideas. They did not only paint what they saw in everyday life; they used more symbolism. The most famous members of this movement were Paul Gauguin, Georges Seurat, Vincent van Gogh, and Paul Cézanne. However, they worked separately and did not see themselves as part of a movement.
Cubism was a style of art pioneered by Pablo Picasso and Georges Braque in 1907. It took ordinary shapes and broke them up into abstract geometric forms. Cubism played with perspective and form and broke the long-established rules of traditional Western art, reinvigorating the art scene of the time. It is considered by many to be one of the first forms of modern art.
Dada and Surrealism
Described as anti-art, Dadaism challenged what could be considered art. Dada was a response to the chaotic times at the start of WWI, an avant-garde rebellion throughout Europe; it was anti-war and a social critique of the conformity of the time. It quickly spread to Berlin, where it was facilitated by the Bauhaus art school, along with modernism and surrealism, and where the movement flourished. Possibly the most famous piece of Dada art was Marcel Duchamp's 'Fountain', a porcelain urinal on a pedestal with the name 'R. Mutt' inscribed on it. 'Fountain' was an attempt by Duchamp to shift art from a creative process to an interpretive process, a key part of the Dada movement. Other well-known works, such as those by Salvador Dali, who created 'The Persistence of Memory' featuring melting clocks and the surreal short film 'Un Chien Andalou', are still widely studied today. Dadaism and Surrealism hold international acclaim and are an integral part of art history.
Realism
Realism is an art form that has been around for many years; it consists of realistic, precise drawings or paintings that nearly replicate an image, and it can be found in famous paintings such as the Mona Lisa. This form of art is much more time-consuming and detail-oriented.
Merging and breaking down fine art and pop culture icons, pop art was a satirization of the mass-production culture of America. Pop art was a stark contrast to the serious and ultra-creative abstract art of the time; it was playful and ironic and didn't take itself seriously. Pop art called into question the images we knew and played off them.
Music is the use of rhythmic sounds and silences to form songs. There are many different styles and genres, ranging from lyrical to instrumental, with countless subgenres in between. Much like art, types of music may be defined by a certain era (such as classical or classic rock) or by the contents of the song, such as pop or metal. Each song is crafted by the songwriter to convey a certain meaning or range of meanings, but it is up to the listener to discern what the music means to them, and that meaning can vary greatly from person to person.
Music is defined as the organization of sounds and silence. The creation of music dates back almost as far as human history. The earliest discovered piece of music, an ancient Hurrian melody known as “Hurrian Hymn No. 6”, was found in the ruins of the city of Ugarit, Syria, and dates back to the 14th century BCE. Discoveries such as this recorded music, along with ancient instruments such as bone flutes, indicate to historians that people in many cultures throughout time have incorporated the creation and expression of music into their cultures. In ancient times, the Greeks used basic pipes to create phonic sounds and compose tunes, although it wasn't until later that music became true entertainment for people in their everyday lives. In the Medieval era, people began to record music through writing. The Church devoted huge amounts of money to the writing of Gregorian chants, named for the Pope of the time, and churches served as a valuable space for recording and preserving music. With the invention of the printing press, however, more secular music became available to the public. As time went on, new technological advances allowed music, both new and old, to be shared across cultures. Music has proven throughout history to be a way for humans to share stories and express emotion. Other creatures also use music to portray an expression or communicate an idea; music can come from something as small as a bird or as large as a whale. Music differs vastly across cultures and adapts to the people who listen to, compose, and create it. In fact, 20th- and 21st-century composers push the envelope of musical development even further to ask the question “what is music?” The answer, most often, is “everything.”
Song and Words
Although the major discussion of text and literature appears in the chapter on [Communication and Language], the anthropological study of song, or words as art, warrants its own discussion here in the context of play and art. A quote to keep in mind when studying cultural arts such as music and dance is: "There is nothing more notable in Socrates than that he found time, when he was an old man, to learn music and dancing, and he thought it was time well spent." - Michel de Montaigne
In colloquial terms, classical music is considered any Western music written or created up to about 1820, though the term is still applied to music created today. Classical music is generally divided into seven eras: Gregorian (from the era of Pope Gregory in the 600s), Medieval (500-1400), Renaissance (1400-1600), Baroque (1600-1750), Classical (1730-1820), Early Romantic, and Romantic (1780-1910). Each era has its own stylistic components that set it apart from the others. While all 'classical' music is generally considered one and the same, in reality the variations among the eras make each unique and distinguishable.
Modern Day Influences
Through this chain of development, from Baroque into Romantic and then into modern music, what we hear in movies and video games would not be the same without all these previous influences. Many modern-day composers, such as John Williams and Hans Zimmer, were heavily influenced by the Romantic era. One well-known example is found in John Williams's film scores for the Star Wars series. One Romantic composer Williams drew from heavily was Gustav Holst: "The Planets has been mined for any number of sci-fi spectaculars, and Mars in particular has been a favorite of film composers including Williams, whose stormtroopers march to a distinctly Martian beat". Another composer Williams was influenced by was Wagner, who in turn influenced Holst. The film score closest to a Wagner piece is Darth Vader's iconic theme: "Where the ordinary filmgoer most conspicuously hears Wagner in Star Wars, is in the brass-laden theme for Darth Vader and his evil Empire—which is distinctively reminiscent of Wagner's music for his majestic Valkyries". Classical music's influence on Star Wars is only one example of many; nearly every modern-day composer draws ideas and influences from the music found throughout these seven eras.
Electronic Dance Music
Electronic Dance Music, also known as EDM, is an umbrella term for dance music that is electronically composed by a DJ (disc jockey) and often played in clubs, raves, and festivals. It emerged from the disco era in the 1980s. Part of the attraction to EDM at parties and on dance floors is that "the chemical and musical object of electronic dance music is capable of the virtualization of its immediate environment and the adjustment of the subject’s everyday life". EDM is often associated with drug use, as many (though by no means all) of its listeners partake in both legal and illegal drugs. Some of the most popular drugs at raves are Molly (ecstasy), Adderall, cocaine, alcohol, and marijuana. Due to the increase in drug use at raves and music festivals where EDM is most popular, anti-rave culture and laws have emerged. "As EDM cultures continue to expand globally it is necessary to adopt methodological approaches that are rooted in the local and at the same time engage with the global. Such approaches would be more fruitful and would offer a more accurate picture than focusing on one specific site of research". It is very common to see mostly young adults listening to and going out to places that play EDM. Raves are often held at night, when most people are going to sleep, so "ravers slip into an existential void where the gaze of authority and the public do not penetrate". Electronic dance music has also been integrated into other genres by artists like Radiohead, LCD Soundsystem, Suicide, Afrika Bambaataa, David Bowie, and many more.
Indie music is music that is produced without the help of major music labels. Indie is short for "independent", and indie artists usually do not associate themselves with big-name labels; it is more of a "do-it-yourself" genre. Many bands, not only in the US but all over the world, pride themselves on being able to make it big without the help of a major label. Indie bands also tend to focus more on the love of their music than on simply trying to make money. While indie music is becoming more popular with the current generation, independent artists were first recognized in the 1980s, with acts such as the B-52's and later Nirvana. These bands, which have made a distinct name for themselves, were once considered "college radio music" and made their careers through the independent music scene.
American Folk Music
In American culture, folk music refers to the style that emerged in the 1960s. Typically folk artists use acoustic instruments and vocals to convey messages about current events, often with lyrics communicating the artist's views on social or political issues. The creation and national circulation of this music was extremely important in connecting the public to its own current events and creating a dialogue about what was going on. The folk genre exploded in the 1960s with artists like Bob Dylan and Joan Baez. Before that explosion into popular culture, folk music thrived with artists such as Woody Guthrie and Ramblin' Jack Elliott. Modern folk artists include The Tallest Man On Earth, Bon Iver, and Fleet Foxes.
It is nearly impossible to discuss folk music without mentioning Carl Sandburg. Born in 1878 in Illinois, Sandburg spent much of his early career traveling and working as a laborer on railroads. During this time, he acquired a vast variety of songs and tunes. Sandburg became the first musician to be considered a "folk singer" because he performed the songs he had accumulated during his work, compiling his favorites into what he called the American Songbag. One of his favorites from this collection was the song and symbol of the legend of John Henry. John Henry symbolized the power of the black worker and the struggle against machine labor and nonblack laborers; for black culture at this time, this was a big deal. Carl Sandburg was one of the first musicians to openly support black workers. Through the song and symbol of John Henry, Sandburg was able to revolutionize folk music and spread a powerful message against the mistreatment of blacks, especially in the workplace.
Rock and Roll
Rock and roll is a form of music that evolved in the United States in the late 1940s and early 1950s. Rock and roll incorporates elements from many genres, including doo-wop, country, soul, and gospel, but it is most closely tied to the blues; a well-known example of this is Elvis Presley's music. It is from the blues that rock gains its earliest chord progressions and lyrical style, and many artists have gone on to cover and recreate the sounds of early blues musicians such as Son House, Robert Johnson, Lead Belly, and B.B. King (the "King of the Blues"). This style spread to the rest of the world, causing a huge impact on society. Rock and roll is characterized by an emphasized offbeat (the 2nd and 4th beats of a four-four time signature), guitar use, electronically amplified instrumentation, and lyrics that range widely in subject matter. Since its creation in the late 1940s, many new genres of rock and roll have emerged, including heavy metal, punk rock, soft rock, alternative, and indie.
New York was an important center for several styles of popular music. Swing dance bands and the crooners who sang with them helped keep American optimism and spirit alive through World War II. Rock music developed out of the number of different styles that existed in the forties and became a style of its own in the early fifties. In many ways, the popularity of rock music among both black and white musicians and fans aided the movement toward racial integration and mutual respect among people of any ethnic background. Music served as a unifying common ground among citizens, especially during political, social, or economic unrest; it was something that everyone, despite their lifestyle, could relate to and enjoy.
Most often, rap is known as the reciting of rhymes to a rhythmic beat, but its roots extend far beyond that. The origins of rap music can be traced all the way to West Africa, where those who possessed this verbal talent were held in high regard by those around them. Later, when these "men of words" were brought to the New World, African and American musical traditions mixed together to create a new sound. Throughout history, rap has manifested in various forms of verbal acrobatics involving rhyme schemes, including schoolyard and nursery rhymes as well as double Dutch jump rope chants. Modern-day rap music finds its immediate roots in the toasting and dub-talk elements of reggae music. However, reggae was not immediately accepted and thus evolved into something else entirely; one of the first artists to adopt this style was Kool Herc.
Early raps involved reciting improvised rhymes over instrumental or percussive sections of popular songs, often incorporating common slang. Rap grew throughout the seventies, evolving into a musical form of verbal skill and free expression. It quickly became popular among a younger crowd, giving them an outlet that allowed free expression of individuality. Today, rap continues to be popular in cultures around the world, evolving and molding itself to fit every culture that it reaches. An example of the globalization of rap music is the group Orishas. Orishas originated in Havana, Cuba, and often incorporates traditional salsa and rumba beats into its music. The members of Orishas emigrated to Paris, France, and are now extremely popular in Europe as well as in their native Cuba. Rap has also recently become popular with the youth of Japan, where its rhythmic vocal characteristics translate well to spoken Japanese. The "gangster life" connotation evolved from the American dream - the ability to work your way up from the ghetto to the high life of a rap superstar. The lyrics often include acts of violence, drugs, extortion, and sex. This subculture, created in the early 90's, has flooded mainstream music, topping charts on popular television stations and filling the radio. Despite some controversial aspects of the rap music scene, it continues to grow, influencing music across the world. African hip-hop and rap groups have recently started creating more music, reclaiming the rap genre for the region where it is thought to have originated thousands of years ago.
Though "gangster rap" is the wider known as "rap", it is not the only type. With rap comes many subcultures, and some of these move away from this "gangster" mentality. You do not have to be a gangster, or from the ghetto to be a rap artist. People often do not think there is more content than sex, drugs, and violence in rap music because most mainstream rap and rap videos have led the majority of people to believe that is what rap is about. Rap originally stemmed as a form of protest for people who didn't have a voice before. South African youth used it as a way to rebel from the apartheid and oppression, which broke open in 1976. In parts of Africa (mainly in West Africa) rap as we know it has become very popular, but with a twist. African rap artists use many American influences as to their production and song structure, but have very different vocal styles, instrumentations, and lyrics. This blend of Western rap and African music is sometimes called "High-life". Rap is just a genre of music - it goes a lot deeper than what is heard on the radio.
Rap plays a role in cultures all over the world. Rap artists all over the world, and even in different parts of a town or neighborhood, have their own style and originality. Although many rappers 'bite', or copy, the style of another artist, they want to be known for having their own style and being unique in their own way. In the United States, rap can be extremely influential. Rap artists can develop what is known as 'beef' with one another, a hate relationship or feud arising from problems within the rap culture. They sometimes rap about their enemies as a way of retaliating without escalating into violence. However, this can sometimes induce violence, and artists can lose their lives. 2Pac (Tupac Shakur) and The Notorious B.I.G. (Christopher Wallace), two of the most well-known rap artists, saw their 'beef' resolve into violence, and both were shot dead in the mid-1990s.
The violence and language in rap music have been a concern of the United States Congress. On September 25, 2007, in a hearing convened by Representative Bobby L. Rush, Democrat of Illinois, lawmakers asked music industry executives about their companies' role in the production of explicit rap, at one point inviting them to read aloud from rap artist 50 Cent's lyrics (lyrics known to be rather explicit). Some parents feel that their children are threatened by the violence in rap music because it makes them devalue life. Congress and society alike are torn between wishing for 'cleaner' music with a more positive message and maintaining artists' freedom of speech.
Hip-hop was born in the late 1970s in New York City as a form of street art. It began in the South Bronx among working-class African-Americans, West Indians, and Latinos. Hip-Hop comprises four main elements: rap (vocals), DJing (playing and technical manipulation of records), graffiti (aerosol art), and b-boying or b-girling (freestyle dancing). These four components were derived from a youthful population trying to represent itself through competitive, innovative, and expressive activities. This type of music has traveled all over the world, and many people in different cultures are now taking the "Hip-Hop" idea from the United States and making it their own. For example, in Dakar, Senegal, artists use Hip-Hop to express political views and the struggles they experience with their government, as discussed in a documentary made by musicians called "Democracy in Dakar". The Hip-Hop of Dakar is on the whole more controversial and political than the Hip-Hop of the United States because of these battles with the government.
Hip-Hop has been compared to the blues of the modern era in the sense that it is a form of expressing pain and struggle. The struggle is what makes Hip-Hop different across the globe: different parts of the world have different pains and struggles, and these can be heard and highlighted in the songs. On the surface all Hip-Hop culture may look and sound similar, but one can notice huge differences in the lyrical content and in the structure of the beat.
In countries that are more politically aware, Hip-Hop artists rap about the political struggles their countries are experiencing, as in Senegal. In the United States, you can hear lyrics about both the struggle to survive in tough neighborhoods and political messages. Hip-Hop artists incorporate elections, war, economic struggle, and oppression into their lyrics. Some of the more mainstream artists may not have as many controversial lyrics as some underground artists, but the messages are still there.
Ian Condry is a cultural anthropologist who studied Japanese hip-hop for a year and a half beginning in 1995. His work showed how Japanese hip-hop originally came from the United States but has since created its own identity. Japanese hip-hop culture is similar to that of the United States in that people go to clubs to listen to well-known performers; however, in Tokyo a show will start at midnight and end at 5 am. In these clubs, people will not only dance but also do business deals. Another difference is that well-known hip-hop artists live at home with their parents and live the rest of their lives just like everyone else, much different from the United States, where hip-hop artists are some of the richest and most famous people in the country.
Japanese dancers and artists consider certain nightclubs to be the “genba” (or “actual site”) where Japanese hip-hop is established. These nightclubs are places where hip-hop is performed, consumed, and then transformed through local language and through the society of the clubs. They are also a place for the mingling of dancers, artists, writers, and music company people.
Country music was founded in the early 1920s and descended from folk music; the style came primarily from the southern United States. Early country produced two of the most influential artists of all time: Johnny Cash and Hank Williams. Although their impact was not fully recognized until after their deaths, both surely shaped the way lyrics are written and songs are performed across genres. In 2006, country music sales increased by 17.7 percent to 36 million. The genre has stayed steady for decades, reaching 77.3 million adults every day on the radio. Country music is big not only in the United States but all over the world, in countries like Australia and Canada. Country has many styles and sounds that have been put into categories: hillbilly boogie, bluegrass, folk, gospel, honky tonk, rockabilly, country soul, country rock, outlaw, country pop, neo-country, truck driving country, and alternative country.
A cappella is a style of purely vocal performance, distinct in that there is no instrumental accompaniment. Many times when people sing, it is done along with a piano, guitar, or other instruments; the a cappella style, by contrast, uses no additional instrumental performance. A cappella literally translates to 'in the manner of the chapel', as music was traditionally performed without instruments in the church.
While services in the Temple in Jerusalem included musical instruments, traditional Jewish religious services after the destruction of the Temple do not include musical instruments. The use of musical instruments is traditionally forbidden on the Sabbath out of concern that players would be tempted to repair their instruments, which is forbidden on those days. (This prohibition has been relaxed in many Reform and some Conservative congregations.) Similarly, when Jewish families and larger groups sing traditional Sabbath songs known as zemirot outside the context of formal religious services, they usually do so a cappella, and Bar and Bat Mitzvah celebrations on the Sabbath sometimes feature entertainment by a cappella ensembles. During the Three Weeks use of musical instruments is traditionally prohibited. Many Jews consider a portion of the 49-day period of the counting of the omer between Passover and Shavuot to be a time of semi-mourning and instrumental music is not allowed during that time. This has led to a tradition of a cappella singing sometimes known as sefirah music.
"Keep the Whole World Singing" (barbershop.org) is the motto of the Barbershop Harmony Society. Affiliated with countries world wide such as Finland, Australia, New Zealand, Germany, Ireland, South Africa, Sweden, The Netherlands, and Great Britain, the purpose of the Barbershop Society is to celebrate harmony in the barbershop style, promoting fellowship and friendship among men of good will.
One can find barbershop songs from a variety of time periods and genres, which gives everyone the opportunity to relate to the barbershop style. Examples include Justin Timberlake's "SexyBack", Michael Jackson's "Thriller", BYU's "Super Mario Bros. Melody", and "Come Fly With Me" as performed by the Realtime quartet.
A common misconception is that barbershop-style music is only written for and sung by men. Female barbershop quartets, sometimes called "beautyshop quartets", also exist and many thrive. A society for four-part female groups is Sweet Adelines International. One of the more familiar "pop" groups is The Chordettes, made famous by their songs "Mr. Sandman" and "Lollipop".
Cajun, Creole, and Zydeco Music
The influences of Cajun-style and Creole music, which evolved into Zydeco, a more contemporary form, can only be found in southwest Louisiana: a blend of European, African, and Amerindian styles. This music is unique in its qualities and is claimed to have come from Nova Scotia in 1755, as the Acadians brought with them music with French origins. The stories told through the music come from European stories that were altered to fit the lifestyles and life experiences of the south of the New World. Over time and through the 19th century the music was transformed by the influence of African rhythms, blues, and improvisational singing, as well as many singing styles and techniques derived from Native Americans. The fiddle was used for songs and dances. Barry Ancelet, author of the monograph Cajun Music: Its Origins and Development, describes how a cappella singing was also used for dance, supplying the rhythm and beats through clapping and stomping.
Jamaica: The Mento
In 1951 the first Jamaican recording studio opened. A new type of music was formed by combining European and African folk dance music. Disc jockeys such as Clement Dodd (the "Downbeat") and Duke Reid (the "Trojan") traveled around the island playing their music; the people of the Jamaican ghettos were unable to afford bands, so they hired people like Dodd and Reid. By the end of the 1950s the music had transformed into a blend of Caribbean music and New Orleans "rhythm'n'blues". As time went on, the music shifted toward a dominant bass instrument and became ska.
Ska is a musical genre that originated in the 1950s in Jamaica and led to the creation of rocksteady and reggae. The history of ska is typically divided into three parts, or waves. The first wave is the original ska scene that developed in Jamaica. The second is the scene that developed in Britain in the 1970s; this music differed from the original Jamaican ska in that it usually possessed more well-developed compositions, faster tempos, and a less polished aesthetic, and it also drew influences from punk rock. The Specials, a 2 Tone ska band from Coventry, England, are typically seen as the archetypal second-wave ska band. The third wave of ska involved artists from most of the Western world; this period, beginning in the late 1980s, was the first time ska had become popular in the United States. Bands from the third wave include Streetlight Manifesto, Reel Big Fish, and Mustard Plug. http://www.sfgate.com/entertainment/article/A-brief-history-of-ska-3221107.php
Reggae music is a genre that originated in Jamaica in the late 1960s and speaks to the struggle fought by grassroots warriors. Worshiping the offbeat, reggae often accents the second and fourth beats of each bar. To Jamaicans, reggae means "the king's music," and the king to whom it refers was Haile Selassie, the emperor of Ethiopia. Reggae groups used modern amplified instruments, including lead and rhythm guitars, piano, organ, drums, and electric bass guitar, along with Jamaican percussion instruments (Charlton, Katherine. "Rock Music Styles"). Common themes found on reggae records include peace, love, religion, poverty, and injustice. A familiar example of a popular rock 'n' roll song exhibiting the reggae-style riddim is the Beatles' "Ob-La-Di, Ob-La-Da". The roots of reggae are tied tightly to the Rastafari movement, which sometimes encourages the praise of Jah through the smoking of marijuana.
Western music has greatly influenced the music of the Philippines. The most logical explanation is the historical fact that the Philippines is the oldest Western-colonized Asian country, exposed to two mainstream Western cultures for over three and a half centuries: the Mediterranean, through Spain, and the Anglo-Saxon, through the United States of America.
The classical renditions of Filipino music show a blend of varieties of culture. This is not to say that you won't come across native compositions, but the nuances of Western musical forms such as symphonies, sonatas, and concertos are much used. Filipino music has yielded internationally known composers such as Antonio Molina, Felipe Padilla de Leon, Eliseo Pájaro, and José Maceda, known as the avant-garde composer of the country.
Filipino music is generally played with traditional and indigenous instruments: a zither with bamboo strings and tubular bamboo resonators; wooden lutes and guitars; and the git-git, a wooden three-string bowed instrument. You may also come across Filipino communities with their own folk songs sung at special events, such as the hele, a lullaby; the talindaw, a seafaring song; the kumintang, a warrior song; and the kundiman, a love song.
Korean pop music has been trending in South Korea since the 90s but did not go global until recent years. Also known as the 'Hallyu Wave', Korean pop has become a worldwide phenomenon, earning top places on US Billboard and iTunes charts. The popular Korean pop group BTS broke headlines by ranking No. 1 in worldwide albums on the Billboard charts in the second week of October 2016. K-pop, the shorter term, has its roots embedded in Korean society since the early 20th century, with a popular genre of music called trot that has a sound similar to foxtrot. It wasn't until the 90s that pop music in Korea transformed, incorporating American styles such as techno, rap, and rock; the formation of boy bands and girl bands also became a staple. This new style of K-pop gained popular interest in eastern Asian countries such as China, Taiwan, Singapore, Vietnam, and Japan.
The culture around K-pop has always been a fascinating and controversial one. Large entertainment companies hold auditions or scout out young people ranging from 10 to 20 years old. These teens are trained in dancing, singing, and entertainment skills for years until they are fit to debut in a group. Unlike Western musical groups, where many bands have a lot of free will over their content, most Korean pop groups are limited in the content they create: the entertainment companies that manage them usually have teams that create the music, choreograph the dances, and even control the appearance of the members. Most of these groups consist of all males or all females, and many K-pop idols are not full Korean or Korean at all; in the past several years entertainment companies have scouted and held auditions in other countries looking for foreign potential, which truly places the industry on a global level.
Literature is a significant part of cultures around the world. A lot of time is spent reading and discussing important written works, books that connect readers to different time periods and social spheres. Novels have much to teach their readers; themes of friendship, love, and loyalty are touched on often, with the effect of reaching readers and developing different perspectives. Books written about the past may be warnings of the importance of learning from mistakes or a way for a reader to connect to someone from a different culture. The study of literature has a great effect on society and the development of new ideas based on what we know about the past.
In his writing, Tolkien tackles global and timeless themes such as the human condition, conservation, and the corruption of power. Unlike many writers, Tolkien disliked allegory, and instead wrote in such a way that he encapsulated overarching ideologies in human history rather than specific points in time (in contrast to, say, George Lucas's "Star Wars" Empire serving as an analogy for Nazi Germany). Tolkien is also regarded for his thorough descriptions of nature in his stories, which make his epic "The Lord of the Rings" difficult to grasp for any but the most devoted readers.
In his later years, Tolkien taught at Oxford University alongside fellow author C.S. Lewis (author of The Chronicles of Narnia), with whom he created a writing club called The Inklings. Encouraged by his academic colleagues, he invented the fantasy world of Middle-earth, the languages of the Elves, and characters like Aragorn (called Strider) and Tom Bombadil. In crafting Middle-earth, Tolkien combined influences from English folklore and mythology with Norse mythology and biblical lore. Tolkien spent more than ten years writing the primary narrative and the appendices to The Lord of the Rings, during which he always had the support of the Inklings, most of all his close friend Lewis. Tolkien's novels, such as The Hobbit, often include coming-of-age elements and follow the Hero's Journey plot. His legacy is survived by his son Christopher, who has spent his life editing his father's posthumously published works, such as The Silmarillion and The Children of Húrin.
J.K. Rowling, known most notably for her young adult fantasy series Harry Potter, has been an influential literary figure since the series found fame. The Harry Potter franchise has been a global cultural phenomenon, and the novels have been popular among children, teens, and adults, becoming one of the best-selling book series in history.
In creating the fantasy world of Harry Potter, Rowling drew much inspiration from various mythologies, particularly in regard to the fantastical creatures inhabiting the world, and on European folklore of witchcraft and magic.
These features of mythology and folklore make the Harry Potter series accessible to a wide audience familiar with similar stories and myths that have been a feature of western European and American culture for centuries. They are also made accessible to a wide audience by virtue of their readability, for in being young adult novels they are simple enough for children to read, but complex enough to hold the attention of adults as well.
The Harry Potter novels have thus permeated popular culture, and have been a wellspring of literary value in that they have encouraged many younger readers in literary pursuits and impacted child and teen readership over the past twenty years.
The novels can be considered a cultural influence not just in their immense popularity, but in the values they promote that are generally considered positive by western cultural standards in regard to child development of morality. Fables, mythologies, sagas, and other fantastical stories have long been used as tools to encourage behavior in children (and even adults) that adheres to cultural norms of morality—this trend is continued by the Harry Potter series, whose reach ensures that the cultural virtues presented in the novels are instilled in numerous young readers.
Charles Michael Palahniuk has written a handful of popular and unique novels. He has created novels that are categorized as horror without containing supernatural events; his books are filed into the horror genre because his characters are shaped by society and go through traumatic events that lead to their self-destruction. Chuck's books create an invisible window through which people can look and see what society can cause people to do. It has been said that Chuck Palahniuk was influenced by the minimalist Tom Spanbauer; it was Spanbauer's writing workshops that got Chuck to start his novels, such as his first one, "Invisible Monsters". This was initially rejected by publishers because it was viewed as too disturbing; people can find the horrible truths that Chuck reveals too much for the common person in society.
Plato's discussions of rhetoric and poetry are both extensive and influential. Taught among middle school, high school, and college students, his works set the agenda for the subsequent tradition, yet understanding his remarks about each of these topics, rhetoric and poetry, presents us with significant philosophical and interpretive challenges. It is not clear why he links the two topics so closely (he suggests that poetry is a kind of rhetoric). Plato's famous statement that "there is an old quarrel between philosophy and poetry" (Republic, 607b5-6) points to a clash of values between the two pursuits. Plato is (perhaps paradoxically) known for the poetic and rhetorical qualities of his own writings, which engage extensively with Homeric poetry such as the Iliad and the Odyssey.
Haruki Murakami is a Japanese writer born in Kyoto. The strong influence of Western culture is often apparent in his writing, one of the characteristics that sets him apart from other writers. Another such characteristic is his many references to classical music within the themes and titles of his works. His works consist mainly of surrealist postmodern fiction. Murakami has a unique way of blending his Japanese heritage with his Western influences, making his work both familiar and foreign to the reader.
Stephen King, one of the most influential horror writers of recent times, has seen his literature cross multiple regions of the world and come over into the film sector as well. The tales he has written, such as IT, Christine, and Pet Sematary, have had lasting impacts on cultural references in recent decades; these iconic books and their film adaptations have endured the test of time, and several films have seen a resurgence of recreation in recent years. His literature changed how supernatural and realistic horrors can be blended to develop a true fear of seemingly normal objects or concepts, creating a strong following and culture.
A recent development in literature is the age of digital publishing and the rise of the e-book (electronic book). Instead of books, newspapers and magazines being printed onto paper, digital publishing has created an environmentally friendly and convenient way to read. The major difference between digital publishing and printed publishing is that in digital publishing there is no physical copy. This means that there is no paper and that no ink is needed to create the product. This is a massive change for literature.
The benefits of this change are convenience and accessibility. With e-books, literature can be accessed on any e-reader, phone, tablet, or laptop, and as such they offer the added convenience of large amounts of reading material in small amounts of space. For example, e-books became very popular on public transportation in Japan: before e-books, small versions of manga, Japanese graphic novels, were carried and read on public transit; now those small versions have been replaced by their e-book counterparts. Accessibility is an important improvement in how readers can get hold of literature. Digital publishing has no limit on how much can be held, unlike libraries or bookstores, which are only going to provide books that are expected to be rented or bought. E-books create a never-ending supply of literature, from huge hits like Harry Potter or Lord of the Rings to the unknown works of a self-publisher. The author Hans Roosendaal summed up this process well when he said that digital publishing "gives authors the ability to increase the visibility of their works or makes it easier for readers to do a database search. The use of it shortens the information cycle." The well-known distributors of e-books are major companies like Apple iBooks, Amazon Kindle, Barnes and Noble Nook, and Google Play Books. The downfall of printed literature can be seen in the decline of libraries and the bankruptcies of major bookstores that haven't adapted to the new world of e-books.
Dance is moving rhythmically to music to increase enjoyment of the experience; when the movement is not set to music, the silence itself is engaged to make a point. Dance can be created by a set of sequenced steps. It is used as a form of expression, social interaction, and presentation in different cultures. Dance may also be regarded as a form of nonverbal communication between humans, and it is also performed by other animals. Different dances require different skill levels; some may be more physically exhausting than others. Regardless of the technique or style, if the proper physics are not taken into consideration, injuries may occur.
Dance in South America
The Argentine Tango originated around 1880 on the periphery of Buenos Aires, Argentina. The dance was popularized in bars, cafés, gambling houses, and brothels; because the original lyrics frequently referred to sex and obscenities, it is logical that its popularization took place in the underground society. During this time period, even dancing in front of one another or touching at all was considered too much, so the tango's close embrace and cheek-to-cheek dancing was considered raunchy. Initially, people of good reputation looked down on the tango and wanted no part in it. This meant that if a man wanted to practice the dance, his only possible partner was another man; men got together and practiced the dance as a way of capturing the attention of women.
Eventually the tango slowly started to catch on in the common areas of boarding houses where immigrants stayed. It took a while to spread, but it eventually caught on after some of the movements were "purified." Even then, the tango was still generally something that the middle and upper classes would keep secret; it was still considered shameful and sinful. It was not until the Argentine Tango made its way to Europe that it was truly accepted in higher society: after it was introduced to Parisian nobility, it became the craze of the time there. When the tango finally came back to Argentina, it was "received as the most beloved son." - Sergio Suppa
Dance in the Philippines
The traditional dances of the Philippines reflect the cultural influences of the Spaniards, Muslims, Indians, Middle Easterners, and Western Europeans. Each region of the Philippines that was influenced by a separate culture developed its own traditional style of dance. Many folk dances were also created to imitate the early lifestyle of the Filipinos and for spiritual purposes such as warding off evil spirits. Some of the most traditional dances of the Philippines are the following:
Muslim Influenced Dance
Towards the end of the 12th century, traders and settlers from Borneo and the Malay Peninsula came to the Philippine Islands and brought Islam to the Filipinos. Today, there are more than 1 million Muslim Filipinos residing in Mindanao and the Sulu Archipelago. When the Spanish came to the Philippines, the Filipino Muslims, also known as Moros, were able to resist being conquered; as a result, their Islamic lifestyle remains largely untouched even to this day, despite the completely different lifestyle of the rest of the Filipino population. There are four main Muslim ethnic groups: the Maranao, Maguindanao, Samal, and Tausug. The traditional dances in this suite make use of bright colors and rhythmic movements that represent the Middle Eastern and Indo-Malaysian influence on the culture; the suite also includes a ribbon dance that was most likely a result of Arabian influence. Thought to be the most difficult Philippine dance is the Singkil of the Maranao, in which a woman of royal blood advertises herself to suitors by gracefully dancing with an umbrella, a fan, or neither, while skillfully moving between bamboo poles. Another dance, inspired by the war between the Muslims and the Christians, is the Maglalatik, which originated in the Laguna province. In this dance, the Moros wear blue pants and the Christians wear red pants; the first half depicts the war over the residue of coconut milk, followed by the reconciliation between the two groups. This suite features specific costumes: the Malong, a tube-like dress that is worn in a variety of ways, and the Kumbong, a traditional headdress. The instruments played in accompaniment with the dancing are the Agong, a brass gong with a knob at its center, and the Kulintang, a collection of brass gongs laid on a wooden frame.
Barrio Fiesta Dance
Great preparation is taken for fiestas and special occasions. Food, music, dance, games, and traditional processions are all part of this traditional occurrence in Filipino villages. If the fiesta is for a wedding celebration, called a Gala (Boholano), it is customary for the bride and groom to arrive with their friends and be entertained by the people who cater to them. The entertainment includes dance and musical performances as well as the clashing of pots, pans, ladles, and utensils to create excitement through noise. It is then tradition for the guests to stick paper money to the bride and groom's clothing right before the final dance, which involves the newlyweds participating in playful chasing. Another popular dance in this suite is the Kalatong, a dance from the province of Batangas that incorporates bamboo pipes used as percussion instruments. The last dance in this suite is the Tinikling, a dance that copies the movements of the long-legged tikling bird as it hops over the traps set by farmers among the rice stalks. When Philippine dancers do this dance, they hop over bamboo poles in complicated and highly coordinated leaps while the poles are clashed together and slapped to the floor beneath them. The Tinikling is a playful courtship dance, as are most indigenous dances, that becomes more complicated as it progresses; it originated on the island of Leyte and is the official Philippine national dance. The costumes in this suite are the Balintawak, a floor-length dress with stiff butterfly sleeves and a vividly colored overskirt that matches the sleeves; the men wear colorful shirts called Camisa de Chinos. Props for these dances usually include an oil lamp called a Tinggoy and wooden clogs called Bakya.
The Maria Clara Dance
Maria Clara is a legendary figure in the Philippines who symbolizes the virtues and nobility of the upstanding Filipina woman. She was the main female character in a literary work by Jose Rizal about the colonization of the Philippines by the Spaniards. A style of dance and dress was created in her honor, and it portrays its Spanish influence. The Maria Clara dress is formal attire made of an intricately designed blouse and a flowing skirt, with a panuelo (a square of natural fibers) worn over the shoulders, while the men wear a Barong Tagalog, a traditional Filipino shirt typically made of pineapple fibers with long sleeves and detailed embroidery. Props for this dance are bamboo castanets and the abanico (Asian fan). This suite consists of many different dances that mean different things to Philippine culture.
The Igorot are a Philippine tribal people living in the central Cordillera area of Northern Luzon. The six different tribes known collectively as the Igorot are the Apayao, Bontoc, Ibaloy, Ifugao, Kalinga, and Kankanay. These peoples prefer to be referred to by their separate tribal names rather than simply as Igorot, the classification ascribed to them by the Spaniards. The tribes have religious beliefs in common that conjoin them to nature, and they honor household gods with special offerings. Dance is performed at their ceremonies as an expression of community harmony, as appeasement to their gods, in honor of their ancestors, to heal sickness, to attain the support of their gods for upcoming wars, to keep bad luck away, to seek deliverance from natural disasters, to ensure a plentiful harvest and pleasant weather, and to celebrate the circle of life. In these dances, women place jars and/or baskets on their heads to demonstrate the role of women in the community as food gatherers and water fetchers. For the men, there is the Manmanok dance, in which they use bright woven blankets to attract the women, and the Takiling, in which the men dance and chant while beating their gangsa, brass gongs, to demonstrate their skill in weapons and hunting.
Dance in the Philippines was greatly influenced by the Spanish during the Spanish regime. Dances and music took on the tempo and style of European dances; for example, the Tinikling and the Itik-Itik acquired the tempo of the jota and polka. Some more examples of dances that Filipinos are known for are:
Pandango Sa Ilaw: A Spanish-influenced dance which requires a good amount of balancing skill, as the dancer must balance three oil lamps: one on the head and one on the back of each hand. This dance originated on Lubang Island, Mindoro.
Cariñosa: The name of this dance describes a woman who is affectionate, friendly, and lovable. The dance involves fans and handkerchiefs used in a flirtatious manner.
Rigodon: This dance originated in Spain and is most commonly performed at formal affairs.
Tinikling: The national folk dance involves a pair of dancers hopping between two bamboo poles, which are held just above the ground and struck together in time with the music. This dance imitates the agility and grace of birds avoiding the bamboo traps set in the fields by rice farmers: the dancers symbolize the birds, displaying their agility through footwork, while the bamboo poles symbolize the traps.
Dance in Zulu and Masai Culture
Dance is a very important part of many African cultures. This is true for the Zulu and the Masai in particular. Both cultures are pastoralist and have many other cultural similarities; despite this, they express their dance very differently. To explain this we will delve into various cultural aspects of Masai and Zulu society in which dance is used, to find societal similarities as well as stylistic dance differences.

To begin we must first look at some societal similarities between the Zulu and the Masai: age sets, raiding traditions, and the importance of cattle. The emphasis of Zulu society was on warfare and raiding. Age sets played a large role in this, as young men were divided into them, and at a certain age set served as raiders and warriors. During raids, Zulu warriors would seize cattle, which were a measure of wealth in their society. Shaka, the uniter of the ancient Zulu nation, gave the Zulu their pride in warfare with his dynasty: military service was mandatory and training was rigorous. He also revolutionized the style of combat with his bullhorns method and his short stabbing spear, which was also used in warrior dances. His constant invasion of other societies is what gave his empire so much power, and it instilled a sense of nationalism in his people. Warriors were chosen by what age set they were in, age sets being groups of people within roughly a ten-year age span; many times these age sets were organized into elders, warriors, and children. In the process of initiation after puberty, women had a special dance that was performed. In Shaka's society cattle were a measure of wealth: if you didn't own cattle you couldn't get married or pay for luxuries. Cattle could be earned by raiding other societies or through outstanding military action. Sacrificing them was also a large part of the society, with sacrifices made for a safe return from battle or in preparation for a successful one.

The Masai, in contrast, considered themselves a purely pastoralist society and consequently placed a lot of emphasis on cattle. They were also a raiding society; although they used hunting as part of initiation ceremonies, it was not a regular occurrence in Masai society. Like the Zulu, the Masai used cattle as a form of wealth. They found cattle so sacred that they would not eat meat from the cow and drink its milk in the same meal, because they saw it as disrespectful to mix things taken from the living with things taken from the dead. The Masai also believed that all cattle were rightfully theirs, given by God, and so felt justified in taking them from other tribes.
Both the Zulu and Masai kingdoms placed an emphasis on war and raiding, so it is natural that they had dances to accompany and portray these actions. Both had a name for their warriors: the Zulu warriors were called Indlamu and the Masai warriors were known as the Moran; in the case of the Zulu, their dance was named after their warriors.

Zulu dance was often characterized by its stomping movements, which had a feeling of heaviness and connection with the earth. One example of this was the Indlamu, or warrior dance, which was performed at weddings along with other dances. Typically the Indlamu was performed in a large group, with the dancers entering two by two. It was performed in unison and in some versions had three sections: the entry and preparation, followed by two routines. There was one leader who gave the cues for when to begin and when to end, usually with a foot stomp. In the version with three sections, the first section was the entry, where the men crouched and moved in a circle around the dance area; the dancers then sat as their leader did a solo. When his solo was done, the leader gave the signal to start the main section of the dance, which was performed in all versions. This final section was performed using a series of stomps in rhythm to the beat of sticks, or in some cases a drum. It also included a series of kicks, which varied between tribes but usually consisted of either a leg thrust straight in front of the dancer or thrust from the front and carried around to the side; in both instances the leg stayed bent. The dress for this occasion was usually traditional: ostrich feathers were tied to the legs below the knees, and in some cases on the upper arms, and the dancers wore loincloths. As they danced they carried their shields and a spear, and they wore a headdress similar in style to a crown.

The Masai also had a warrior dance, called the Adumu. It was a ceremonial dance done for the warriors themselves, to form a trance-like state. This dance, unlike its Zulu counterpart, was not performed for weddings but was instead used as mental preparation, a test of strength and endurance. The dance began with the warriors creating a circular formation. Unlike the Zulu, the Masai warriors started out standing around the outside of the circle, swaying back and forth, and then one or two came to the center to start the dance. They jumped up and down in a straight, rod-like fashion with the goal of coming into a trance-like state; when the person in the middle got tired, he was replaced by someone from the outside of the circle. The rhythm for this dance came from a chant that the warriors forming the edges of the circle sang while the dancers in the middle jumped higher and higher into the air. During the warrior stage of life in which this dance was performed, the Masai wore their hair in long braids. Their traditional clothing was made of red cotton and was very conservative in comparison to the Zulu attire of a loincloth; the cloth covered them from the chest down and was sometimes similar to a dress in appearance.

There is a very obvious contrast in these two styles of warrior dance: the Zulu, with their creation of a connection through the body with the earth, are almost the polar opposite of the Masai, who reach up into the sky with their jumping movements.
The formation of the Masai dance differed from that of the Zulu in that the Zulu had a very militaristic line formation while the Masai stood in a circle. There was also no specified person to begin the Masai dance, while the military leader was the designated beginner in the Zulu version. The setting in which these dances were performed was another difference: the Masai dance was performed as mental preparation and was not intended to be a public event, while the Zulu dance was performed at weddings and other occasions. The beat of the Zulu dance came from sticks rather than from a chant as in the Masai dance. The Masai and Zulu also made very different costuming choices: the Zulu chose to wear loincloths, while the Masai chose to wear long red robes, a stark contrast.
As previously stated, both societies placed an emphasis on cattle. Once a young man had earned enough cattle he could marry, and the wedding ceremony included dancing. This was true of both the Zulu and the Masai.
The Zulu had a different dance that they performed at weddings, called the Inkondlo. This dance was performed as the bride made her entrance: the bride and her bridal party, made up of other girls from her age set, performed it as they came into the village.
The dance began with the bride behind her bridal party, the girls singing the inkondlo wedding song. The party started out in a bent posture and gradually became erect. In some versions, the dancers formed two files circling outward away from one another and wheeled back across the center to form a line at the end of their movement. This portion of the movement was quick and spirited, with movements back and forward. The bridal party started the next section of the dance with the bride and her bridesmaids coming out from behind the party. Once in front, the bride performed a solo to complete the first section. The movements in this section were very proper and pleasant.
The Inkondlo itself was a rhyming poem, which served as the basis for the dance and was performed as part of it. The Masai wedding dance was called a Kayamba, named after the rib-like instrument used in the accompanying music. In the Masai case, the young girls of the tribe were the performers.
The music used a repetitive melody doubled by a chorus, accompanied by a high-pitched bungo horn, with rattles and whistles as minor accompaniments. The kayamba is one of these rattles, made of wood and reeds with little pebbles inside. This music was very dynamic, with its many parts. As the young girls danced they added to the music with bells tied to their ankles, making the dance very rhythmic. The Masai wedding dance would have been more for the entertainment of the wedding party than its Zulu counterpart. The wedding dances of the Masai and Zulu contrast nicely: the Masai dance was rooted in its music and performed as entertainment for the wedding party, while the Zulu dance was a celebratory way of bringing the bride into town that used a simple poem chant. It is interesting to note that both dances were named after the music used in them, the Zulu after the Inkondlo poem and the Masai after the kayamba instrument. The Kayamba music was very dynamic and had many parts, while the Zulu music was very simple, with its one-part chant. The performers of the wedding dances were very different as well: the young girls of the tribe performed the Masai dance, while the bridal party performed the Zulu dance.
Coming of Age Dance
Both men and women in Masai and Zulu culture had age sets, and to move into the next age set there were rituals and ceremonies to take part in. Many of those ceremonies included dancing. In Zulu society, women had a very special ceremony as they became women; the most intriguing Masai initiation ritual comes after the killing of a lion. The Zulu women had a very interesting dance ritual as part of their initiation into womanhood. Part of their initiation was to stay isolated in their hut for a week with only their mother and one friend. After this period they came out and danced. In preparation for the dance they made grass costumes, weaving together grass to make outfits that would later be burned after the ceremony. The friends and sisters of the woman being initiated would also participate in the grass-costumed dance. The final ceremony was full of singing and dancing, and the woman was officially initiated with her friends and sisters. The final act of the ceremony was the burning of the grass clothing, which signaled the step into womanhood. As part of their initiation into manhood, the Masai were required to go on a lion hunt. When they were successful there was a ceremony that involved the Engilkainoto dance, performed for the tribe as a celebration of the feat. The lion conquerors picked female partners to dance with and danced in the middle of a crowd gathered to watch them celebrate; each couple proceeded through the crowd to the center to dance together. The warriors wore ostrich feathers on their heads and carried a spear with the paws or tail of the lion attached. Their female partners wore beaded dresses. Besides the fact that these initiation ceremonies were for different sexes, there were some other contrasts in the dances performed during them. For one thing, the Zulu dance was done by a group of women, as a sort of corps, instead of being a partner dance like the Masai's. The costumes differed in that the Masai wore their warrior dress and decorated their spears with the paws and tail of the lion, while the girls in the Zulu dance wore grass outfits that were burned at the end of the ceremony. The girls in the Masai dance wore beaded clothing, which was much more permanent.
Ethiopia has many different dances depending on the region. The main dance is called Escista and is performed mainly using rapid movements of the shoulders and chest. Another famous dance is the Gurage, which differs in that leg movements are essential; it uses kicking moves that follow the beat being played. Another major dance is the Tigrenga, which requires the participation of a group: the group forms a circle and moves around it according to the beat, and some people may choose to go into the middle of the circle to perform their own moves. These dances are mostly performed at weddings and holiday gatherings. A conclusion that can be drawn from all of this is that the Zulu and Masai use different movements to characterize similar cultural events. Zulu dancers have a very heavy, grounded feeling to their dance, while Masai dancers have a very taut, jumpy feel. Comparing dances about similar aspects of life makes it easier to compare the two styles, and although their expressions of these aspects of life may differ, the things they dance about give us a sense of what is important to them.
Trance State, Dance, and Mayotte Culture
The state of trance is most widely and basically defined as any state of altered consciousness or cognizance that differs from ordinary wakeful awareness; in other words, a trance state is achieved when one’s “physical body” becomes partially or completely dormant while the mind stays awake. During the process of entering the trance state, as well as while actually operating in it, the brain-wave frequency of the individual in trance changes. This change in brain-wave frequency is a response to the altered levels of physical and mental activity. Specifically, entering the trance state is characterized by a considerable shift away from a beta brain-wave state. The human brain is known to have many different brain-wave states, including beta waves, delta waves, theta waves, alpha waves, mu waves, and gamma waves. All of these brain waves are present at all times; however, certain waves are more powerful or heightened during different activities or states of consciousness. For example, beta waves are associated with wakefulness, consciousness, alertness, activeness, and concentration, so when one is awake and engaged during the day this brain wave is the strongest and most heightened, while the other brain waves are pushed to the background or periphery. The beta brain wave is pushed to the background, however, when an individual enters the trance state, at which point the brain’s other frequencies are heightened and move to the foreground.
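For orientation, the conventional EEG frequency ranges are roughly as follows (these approximate figures are standard textbook conventions, added here for reference rather than taken from the passage above): delta waves run below about 4 Hz and dominate deep sleep; theta waves span roughly 4 to 8 Hz; alpha waves span roughly 8 to 12 Hz and are associated with relaxed wakefulness; beta waves span roughly 12 to 30 Hz, corresponding to the alert, engaged state described above; and gamma waves run above roughly 30 Hz.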
Entering trance-like states is often a ceremonial or spiritual practice in which many cultures around the world participate. Many of the cultures and tribes across the world that participate in trance rituals use music and, especially, dance as ways of entering the trance state; dancing in particular is used by some cultures as a way of entering a trance state, whereas other cultures dance as a product of being in the trance state. Different cultures across the globe use different methods and techniques while engaging in trance-inducing rituals, but one common theme found across many of these cultures is the use of dance. Dance is an integral component not only of trance-inducing rituals but of the trance state itself. In many ways the process of “trance” can be considered both an art and a dance.
One culture in particular in which dance and the process of entering trance states are a major part of life is the people of Mayotte. Mayotte is an archipelago that rests between northeastern Mozambique and northwestern Madagascar. The archipelago is currently administered by France, but many indigenous groups still live and practice their traditional customs on Mayotte. Many of the indigenous peoples living on Mayotte traveled from nearby African countries, including Mozambique and Madagascar, and settled on its various islands. Many of the original people to inhabit Mayotte believe in spirit possession and call upon spirits to possess them through dance and other rituals in order to enter the trance state. No single uniform dance is practiced during the rituals; instead many unique dances are performed by the different people involved, because the Mayotte people believe in and call upon many different spirits, each with different dances associated with them. In fact, participants often improvise and create their own dances while “possessed” by these different spirits. Rituals involving dance in Mayotte often involve the participation of spectators, who clap their hands while participants, possessed by a certain spirit, dance in front of them. These dances can vary from graceful movements to fast, rapid dances depending on the type of “spirit” possessing the dancer; the participants are in the trance state while they are possessed.
Native American Dance
Native American dance has profound and deep spiritual meaning within Native cultures. A prime example is the mask rituals of the Kwakiutl, a Native American tribe of the Pacific Northwest coast. These rituals bring together song, dance, and storytelling in a fantastic and mystical way. The stories range from stories about the origin of the Kwakiutl to silly stories meant to scare children into being good. All of these dances are accompanied by chanting and drums, which are made primarily of cedar and animal skins.
The Ghost Dance was created and performed by the Paiute in the 1890s as a result of the harsh conditions surrounding Native Americans after half a century of dominance by another culture. One direct cause was the complete slaughter of the buffalo herds throughout the last half of the 19th century. The depletion of their food sources meant that many Native Americans were forced to live and work on reservations carved out of the land by the U.S. government. (Garth Ahern-Hendryx)
Dance, Art or Sport?
In American society, it is sometimes stereotyped as simple, or un-athletic, to be a ballerina: dance is "not a sport" but rather just a form of art. However, in many places across the nation, football players are being sent to ballet class to be taught the art of balance, of walking and running through their toes, and of quick pivots. Retired NFL players Lynn Swann and Herschel Walker, along with ex-competitive bodybuilder Governor Arnold Schwarzenegger, have at one point incorporated ballet classes into their regular workouts. Dance of all kinds, whether modern, jazz, ballet, kick, tap, hip-hop, break dancing, krumping, salsa, waltz, foxtrot, or even pole dancing, takes an extreme amount of control and strength, and athletes have begun to recognize the benefits. Walker even took it a step further and performed in a show with the Fort Worth Ballet. "Despite having gone through 2-a-day training camps and getting hit repeatedly by massive linebackers, Walker called the ballet performance, 'The hardest thing I've ever done.'" Likewise, dancers train as hard and as long as many professional athletes. The Southwest Washington Dance Ensemble's company dancers rehearse up to 8 hours on Saturdays for shows, beginning up to 4 months before opening, along with taking anywhere between 3 and 6 classes a week. While I was performing with the group I remember the very long and hard hours that I spent in the studio, followed by a long shift working as a waitress. I suppose the biggest differences between dance and athletics are that stadiums do not sell out for a single dance performance (the venues are far smaller) and that dancers are paid much less for their performances. While football players and other professional athletes are paid millions of dollars a year, many professional dancers do not receive even close to that amount of money. The field is also much more competitive, as only prima ballerinas get the lead roles. However, in other cultures such as Russia, where the ballet in Moscow is a much bigger deal, audiences will gladly pay high prices for a viewing of The Firebird. The lack of interest in, and general recognition of, the hard work that dancers put into their "sport" is a reflection of America's entertainment priorities. When it comes to other cultures, such as Bahia, Brazil, dance is treated as a form of art AND a sport. In "Dance Lest We All Fall Down," the story of anthropologist Margaret Willson's experiences living for a time in Bahia, she discusses and participates in capoeira. Capoeira was first created in Brazil by the slaves brought from Africa and is said to be a combination of African martial arts and Brazilian dance moves. It is also said that this form of "fighting" was a self-defense practice designed by the slaves to look like dance so they would not get in trouble with those in control. Capoeira is similar to what we know as martial arts, only it involves a small group of people who surround the dancers in the middle as they "fight" (without ever making physical contact) to the beats of multiple instruments. The fighting stops when either player is exhausted, another player steps in, or the music ends. The roda is a cultural frame of capoeira in which the players form a circle around two capoeiristas, who proceed with a simultaneous capoeira battle.
The roda illustrates the athletic aspect of the art of capoeira in a rhythmic battle that only comes to an end when the beat ends or another player takes a capoeirista's spot. The circle surrounding the capoeiristas is also a tradition in the art, and it is culturally symbolic of challenging oppression in Brazil. Capoeira groups travel around "playing" with, or in other words competing against, different capoeira groups, and the more modern version has become the national Brazilian sport, even though it began as a mysterious and ancient form of art. Many would describe capoeira as a form of dance as well, which shows that dance can be interpreted as a sport or an art depending on the cultural constructs of each country. It just so happens that here in America dance is widely known as an art rather than a sport. Yet this does not mean dancers are not athletes.
The cultural practice of painting is an art whose origins date back tens of thousands of years in the form of cave paintings. While cave paintings have been discovered all over the world, some of the earliest examples of this art occur in Africa, in the region of Namibia. These paintings, which depict animals painted on stone slabs, have been dated to nearly 30,000 years old and are speculated to have been done by the San people. Since their discovery in 1969, these paintings were thought to be the earliest known examples of cave art. However, that distinction was lost with the discovery of the Chauvet Cave in 1994. The cave, which was happened upon accidentally by potholers in southern France, contains wall paintings depicting animals from bison, horses, and deer to lions, rhinoceroses, and mammoths. Radiometric dating put the age of the earliest of these paintings at approximately 31,000 years, which makes them the earliest forms of cave art discovered so far.
The actual purposes of cave art have been the source of much speculation. In studying the practices of modern tribal societies, some scholars have theorized that cave paintings were probably tied into the concepts of religion and magic held by the societies of those early painters. However, the precise reason the paintings were created in the first place is still a topic of debate. Whether they were made to bless the efforts of early hunters, were meant to act as a shamanic aid for tapping into the spiritual world, or were created for a wholly different reason is a question that may never be answered. The existence of cave paintings does reveal, however, that even from the earliest times humans have been interested in depicting the objects and environments of the world around them, an interest that has remained prevalent within human culture across the course of history.
There were a few basic methods that prehistoric people probably used to paint these cave walls. It is theorized that they used sharp tools or spears to etch figures, mostly animals, into the rock. The paint or color used to decorate the cave art was most likely made from charcoal, soot, clay, or various types of berries. Basic tools to apply color could have been constructed out of straw, leaves, or hair attached to sticks or reeds. They also might have sprayed on color through hollow reeds or bones in an airbrush-like fashion.
Classical to Modern Painting
Throughout time, painting, much like most other art forms, has been used to express emotion, invention, and the changing times. The earliest known paintings were found in caves in France and are around 32,000 years old. More familiar artwork dates to the ancient Greek, Roman, and Renaissance periods. During these times, religion was the main theme of artwork, which later began to depict political figures in complex and intricate portraits. Far Eastern styles, such as Chinese and Japanese, were also concerned with depicting religion, but with different media: while they preferred ink and silk, Western culture began adopting the lightness of watercolors and oils. African art differs greatly from Western art in its abundance of functional art; masks and jewelry were important accessories used in ritual ceremonies symbolizing spirits and ancestors. Although murals can be dated back to the beginning of artwork, Muralism, or “Muralismo,” was a movement that brought much attention to Mexican artwork in the 1900s. The Mexican mural movement was born in the 1920s following the Revolution (1910-1917) and was part of the government's effort to promote its ideology and vision of history. The murals were done in a way to strengthen Mexican identity, and artists were commissioned to create images of the cultural history of Mexico and its people. Perhaps inspired by the murals of the 20th century, the urban graffiti on the construction panels on the side of the Palacio de Bellas Artes continues to decorate Mexico City.
Also known as street art, graffiti is any two-dimensional symbol or image placed in the public sphere without authorization or commission. It is relatively recent in terms of art, typically involving spray paint, but it also employs other kinds of paints and even decals. Graffiti is illegal and considered vandalism, or destruction of property. While it can be controversial or even obscene, graffiti has also come to serve as a medium for social, political, and economic commentary. With the works of notorious artists such as Banksy gaining worldwide recognition, it has become a global phenomenon. Art has historically been a means of expression through creative transformation; street art and graffiti in particular have gained a reputation for outspoken opinions and a critical eye toward the status quo. Giving a voice to the 'common man', it is readily viewable by hundreds of people on the sides of buildings, train cars, subways and metros, bridges, and more, creating a dialogue without exposing the artist to persecution and arrest, so long as they don't get caught. While a major platform remains the 'tag', a series of letters, a symbol or symbols, or a word that acts as the signature of the artist, there are increasingly pictorial images that have garnered attention and redubbed 'graffiti' as 'street art'. Places of great social unrest have some of the most interesting and profound street art, such as Iran, Brazil, Eastern Europe, and the like. Berlin, Germany is home to a street art movement, historic in the sense of modern-day graffiti, that arose during the Cold War division of East and West after WWII and continues today. There are countless forms of so-called graffiti, much as there are many types of other art forms: it can be large or small, explicit or implicit, contentious, engaging, or have no real meaning at all except to the artist, who now has a platform to display their work. It has persisted and grown despite the fear of retribution, and it will likely continue to flourish as a new art of the streets.
Sculpture is three-dimensional artwork created by shaping or combining hard and/or plastic material, sound, text, and/or light, commonly stone (either rock or marble), metal, glass, or wood. Some sculptures are created directly by finding or carving; others are assembled, built up and fired, welded, molded, or cast. They can be constructed in the round, also known as free-standing, which allows the viewer to walk around the full sculpture and view it from any angle, or as relief sculpture, in which the forms extend forward but remain attached to a background surface and are meant to be viewed from the front, as you would view a painting. Within these categories there are many sub-fields, such as low relief or bas-relief, and as time has passed the traditional means of sculpture have been manipulated and reworked to create the modern sculptures of today. Sculptures are often painted. A person who creates sculptures is called a sculptor. Because sculpture involves the use of materials that can be molded or modulated, it is considered one of the plastic arts. The majority of public art is sculpture. Many sculptures together in a garden setting may be referred to as a sculpture garden.
Media is defined as the mass communication channels through which news, entertainment, education, data, and promotions are dispersed. This meaning of media has been around since the printing press made it easier to produce large numbers of papers to spread news to the public. Today, mass media can be seen as a form of art because there are so many aspects of, and rules for, crafting an appropriate message that must also be effective with the public. Media can also be seen as a form of art because it is a form of expression that reaches a large number of people. Media is a less obvious form of art compared to fine arts such as painting, drawing, and sculpture, but certain aspects of the media have just as much creativity and effort put into them, making the media a form of art that can be seen in everyday life.
The word photography derives from two ancient Greek roots: photo, meaning "light," and graph, meaning "drawing"; "drawing with light" is one way of understanding the term. Arguably anticipated in the 5th century B.C. by Mo Ti, a Chinese philosopher, photography has long been a means of creating still images. Mo Ti was able to describe the pinhole camera, which is the simplest type; one can be made from black paint, a blank photo, and cardboard. The idea is that light passing through one small pinhole projects an inverted image of the scene onto the back of the box. Louis-Jacques-Mandé Daguerre is credited with the first printed photograph. His image was processed on a copper plate coated with silver iodide, and it printed clear and sharp. It was named the daguerreotype. Photography has advanced considerably since then, starting in the early 1900s with the discovery of chemical compounds that permanently hold the image. This new technology brought with it new ways of recording historical documents. One of the first examples of this is the photographs of President Abraham Lincoln: Lincoln understood the importance of photography, and in 1860 he had his portrait taken by Mathew B. Brady, the most famous professional photographer in the history of American photography. Native Americans in the past have refused to have their photographs taken for fear of losing their souls, and in San Juan Chamula, Mexico it is illegal to take photographs in church.
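As a small illustration of the pinhole geometry described above (the function name and the numbers are illustrative assumptions, not measurements from the text), the size of the projected image follows from similar triangles:

```python
# A minimal sketch of pinhole-camera geometry. By similar triangles:
#   image_height / box_depth = object_height / object_distance
def image_height(object_height_m, object_distance_m, box_depth_m):
    """Height of the inverted image projected onto the back of the box."""
    return object_height_m * box_depth_m / object_distance_m

# Hypothetical example: a 1.8 m tall person standing 10 m from a 0.3 m deep box.
print(image_height(1.8, 10.0, 0.3))  # 0.054 m, i.e. a 5.4 cm inverted image
```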
Ceramics is the art of making objects from clay, a naturally occurring material that is manipulated and decorated to create ceramic art. When dry, clay is similar to a powder, but when mixed with water it becomes a moldable, plastic material that is pinched, rolled, or shaped into forms that are then left to dry into fragile creations. After the clay is completely dry, to the point that it is cold to the touch, it must be fired in a kiln at temperatures as high as 2,700 degrees Fahrenheit. This makes its new form permanent and changes the chemical composition of the clay so that it can never be returned to its moldable, plastic state. There are many techniques used by potters to create ceramics: slab construction (firm and soft), coiling, molding in the hands, and throwing on a potter's wheel are all means of forming clay into ceramic art. A major requirement of ceramic art is that it must be hollow. This is because most ceramics have practical uses, such as holding food or liquid, and also because thick pieces of clay, or shapes that are not correctly hollowed and vented, are difficult to dry and fire successfully without exploding in the kiln. After the clay has undergone its first firing, potters often decorate their pieces with glaze, a paint-like liquid that contains a variety of minerals mixed with heavily watered-down clay. If painted with glaze, the ceramic art must undergo a second firing in the kiln to permanently fuse the glaze to the clay and seal the piece so that it is capable of holding liquid.
The word ceramics comes from the ancient Greek word keramikos, meaning "of pottery". The earliest known practice of ceramics dates back as early as 20,000 years ago in China, and the art form has been practiced by nearly every culture we know of. The culture of the Pueblo people is showcased in the work of folk potters in New Mexico. Techniques used during the first stage of firing, developed over generations of Pueblo potters, transform the local red clay of New Mexico into burnished black masterpieces of ceramic art. What began as a necessary tool for the Pueblo people, allowing them to gather, transport, and store food and water, has become an exquisite art form held in high regard by international fine art communities.
Television and Film
There is no doubt that the roles of television and film have become more prominent in everyday life as the decades have passed and technology has improved. People tend to watch television and films for entertainment or news, especially now that both have become more available and accessible to people around the world. However, they are treated differently in different countries, from a portion of Serbia only being able to watch a certain channel to there being 500 channels on every television in almost every home in America. Although television and film have become more common as the years have passed, most people do not realize the work, and the corruption, that lies behind the media being delivered into the homes of millions.
Television in America
The average American household has the TV on for an average of 7 hours, 12 minutes per day. This is most likely because 98% of homes in the United States have at least one television set, while the average home has between two and three televisions. As a nation, we watch 250 billion hours of television annually, and almost 50% of Americans admit that they watch TV too often. TV is one of the top advertising media because it is so common: 30% of TV broadcast time is devoted to advertisement, and in a year most children will see 20,000 thirty-second commercials. 82% of Americans believe that "most of us buy and consume far more than we need."
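As a rough sanity check on these figures (the population number below is my assumption, not from the text), 250 billion viewing hours spread over roughly 300 million Americans works out to a little over two hours per person per day, which is compatible with a per-household average of just over seven hours once multi-person households are taken into account:

```python
# Back-of-the-envelope check of the viewing figures quoted above.
# Assumption (not from the text): a U.S. population of about 300 million.
total_hours_per_year = 250e9
population = 300e6

per_person_per_day = total_hours_per_year / population / 365
print(round(per_person_per_day, 1))  # ~2.3 hours per person per day
```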
Children who start watching TV at a very young age are more likely to be unhealthy and obese later in life. Television takes away from going outside and interacting with other kids, and it can also result in weight gain due to inactivity and increased snacking. In the span of 30 years (from 1963 to 1993), the percentage of American children ages 6 to 11 who were seriously overweight went from 4.5 percent to 14 percent.
However, television isn't necessarily all bad. Many viewers, myself included, regard TV as a much-appreciated source of relaxation and tune in to their favorite shows as a means of resting their bodies and recharging their minds after a long day at work or school. TV can also help to meet emotional needs, albeit on a somewhat superficial level, as it often functions as a source of escapism and even catharsis. In short, while I agree that watching too much television can have negative side-effects such as increasing rates of consumption and contributing to childhood obesity, I also believe that, in moderation, it is a perfectly healthy practice that can serve valuable functions in the lives of viewers.
Studies from the University at Buffalo and Miami University of Ohio have shown that television can also help stave off loneliness and rejection. This follows the 'social surrogacy hypothesis', which states that humans can use technologies to provide themselves a false sense of social belonging when there has in fact been no actual social interaction. Connecting with characters can help ease a viewer's need to connect with others, allowing a person to feel as though his or her social needs are being met. The first study found that subjects were less lonely while watching their favorite programs. The second found that those who connected with the programs on a deeply social level described the programs at greater length. The third found that subjects merely thinking about their favorite programs were buffered against drops in self-esteem and against increases in negative moods and feelings of rejection. The fourth found that those who had written about their favorite program (as referenced in the second study) felt less lonely. The question remains, however, whether this 'social surrogacy' actually fulfills social needs or simply suppresses them.
Media and Television
From sitcoms that cover a wider range of material over time (such as divorce, mixed-race relations, and single parents) to questioning the acts of politicians and governments, media helps define what “legitimate” behavior is. In 1970, 25% of Americans reported getting their political information from the television; by 2005 that number had more than doubled, with 70% getting the majority of their information from the television. Today, between six and eight firms control over 50% of all media coverage. These firms include Time Warner/AOL, Disney, Bertelsmann, Viacom, News Corp, and Vivendi. This number has changed drastically over the past several decades: in 1981 there were 46 major firms, in 1986 there were 24, in 1990 there were 17, and in 1996 there were 11.
Video and attendance of transnational fiestas
Among the transnational Mixtec community, spanning the United States and Mexico, video has become an important form of communication across the international boundary of the border. Attendance at community fiestas associated with patron saints' days, quinceañeras, and weddings is required of close kin, especially godparents. For many families, however, crossing the border and traveling many miles to attend these fiestas is prohibitive. Since the late 1980s, video has increasingly been used to allow distant family members to 'participate' in the fiestas from the comfort of their living rooms. In parts of California it is common to see tías (aunts) and comadres (friends) replaying the videotaped fiestas for years after the event occurred.
Theater is a fine art which incorporates performers, props, settings, and music to exhibit a real or fictional event. It is often performed on a stage but can be presented in other settings, such as a black box, an elevated platform, or even a street corner. It is a popular means of expression that has been practiced since the early days of human civilization. The earliest examples of theater can be found in the Greek city-state of Athens, where it was presented during festivals, religious ceremonies, weddings, and political events as a form of entertainment and news. Theater has become well localized in the U.S., with most towns having their own theaters, both professional and volunteer-based. National Broadway tours make it to most major cities, and most, if not all, high schools and colleges in the nation offer some form of theater for students.
The works of William Shakespeare have influenced culture in a multitude of ways, from modern reinterpretations of his works to traditional, word-for-word theater. Shakespeare's plays still affect culture today through linguistics, with phrases such as "... of Shakespearean proportions" used to imply something of large significance, or a lover who refuses to give up referred to as a "Romeo." Modern Shakespearean theater has a culture of its own, with the various actors and writers forming a specific subculture devoted to the 400-year-old works. An excellent example of this is the still-operating Shakespearean theater in Ashland, Oregon, where actors and writers have gathered and created a place to express their subculture and love for the art. Shakespeare's works can also be seen as argument, as his satirical pieces about corrupt governments and failing kings show. In an artistic sense this allowed Shakespeare to get away with criticizing politicians of his time, and it perhaps helped bring satirical writing into the limelight to make way for later prominent satirical authors.
Improvisational Theater, also known as “Improv,” usually consists of a group or band of “players” who join in improvised exercises or games that involve playing out part of a scene. The nature of Improv is to be spontaneous and in the moment; it is synonymous with organized flexibility. Much like regular theaters, Improvisational Theaters put on regular shows and performances highlighting the principal players. However, Improv Theater is unique in that there is no set script to be rehearsed and memorized. There may be an outline of where the director wants the show to go, but usually there is not. Occasionally, music and/or other mixed visuals are added to the exercises. Often there is a set theme for the exercises and/or performances, such as a musical. If a director is necessary for the Improv performance to function, an artistic director is utilized; often that director is a former player or is currently involved in the exercises. The directors or managers tend to work together in collaboration regarding their individual responsibilities for the group. These organizations differ from competition-based organizations, which have a structure and organizational goals preset for them. This flexible structure is attractive to Improv Theater groups because members can come and go to rehearsals as they please. Rehearsals for Improv groups concentrate more on honing the players' skills as Improv actors, compared to conventional play rehearsals.
Musical Theater is a popular form of theatrical performance in which the dialogue of the characters is communicated and expressed through spoken word, song, and dance. Although music has been used in theaters for centuries to magnify the audience’s experience, Musical Theater specifically focuses on the integration of dialogue into the song and movement of the performers. Over the course of their existence, musicals have often been compared to operas. A general way to tell the difference is through the delivery of dialogue: while operas are sung throughout, musicals have occasional spoken dialogue, dance, and the incorporation of genres of music popular at the time. While musical plays have been performed since ancient Greece, modern Western musicals have only been performed since the early 20th century.
The majority of Western musicals performed today derive from Greek roots in theater and performance. However, many other forms of musical theater existed to the east, in Asia, such as Chinese, Japanese, and Taiwanese operas. The first recorded Chinese opera was known as the Canjun Opera and was supposedly performed during the Zhao Dynasty sometime between 319 AD and 351 AD. Another Eastern form of musical theater is Noh. Noh is the Japanese term for “talent” or “skill” and is used to describe a Japanese musical drama; it has been performed since the mid-14th century and is still practiced today in dedicated Noh theaters. Taiwanese Opera, or Koa-á-hì, is the only known form of drama to emerge from Taiwan, appearing as early as the 18th century. Most of the songs are stories and folktales with occasional supernatural elements.
Chapter Glossary of Key Terms
Solitary Play: children are busy playing by themselves and may not notice other children sitting or playing near them.
Impressionism: term used to describe paintings that looked unfinished because they showed visible brushstrokes; the style originated in France in the 1860s and was used to depict the visual impression of the moment.
Cubism: style of art started by Pablo Picasso and Georges Braque in 1907; they took ordinary shapes and broke them up into abstract geometric forms.
Realism: art form that consists of realistic drawings or paintings that replicate an image.
Post-Impressionism: represented both an extension of Impressionism and a rejection of the style's inherent limitations.
Enculturation: the process of becoming a part of a culture.
Imitative magic: spiritual or religious attempts to manipulate natural events.
Contagious magic: spell casting, spirit conjuring, and voodoo dolls.
Preschool: an educational system primarily found in the United States where parents can send toddler-aged children to be looked after and taught basic "life skills" (such as socialization and sharing with others) and interact with other toddler-aged children.
Transformational/Representational: Culture guides what is appropriate and what is valuable based on assigned symbolic meaning.
- Child Development Institute. http://www.childdevelopmentinfo.com/development/
- Sylvia Knopp Polgar. http://www.jstor.org/stable/3216602?seq=2
- Edward Norbeck http://icb.oxfordjournals.org/cgi/content/abstract/14/1/267
- Raising Children Network.http://raisingchildren.net.au/articles/sharing.html/context/752
- Why Play. Jim Rice. Sojourners Magazine, January–February 1997 (Vol. 26, No. 1 pp. 24- 27). Features.
- Schultz, Emily A., and Robert H. Lavenda. Cultural Anthropology A Perspective on the Human Condition with free Study Skills Guide on CD-ROM. New York: Oxford UP, USA, 2004.
- Lever, Janet. Soccer Madness Brazil's Passion for the World's Most Popular Sport. New York: Waveland P, 1995.
- 2009. http://www.mapsofworld.com/brazil/sports/soccer.html
- Emmanuele Grossi
- Masa Vukanovich. April 2002. http://www.anthrobase.com/Txt/V/Vukanovich_M_01.htm
- (Football in the USA: American Culture and the World’s Game by Peter S. Morris, Nov. 2004)
- The Tropic of Baseball: Baseball in the Dominican Republic. http://books.google.com/books?id=kloGyBSEsRsC&printsec=frontcover#PPP1,M1
- Klein, Alan M. Culture, Politics, and Baseball in the Dominican Republic. http://www.jstor.org/stable/2634143
- Chidester, David. 1996. The Church of Baseball, the Fetish of Coca-Cola, and the Potlatch of Rock “N” Roll: Theoretical Models for the Study of Religion in American Popular Culture. Journal of the American Academy of Religion 64(4): 743–765. http://www.jstor.org/stable/1465620?seq=5#page_scan_tab_contents
- Brittney Lundberg
- Michael D. Lemonick. http://www.time.com/time/subscriber/covers/1101040607/article/how_we_grew_so_big_diet01a.html
- Laura Heydrich. Exercise Science Major at WWU and ACE Personal Trainer
- David Suprak's lecture for PE 308
- Phil Jackson, Los Angeles Lakers basketball coach
- Thibault, Lucie 2009 Globalization of Sport: An Inconvenient Truth 1. Journal of Sports Management 23(1): 1–20.
- Roche, Maurice. 2002. Megaevents and Modernity: Olympics and Expos in the Growth of Global Culture. Routledge.
- BBC ON THIS DAY | 17 | 1968: Black Athletes Make Silent Protest. N.d. http://news.bbc.co.uk/onthisday/hi/dates/stories/october/17/newsid_3535000/3535348.stm, accessed December 12, 2016.
- The Forgotten Story behind the “Black Power” Photo from 1968 Olympics. Toronto Star. N.d. https://www.thestar.com/news/insight/2016/08/07/the-forgotten-story-behind-the-black-power-photo-from-1968-olympics.html, accessed December 12, 2016.
- Schultz, Emily A., and Lavenda, Robert H. Cultural Anthropology A Perspective on the Human Condition. 7th ed. New York: Oxford UP, 2009. Alland, Alexander. The Artistic Animal. New York: Doubleday Anchor, 1977.
- Elizabeth Skolmen
- Vitos, Botond. 2014. “An Avatar... in a Physical Space”: Researching the Mediated Immediacy of Electronic Dance Floors. ResearchGate 6(2): 1–21.
- Montano, Ed 2013 Ethnography From the Inside: Industry-Based Research in the Commercial Sydney EDM Scene. Dancecult: Journal of Electronic Dance Music Culture 5(2): 113–130.
- Hutson, Scott R. 1999 Technoshamanism: Spiritual Healing in the Rave Subculture. ResearchGate 23(3): 53–77.
- Laine, Miranda personal experience from talking to father
- Review of From “Rock Hill” to “Connemara:” The Story before Carl Sandburg. 1981. The South Carolina Historical Magazine 82(2): 176. http://www.jstor.org/stable/27567685
- Charlton, Katherine. "Rock Music Styles"
- Jane Marshall http://www.allsands.com/company/contactallsands.htm
- Elizabeth Skolmen. Sandra Jackson-Opoku and Michael West. From Homeland to Township. http://www.worldandi.com/public/1994/april/cl1.cfm
- Kenneth B. Noble, New York Times, Aug. 23rd 1992, http://www.nytimes.com/1992/08/23/arts/many-accents-rap-around-world-west-africa-king-yields-new-messenger.html
- The real world
- "Hearing Focuses on Language and Violence in Rap Music" by Jeff Leeds
- Codrington, Raymond One Planet under a Groove: Hip-Hop and Contemporary Art. The Bronx Museum of the Arts, Bronx, NY. October 26, 2001—March 3, 2002; Walker Art Center, Minneapolis, MN, July 14-October 13, 2002; Spelman College Museum of Fine Art, Atlanta, GA, Spring 2003.http://www3.interscience.wiley.com/cgi-bin/fulltext?ID=120130073&PLACEBO=IE.pdf&mode=pdf
- "Democracy in Dakar" Documentary
- Condry, Ian. http://www.newglobalhistory.com/docs/japanese_hip-hop.pdf
- McKenzie Chambers, personal experience singing and performing in choir
- Bet El Yeshiva Center. http://www.yeshiva.org.il/midrash/Shiur.asp?id=2262. Retrieved on 3 January 2009.
- Shircago, Jewish A Cappella and Sefirat Omer.
- Mellie Leandicho Lopez, http://books.google.com/books?hl=en&lr=&id=jGssp-oJrT8C&oi=fnd&pg=PR9&ots=AkIE1UuF_W&sig=3HgNdiP8xcqu01BDDBJxoIKw1q4
- Cho, Younghan. "Desperately Seeking East Asia Amidst The Popularity Of South Korean Pop Culture In Asia." Cultural Studies 25.3 (2011): 383-404. Academic Search Complete. Web. 30 Nov. 2016.
- Mitchell, Christopher. "J. R. R. Tolkien: Father of Modern Fantasy Literature" (Google Video). "Let There Be Light" series. University of California Television. http://video.google.com/videoplay?docid=8119893978710705002. Retrieved 2006-07-20.
- Personal experience, <http://en.wikipedia.org/wiki/Monomyth>.
- "All World Knowledge - JRR Tolkien." All World Knowledge: Educational articles on everything and more. 28 Apr. 2009 <http://www.allworldknowledge.com/tolkien/index.html>.
- "Homer and Plato." LotsofEssays. 26 Apr. 2009.
- Suppa, Sergio O. "ToTango." http://www.totango.net/sergio.html
- Pacheaco, Carly. Western Washington University
- Cohen, Selma Jean. International encyclopedia of dance. Oxford University Press. Oxford, 1998. V5 p. 643-648.
- Zantzinger, G. Dances of Southern Africa. Pennsylvania State University. 1973. 55 min., color.
- "Maasai." New World Encyclopedia. 3 Apr 2008, 22:43 UTC. Retrieved 3 Dec 2008 <http://www.newworldencyclopedia.org/entry/Maasai?oldid=686829>.
- Finke, Jens. Traditional Music & Cultures of Kenya. Copyright 2000-2007. kenyabluegecko.org
- Magogo, Constance, Princess. Interview with Rycroft, David. British Library Archival Sound Recordings. 1964.
- University of Wisconsin-Green Bay, Cross-Cultural Communication: World Music 242-329, Sub-Saharan African Music. http://www.uwgb.edu/ogradyt/world/African.htm. 12-11-01.
- http://library.thinkquest.org/06aug/00933/RSAceremonies.htm. 04-02-07.
- Ritter, E.A. Shaka Zulu. Great Britain: Penguin Books, 1955. Pg. 35-57, 101.
- Saitoti, Tepilit Ole. The Worlds of a Maasai Warrior: An Autobiography. University of California Press, 1986.
- McQuail, Lisa. The Masai of Africa. Lerner Publications, 2002.
- Shillington, Kevin. History of Africa. Macmillan Publishers, 1989, 1995, 2005. Pg. 257-260, 207-208.
- Coufal, Leonard. “the Mfecane and Southern Africa”. Western Washington University.
- Coufal, Leonard. “East Africa”. Western Washington University.
- Clayman, Andrew http://www.signatureforum.com/article.cfm?articleid=49
- Willson, Margaret. Dance Lest We All Fall Down. Cold Tree Press. 2007. Pg. 17-18.
- Smith, M. The Cultural Frame: Roda. HCS Visual Arts. 2008.
- Peter J. Ucko and Andree Rosenfeld, Science Magazine. http://www.sciencemag.org/cgi/reprint/161/3837/150-a
- A Treatise on Painting by Leonardo Da Vinci (Kessinger Publishing)
- Tansey, Richard. Gardner's Art Through the Ages.
- Getlein, Mark. Living With Art. 10th ed. New York: McGraw Hill, 2013. Print.
- Shaynne Costello, Graphic Design major and Art History Minor at WWU.
- Wikipedia, "Television content rating system."
- TV-Free America http://members.iquest.net/~macihms/HomeEd/tvfacts.html
- The Sourcebook for Teaching Science. http://www.csun.edu/science/health/docs/tv&health.html
- For some, TV cures loneliness http://web.archive.org/web/20090501191755/news.yahoo.com/s/nm/20090428/tv_nm/us_loneliness
- Todd Donovan, Western Washington University Political Science Professor
- Paul James personal communication. April 2009.
|
How to construct the incenter using a compass and straightedge.
How to construct the orthocenter of a triangle.
How to convert the rectangular coordinates of a point into polar coordinates (a short code sketch follows this list).
How to sketch a simple polar curve by plotting points.
How to evaluate the limit of a function at points where it is not continuous.
How to define the derivative of a function at a point x=a.
How to find the shortest distance between a point and a line.
Understanding the colligative property of solutions that elevates boiling point.
How to find the slope between two points.
How to graph the inverse of a function using points.
How to write the parametric equations of a line segment that goes from point A to point B.
How to graph y = f(x - h) when f(x) = sqrt(x), and what kind of transformation to expect.
How to construct the circumcenter using a compass and straightedge.
How to define rotational symmetry and identify the degree of rotational symmetry of common regular polygons.
How to identify the graph of a stretched cosine curve.
How to understand what vectors are, and how they can be represented geometrically.
How to derive the formula for the midpoint of a segment in three dimensions.
How to graph the cotangent function using five key points.
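As a small illustration of three of the computations named above (rectangular-to-polar conversion, the slope between two points, and the midpoint of a segment in three dimensions), here is a minimal Python sketch; the function names are my own, not taken from the source lessons:

```python
import math

def to_polar(x, y):
    """Rectangular (x, y) -> polar (r, theta), with theta in radians."""
    return math.hypot(x, y), math.atan2(y, x)

def slope(p, q):
    """Slope of the line through points p and q (undefined for vertical lines)."""
    (x1, y1), (x2, y2) = p, q
    return (y2 - y1) / (x2 - x1)

def midpoint_3d(p, q):
    """Midpoint of a segment in three dimensions: average each coordinate."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

print(to_polar(1, 1))                     # (sqrt(2), pi/4)
print(slope((0, 0), (2, 4)))              # 2.0
print(midpoint_3d((0, 0, 0), (2, 4, 6)))  # (1.0, 2.0, 3.0)
```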
|
In a nearby stellar nursery called the Orion Nebula, young, massive stars are blasting far-ultraviolet light at the cloud of dust and gas from which they were born. This intense flood of radiation is violently disrupting the cloud by breaking apart molecules, ionizing atoms and molecules by stripping their electrons, and heating the gas and dust. An international team using NASA’s James Webb Space Telescope, which is scheduled to launch in October, will study a portion of the radiated cloud called the Orion Bar to learn more about the influence massive stars have on their environments, and even on the formation of our own solar system.
“The fact that massive stars shape the structure of galaxies through their explosions as supernovas has been known for a long time. But what people have discovered more recently is that massive stars also influence their environments not only as supernovas, but through their winds and radiation during their lives,” said one of the team’s principal investigators, Olivier Berné, a research scientist at the French National Centre for Scientific Research in Toulouse.
Why the Orion Bar?
While it might sound like a Friday-night watering hole, the Orion Bar is actually a ridge-like feature of gas and dust within the spectacular Orion Nebula. A little more than 1,300 light-years away, this nebula is the nearest region of massive star formation to the Sun. The Orion Bar is sculpted by the intense radiation from nearby, hot, young stars, and at first glance appears to be shaped like a bar. It is a “photodissociation region,” or PDR, where ultraviolet light from young, massive stars creates a mostly neutral, but warm, area of gas and dust between the fully ionized gas surrounding the massive stars and the clouds in which they are born. This ultraviolet radiation strongly influences the gas chemistry of these regions and acts as the most important source of heat.
PDRs occur where interstellar gas is dense and cold enough to remain neutral, but not dense enough to prevent the penetration of far-ultraviolet light from massive stars. Emissions from these regions provide a unique tool to study the physical and chemical processes that are important for most of the mass between and around stars. The processes of radiation and cloud disruption drive the evolution of interstellar matter in our galaxy and throughout the universe from the early era of vigorous star formation to the present day.
“The Orion Bar is probably the prototype of a PDR,” explained Els Peeters, another of the team’s principal investigators. Peeters is a professor at the University of Western Ontario and a member of the SETI Institute. “It’s been studied extensively, so it’s well characterized. It’s very close by, and it’s really seen edge on. That means you can probe the different transition regions. And since it’s close by, this transition from one region to another is spatially distinct if you have a telescope with high spatial resolution.”
The Orion Bar is representative of what scientists think were the harsh physical conditions of PDRs in the universe billions of years ago. “We believe that at this time, you had ‘Orion Nebulas’ everywhere in the universe, in many galaxies,” said Berné. “We think that it can be representative of the physical conditions in terms of the ultraviolet radiation field in what are called ‘starburst galaxies,’ which dominate the era of star formation, when the universe was about half its current age.”
The formation of planetary systems in interstellar regions irradiated by massive young stars remains an open question. Detailed observations would allow astronomers to understand the impact of the ultraviolet radiation on the mass and composition of newly formed stars and planets.
In particular, studies of meteorites suggest that the solar system formed in a region similar to the Orion Nebula. Observing the Orion Bar is a way to understand our past. It serves as a model to learn about the very early stages of the formation of the solar system.
Like a Layer Cake in Space
PDRs were long thought to be homogenous regions of warm gas and dust. Now scientists know they are greatly stratified, like a layer cake. In reality, the Orion Bar is not really a “bar” at all. Instead, it contains a lot of structure and four distinct zones. These are:
- The molecular zone, a cold and dense region where the gas is in the form of molecules and where stars could form;
- The dissociation front, where the molecules break apart into atoms as the temperature rises;
- The ionization front, where the gas is stripped of electrons, becoming ionized, as the temperature increases dramatically;
- The fully ionized flow of gas into a region of atomic, ionized hydrogen.
“With Webb, we will be able to separate and study the different regions’ physical conditions, which are completely different,” said Emilie Habart, another of the team’s principal investigators. Habart is a scientist with the French Institute of Space Astrophysics and a senior lecturer at Paris-Saclay University. “We will study the passage from very hot regions to very cold ones. This is the first time we will be able to do that.”
The phenomenon of these zones is much like what happens with heat from a fireplace: as you move away from the fire, the temperature drops. Similarly, the radiation field changes with distance from a massive star, and the composition of the material changes at different distances from that star. With Webb, scientists for the first time will resolve each individual region within that layered structure in the infrared and characterize it completely.
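As a rough numerical illustration of the fireplace analogy (this sketch is mine, not from the article, and it ignores the dust absorption and gas chemistry that shape real photodissociation regions), the flux from a point source in empty space falls off with the square of the distance:

```python
# Illustrative only: in empty space, flux from a point source scales as 1/r^2.
def relative_flux(r, r0=1.0):
    """Flux at distance r, relative to the flux at reference distance r0."""
    return (r0 / r) ** 2

for r in (1, 2, 4, 8):
    print(f"distance {r}x reference -> flux {relative_flux(r):.4f} of reference")
```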
Paving the Way for Future Observations
These observations will be part of the Director’s Discretionary-Early Release Science program, which provides observing time to selected projects early in the telescope’s mission. This program allows the astronomical community to quickly learn how best to use Webb’s capabilities, while also yielding robust science.
One goal of the Orion Bar work is to identify the characteristics that will serve as a “template” for future studies of more distant PDRs. At greater distances, the different zones might blur together. Information from the Orion Bar will be useful for interpreting that data. The Orion Bar observations will be available to the wider science community very soon after their collection.
“Most of the light that we receive from very distant galaxies is coming from ‘Orion Nebulas’ situated in these galaxies,” explained Berné. “So it makes a lot of sense to observe in great detail the Orion Nebula that is near us in order to then understand the emissions coming from these very distant galaxies that contain many Orion-like regions in them.”
Only Possible with Webb
With its location in space, infrared capability, sensitivity, and spatial resolution, Webb provides a unique opportunity to study the Orion Bar. The team will probe this region using Webb’s cameras and spectrographs.
“It’s really the first time that we have such good wavelength coverage and angular resolution,” said Berné. “We’re very interested in spectroscopy because that’s where you see all the ‘fingerprints’ that give you the detailed information on the physical conditions. But we also want the images to see the structure and organization of matter. When you combine the spectroscopy and the imaging in this unique infrared range, you get all the information you need to do the science we’re interested in.”
The study includes a core team of 20 members but also a large, international, interdisciplinary team of more than 100 scientists from 18 countries. The group includes astronomers, physicists, chemists, theoreticians, and experimentalists.
The James Webb Space Telescope will be the world’s premier space science observatory when it launches in 2021. Webb will solve mysteries in our solar system, look beyond to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it. Webb is an international program led by NASA with its partners, ESA (European Space Agency) and the Canadian Space Agency.
For more information about Webb, visit www.nasa.gov/webb.
NASA’s Goddard Space Flight Center, Greenbelt, Md.
|
(1) (a) and (b)
(2) (a) and (c)
(3) (a) (b) and (c)
Oxidation is the gain of oxygen and reduction is the loss of oxygen.
Solution: (a) and (b)
QUESTION 10: Fe2O3 + 2Al -> Al2O3 + 2Fe
The above reaction is an example of a
(a) combination reaction
(b) double displacement reaction
(c) decomposition reaction
(d) displacement reaction
Solution: (d) displacement reaction
QUESTION 11: What happens when dilute hydrochloric acid is added to iron filings? Tick the correct answer.
(a) Hydrogen gas and iron chloride are produced.
(b) Chlorine gas and iron hydroxide are produced.
(c) No reaction takes place.
(d) Iron salt and water are produced.
Solution: (a) Hydrogen gas and iron chloride are produced.
QUESTION 12: What is a balanced chemical equation? Why should chemical equations be balanced?
Solution: A chemical equation is balanced when the number of atoms of each element involved in the reaction is the same on both the reactant and product sides of the equation.
According to the law of conservation of mass, when a chemical reaction occurs, the mass of the products must be equal to the mass of the reactants. Therefore, the number of atoms of each element does not change in the chemical reaction, and the chemical equation that represents the reaction needs to be balanced: the number of atoms of each element on the reactants side must equal the number of atoms of that element on the products side.
QUESTION 13 Translate the following statements into chemical equations and then balance them.
(a) Hydrogen gas combines with nitrogen to form ammonia.
(b) Hydrogen sulphide gas burns in air to give water and sulphur dioxide.
(c) Barium chloride reacts with aluminium sulphate to give aluminium chloride and a precipitate of barium sulphate.
(d) Potassium metal reacts with water to give potassium hydroxide and hydrogen gas.
How to solve this type of question:
(a) 3H2 + N2 -> 2NH3
(b) 2H2S + 3O2 -> 2H2O + 2SO2
(c) 3BaCl2 + Al2(SO4)3 -> 2AlCl3 + 3BaSO4
(d) 2K + 2H2O -> 2KOH + H2
QUESTION 14: Balance the following chemical equations.
(a) HNO3 + Ca(OH)2 -> Ca(NO3)2 + H2O
(b) NaOH + H2SO4 -> Na2SO4 + H2O
(c) NaCl + AgNO3 -> AgCl + NaNO3
(d) BaCl2 + H2SO4 -> BaSO4 + HCl
How to solve this type of question:
(a) 2HNO3 + Ca(OH)2 -> Ca(NO3)2 + 2H2O
(b) 2NaOH + H2SO4 -> Na2SO4 + 2H2O
(c) NaCl + AgNO3 -> AgCl + NaNO3
(d) BaCl2 + H2SO4 -> BaSO4 + 2HCl
QUESTION 15: Write balanced chemical equations for the following reactions.
(a) Calcium hydroxide + Carbon dioxide -> Calcium carbonate + Water
(b) Zinc + Silver nitrate -> Zinc nitrate + Silver
(c) Aluminium + Copper chloride -> Aluminium chloride + Copper
(d) Barium chloride + Potassium sulphate -> Barium sulphate + Potassium chloride
How to solve this type of question:
(a) Ca(OH)2 + CO2 -> CaCO3 + H2O
(b) Zn + 2AgNO3 -> Zn(NO3)2 + 2Ag
(c) 2Al + 3CuCl2 -> 2AlCl3 + 3Cu
(d) BaCl2 + K2SO4 -> BaSO4 + 2KCl
QUESTION 16: Write balanced chemical equations for the following and identify the type of reaction in each case.
(a) Potassium bromide (aq) + Barium iodide (aq) -> Potassium iodide (aq) + Barium bromide (s)
(b) Zinc carbonate (s) -> Zinc oxide (s) + Carbon dioxide (g)
(c) Hydrogen (g) + Chlorine (g) -> Hydrogen chloride (g)
(d) Magnesium (s) + Hydrochloric acid (aq) -> Magnesium chloride (aq) + Hydrogen (g)
Solution: (a) 2KBr(aq) + BaI2(aq) -> 2KI(aq) + BaBr2(s), double displacement reaction.
(b) ZnCO3(s) -> ZnO(s) + CO2(g), decomposition reaction.
(c) H2(g) + Cl2(g) -> 2HCl(g), combination reaction.
(d) Mg(s) + 2HCl(aq) -> MgCl2(aq) + H2(g), displacement reaction.
QUESTION 17: What does one mean by exothermic and endothermic reactions? Give examples.
Reactions in which heat is released along with the formation of products are called exothermic reactions. An example of an exothermic reaction is:
Burning of natural gas: CH4(g) + 2O2(g) -> CO2(g) + 2H2O(g)
Reactions in which energy is absorbed are known as endothermic reactions. An example of an endothermic reaction is:
2AgBr(s) --sunlight--> 2Ag(s) + Br2(g)
The key point is that heat is absorbed in the process; "endo" means the energy is absorbed.
QUESTION 18: Why is respiration considered an exothermic reaction? Explain.
The steps of respiration are:
Step 1: The food that we eat includes carbohydrates, proteins, etc.
Step 2: During digestion, carbohydrates are broken down into simpler substances such as glucose.
Step 3: Glucose combines with oxygen in the cells of our body to provide energy.
This reaction is called respiration. Since energy is released during this process, respiration is an exothermic reaction.
C6H12O6(aq) + 6O2(g) -> 6CO2(aq) + 6H2O(l) + Energy
QUESTION 19: Why are decomposition reactions called the opposite of combination reactions? Write equations for these reactions.
When a single substance decomposes to give two or more substances, it is called a decomposition reaction.
The generalized form of a chemical decomposition is:
Decomposition reaction: AB + Energy -> A + B
Example: CaCO3(s) --heat--> CaO(s) + CO2(g)
When two or more substances combine to form a new single substance, it is called a combination reaction.
Combination reaction: A + B -> AB + Energy
Example: Burning of coal: C(s) + O2(g) -> CO2(g)
So it is clear that decomposition reactions are the opposite of combination reactions.
QUESTION 20: Write one equation each for decomposition reactions where energy is supplied in the form of heat, light or electricity.
We already know that when a single substance decomposes to give two or more substances, it is called a decomposition reaction.
Examples are:
Heat: CaCO3(s) --heat--> CaO(s) + CO2(g)
Light: 2AgBr(s) --sunlight--> 2Ag(s) + Br2(g)
Electricity: 2H2O(l) --electrolysis--> 2H2(g) + O2(g)
QUESTION 21: What is the difference between displacement and double displacement reactions? Write equations for these reactions.
A displacement reaction
is a chemical reaction in which a more reactive element displaces a less reactive element from its salt solution.
Example: Fe + CuSO4 -> FeSO4 + Cu
In this reaction, one displacement takes place: Fe displaces Cu.
A double displacement reaction
is a chemical reaction in which there is an exchange of ions between the reactants to give new substances. Two displacements take place in a double displacement reaction.
Example: 3BaCl2 + Al2(SO4)3 -> 2AlCl3 + 3BaSO4
In this reaction, two displacements take place: Ba and Al exchange partners, barium pairing with sulphate and aluminium with chloride.
QUESTION 22: In refining of silver, the recovery of silver from silver nitrate solution involved displacement by copper metal. Write down the reactions involved.
Solution: 2AgNO3(aq) + Cu(s) -> Cu(NO3)2(aq) + 2Ag(s)
QUESTION 23: What do you mean by a precipitation reaction? Explain by giving an example.
Important points:
1) Any reaction that produces a precipitate can be called a precipitation reaction.
2) Precipitation reactions produce insoluble salts which settle down as a precipitate.
For example, when aqueous sodium sulphate and aqueous barium chloride solutions are mixed, an aqueous solution of sodium chloride and a white precipitate of barium sulphate are formed:
Na2SO4(aq) + BaCl2(aq) -> BaSO4(s) + 2NaCl(aq)
QUESTION 23: Explain the following in terms of gain or loss of oxygen, with two examples each: (a) oxidation, (b) reduction.
Oxidation is the gain of oxygen.
Example 1: ZnO + C -> Zn + CO
In this reaction, ZnO is reduced to Zn and C is oxidized to CO. Therefore, the conversion of ZnO to Zn is reduction and the conversion of C to CO is oxidation.
Reduction is the loss of oxygen.
Example 2: CuO + H2 -> Cu + H2O
In this reaction, CuO is reduced to Cu and H2 is oxidized to H2O. Therefore, the conversion of CuO to Cu is reduction and the conversion of H2 to H2O is oxidation.
QUESTION 24: A shiny brown coloured element “X” on heating in air becomes black in colour. Name the element “X” and the black coloured compound formed.
Solution: The element “X” is copper and the black coloured compound is copper oxide (CuO). The chemical reaction is:
2Cu + O2 -> 2CuO
QUESTION 25: Why do we apply paint on iron articles?
Iron reacts with air and moisture and corrodes, so we apply paint on iron articles to avoid rusting. The paint stops the contact of air and moisture with the iron.
QUESTION 26: Oil and fat containing food items are flushed with nitrogen. Why?
Solution: When fats and oils are oxidised, they become rancid and their smell and taste change. Food items containing fats and oils are flushed with nitrogen because nitrogen is an inert gas and prevents the oxidation, and hence the rancidity, of the oils and fats.
QUESTION 27: Explain the following terms with one example each.
|
Can you find an efficient method to work out how many handshakes there would be if hundreds of people met?
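One standard counting argument, noted here for reference: each of n people shakes hands with the other n - 1 people, and every handshake is counted twice this way, so the total is n(n - 1)/2; for 100 people that is 100 × 99 / 2 = 4950 handshakes.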
The diagram illustrates the formula 1 + 3 + 5 + ... + (2n - 1) = n². Use the diagram to show that any odd number is the difference of two squares.
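A worked identity, added for reference: any odd number 2k + 1 can be written as (k + 1)² - k² = k² + 2k + 1 - k² = 2k + 1, which is exactly the L-shaped strip the diagram adds to a k × k square; for example, 7 = 4² - 3².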
A square of area 40 square cms is inscribed in a semicircle. Find the area of the square that could be inscribed in a circle of the same radius.
If a sum invested gains 10% each year, how long before it has doubled its value?
A 1 metre cube has one face on the ground and one face against a wall. A 4 metre ladder leans against the wall and just touches the cube. How high is the top of the ladder above the ground?
Four bags contain a large number of 1s, 3s, 5s and 7s. Pick any ten numbers from the bags above so that their total is 37.
Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way?
On the graph there are 28 marked points. These points all mark the vertices (corners) of eight hidden squares. Can you find the eight hidden squares?
If the hypotenuse (base) length is 100cm and if an extra line splits the base into 36cm and 64cm parts, what were the side lengths for the original right-angled triangle?
Choose four consecutive whole numbers. Multiply the first and last numbers together. Multiply the middle pair together. What do you notice?
What is the same and what is different about these circle questions? What connections can you make?
Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ... + 149 + 151 + 153?
How many pairs of numbers can you find that add up to a multiple of 11? Do you notice anything interesting about your results?
Explore when it is possible to construct a circle which just touches all four sides of a quadrilateral.
Explore the effect of reflecting in two parallel mirror lines.
Many numbers can be expressed as the difference of two perfect squares. What do you notice about the numbers you CANNOT make?
A spider is sitting in the middle of one of the smallest walls in a room and a fly is resting beside the window. What is the shortest distance the spider would have to crawl to catch the fly?
Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces?
What size square corners should be cut from a square piece of paper to make a box with the largest possible volume?
Start with two numbers and generate a sequence where the next number is the mean of the last two numbers...
Can you maximise the area available to a grazing goat?
Have a go at creating these images based on circles. What do you notice about the areas of the different sections?
Here are four tiles. They can be arranged in a 2 by 2 square so that this large square has a green edge. If the tiles are moved around, we can make a 2 by 2 square with a blue edge... Now try to. . . .
What is the greatest volume you can get for a rectangular (cuboid) parcel if the maximum combined length and girth are 2 metres?
What does this number mean? Which order of 1, 2, 3 and 4 makes the highest value? Which makes the lowest?
All CD Heaven stores were given the same number of a popular CD to sell for £24. In their two week sale each store reduces the price of the CD by 25% ... How many CDs did the store sell at ...
Which has the greatest area, a circle or a square inscribed in an isosceles, right angle triangle?
Imagine you have a large supply of 3kg and 8kg weights. How many of each weight would you need for the average (mean) of the weights to be 6kg? What other averages could you have?
Two ladders are propped up against facing walls. The end of the first ladder is 10 metres above the foot of the first wall. The end of the second ladder is 5 metres above the foot of the second ...
This shape comprises four semi-circles. What is the relationship between the area of the shaded region and the area of the circle on AB as diameter?
Can you find the area of a parallelogram defined by two vectors?
There is a particular value of x, and a value of y to go with it, which make all five expressions equal in value; can you find that x, y pair?
A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all?
An aluminium can contains 330 ml of cola. If the can's diameter is 6 cm, what is the can's height?
The diagonals of a trapezium divide it into four parts. Can you create a trapezium where three of those parts are equal in area?
Investigate how you can work out what day of the week your birthday will be on next year, and the year after...
Substitute -1, -2 or -3 into an algebraic expression and you'll get three results. Is it possible to tell in advance which of those three will be the largest?
Manufacturers need to minimise the amount of material used to make their product. What is the best cross-section for a gutter?
A plastic funnel is used to pour liquids through narrow apertures. What shape funnel would use the least amount of plastic to manufacture for any specific volume?
Show that it is impossible to have a tetrahedron whose six edges have lengths 10, 20, 30, 40, 50 and 60 units...
Is there a relationship between the coordinates of the endpoints of a line and the number of grid squares it crosses?
Can you work out the dimensions of the three cubes?
A circle of radius r touches two sides of a right angled triangle, sides x and y, and has its centre on the hypotenuse. Can you prove the formula linking x, y and r?
What angle is needed for a ball to do a circuit of the billiard table and then pass through its original position?
A hexagon, with sides alternately a and b units in length, is inscribed in a circle. How big is the radius of the circle?
Explore the effect of combining enlargements.
How many different symmetrical shapes can you make by shading triangles or squares?
Why does this fold create an angle of sixty degrees?
There are lots of different methods to find out what the shapes are worth - how many can you find?
The area of a square inscribed in a circle with a unit radius is, satisfyingly, 2. What is the area of a regular hexagon inscribed in a circle with a unit radius?
|
HTML uses four characters of the ASCII character set in the marking up of its text. We have currently only used two but the complete set is as follows:
- < : the left angle bracket, used to denote the beginning of an HTML code sequence.
- > : the right angle bracket, used to denote the end of an HTML code sequence.
- & : the ampersand, used to signify that a special character is to be produced.
- " : the double quote, used to reference external information, for example documents that are external to the current document.
These all have a special meaning within HTML and therefore cannot be used "as is" in HTML documents.
To use one of these characters in an HTML document, you must enter the following text instead:
- &lt; is the sequence for <
- &gt; is the sequence for >
- &amp; is the sequence for &
- &quot; is the sequence for "
Additionally, HTML supports certain accented characters. For example:
- &ouml; is the sequence for a lowercase o with an umlaut: ö
- &ntilde; is the sequence for a lowercase n with a tilde: ñ
- &Egrave; is the sequence for an uppercase E with a grave accent: È
Note: Unlike the rest of HTML, the above sequences are case sensitive. You cannot, for instance, use &LT; instead of &lt;.
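For anyone producing HTML from a program rather than typing it by hand, this escaping can be automated; the following is a minimal Python sketch using the standard library (the sample string is only an illustration):

    import html

    # Replace <, > and &, and with quote=True also the double quote,
    # by their HTML entity sequences
    raw = 'if a < b & b > c then print "done"'
    print(html.escape(raw, quote=True))
    # if a &lt; b &amp; b &gt; c then print &quot;done&quot;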
|
The substitution property says that if x = y, then in any true equation involving y you can replace y with x and still have a true equation.
The substitution property of equality, one of the eight properties of equality, states that if x = y, then x in any equation can be replaced by y, and y in any equation can be replaced by x.
Solution: Since angles 1 and 3 are both congruent to the same angle, angle 2, they must be congruent to each other. Since we can only substitute equals in equations, there is no substitution property of congruence.
Substitution properties: if two geometric objects (segments, angles, triangles, or whatever) are congruent and you have a statement involving one of them, you can make the swap and replace it with the other.
If x = y, y can be replaced by x in any expression. The substitution property is more general than the transitive property, because we can replace x with y not only in y = z but in any expression. In other words, the transitive property is just one instance where the substitution property can be used.
Substitution method: the substitution method can be carried out in four steps. Solve one of the equations for x = or y =. Insert the solution from step 1 into the other equation. Solve this new equation. Then solve for the second variable.
The first is that when corresponding angles, the angles that are in the same position at each intersection, are equal, the lines are parallel. The second is that when alternate interior angles, the angles that are on opposite sides of the transversal and between the two lines, are equal, the lines are parallel.
The transitive property of congruence says that two objects that are congruent to a third object are congruent to each other. If giraffes have long necks and Melman from Madagascar is a giraffe, then Melman has a long neck. This is the transitive property at work: if a = b and b = c, then a = c.
Geometry proof strategies: Make a game plan. Make up numbers for the segments and angles. Look for congruent triangles (and remember CPCTC). Try to find isosceles triangles. Look for parallel lines. Look for radii and draw more radii. Use all the givens. Check your if-then logic.
Follow these steps to solve a system by substitution: Solve one equation for one of its variables. Substitute that expression into the second equation. Solve the new equation. Then substitute the value found back into either original equation and solve for the other variable.
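A minimal sketch of those four steps in Python with SymPy, applied to the hypothetical system y = 3 - x and 2x + y = 5 (the system is an assumed example, not one from the text):

    from sympy import symbols, Eq, solve

    x, y = symbols("x y")

    # Step 1: solve the first equation for y (it is already isolated): y = 3 - x
    y_expr = 3 - x
    # Step 2: substitute that expression into the second equation, 2x + y = 5
    equation = Eq(2 * x + y_expr, 5)
    # Step 3: solve the resulting single-variable equation
    x_value = solve(equation, x)[0]      # 2
    # Step 4: back-substitute to find the other variable
    y_value = y_expr.subs(x, x_value)    # 1
    print(x_value, y_value)              # 2 1

Checking in the original equations, 2(2) + 1 = 5 and 1 = 3 - 2, so the pair satisfies both.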
Mathematical definition of substitution: a strategy for solving systems of equations in which one equation is solved for one variable and that solution is used to find the other variable.
Partition postulate: the whole is equal to the sum of its parts. Addition postulate: if equal quantities are added to equal quantities, the sums are equal. Transitive property: if a = b and b = c, then a = c. Reflexive property: a quantity is congruent (equal) to itself, a = a. Symmetric property: if a = b, then b = a.
When two lines intersect to form an X, the angles on opposite sides of the X are called vertical angles. These angles are equal, and here is the official theorem that says it. Vertical angles are congruent: if two angles are vertical, then they are congruent (see the illustration above).
The division property of equality says that if you divide both sides of an equation by the same nonzero number, the sides remain equal. What is the subtraction property of equality?
The subtraction property of equality tells us that if we subtract a quantity from one side of an equation, we must also subtract it from the other side for the equation to remain true. So it is with all equations: to keep them equal, do the same thing to both sides.
|
Hubbert peak theory
The Hubbert peak theory says that for any given geographical area, from an individual oil-producing region to the planet as a whole, the rate of petroleum production tends to follow a bell-shaped curve. It is one of the primary theories on peak oil. Choosing a particular curve determines a point of maximum production based on discovery rates, production rates and cumulative production. Early in the curve (pre-peak), the production rate increases because of the discovery rate and the addition of infrastructure. Late in the curve (post-peak), production declines because of resource depletion.
Figure: 2004 U.S. government predictions for oil production other than in OPEC and the former Soviet Union
The Hubbert peak theory is based on the observation that the amount of oil under the ground in any region is finite; therefore the rate of discovery, which initially increases quickly, must reach a maximum and decline. In the US, oil extraction followed the discovery curve after a time lag of 32 to 35 years. The theory is named after American geophysicist M. King Hubbert, who created a method of modeling the production curve given an assumed ultimate recovery volume.
"Hubbert's peak" can refer to the peaking of production of a particular area, which has now been observed for many fields and regions. Hubbert's peak was reached in the continental US in the early 1970s. Oil production peaked at 10,200,000 barrels per day (1,620,000 m3/d). Since then, it has been in a gradual decline. Peak oil as a proper noun, or "Hubbert's peak" applied more generally, refers to a singular event in history: the peak of the entire planet's oil production. After peak oil, according to the Hubbert peak theory, the rate of oil production on Earth would enter a terminal decline. On the basis of his theory, in a paper he presented to the American Petroleum Institute in 1956, Hubbert correctly predicted that production of oil from conventional sources would peak in the continental United States around 1965-1970. Hubbert further predicted a worldwide peak "about half a century" from publication, at approximately 12 gigabarrels (Gb) a year in magnitude. In a 1976 TV interview Hubbert added that the actions of OPEC might flatten the global production curve, but this would only delay the peak for perhaps 10 years.
In 1956, Hubbert proposed that fossil fuel production in a given region over time would follow a roughly bell-shaped curve, without giving a precise formula; he later used the Hubbert curve, the derivative of the logistic curve, for estimating future production using past observed discoveries. Hubbert assumed that after fossil fuel reserves (oil reserves, coal reserves, and natural gas reserves) are discovered, production at first increases approximately exponentially, as more extraction commences and more efficient facilities are installed. At some point, a peak output is reached, and production begins declining until it approximates an exponential decline. The Hubbert curve satisfies these constraints. Furthermore, it is roughly symmetrical, with the peak of production reached when about half of the fossil fuel that will ultimately be produced has been produced. It also has a single peak. Given past oil discovery and production data, a Hubbert curve that attempts to approximate past discovery data may be constructed and used to provide estimates for future production. In particular, the date of peak oil production or the total amount of oil ultimately produced can be estimated that way. Cavallo defines the Hubbert curve used to predict the U.S. peak as the derivative of:

Q(t) = Q_max / (1 + a e^(-bt))

where Q_max is the total resource available (the ultimate recovery), Q(t) is the cumulative production, and a and b are constants. The year of maximum annual production (the peak) is:

t_peak = (1/b) ln(a)

Figure: The standard Hubbert curve. For applications, the x and y scales are replaced by time and production scales.
Figure: U.S. Oil Production and Imports 1920 to 2005
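The closed-form peak year follows directly from the logistic model; below is a minimal Python sketch (the parameter values are illustrative, not Hubbert's fitted constants):

    import math

    def cumulative_production(t, q_max, a, b):
        # Q(t) = Q_max / (1 + a * exp(-b * t))
        return q_max / (1.0 + a * math.exp(-b * t))

    def hubbert_curve(t, q_max, a, b, dt=1e-6):
        # Production rate: the derivative of Q(t), approximated numerically
        return (cumulative_production(t + dt, q_max, a, b)
                - cumulative_production(t - dt, q_max, a, b)) / (2 * dt)

    # Illustrative constants: 200 Gb ultimate recovery, 6% logistic growth rate
    q_max, a, b = 200.0, 50.0, 0.06
    t_peak = math.log(a) / b  # year of maximum production, relative to the start year
    print(round(t_peak, 1))                                      # 65.2
    print(round(cumulative_production(t_peak, q_max, a, b), 1))  # 100.0

Note the second printed value: at the peak, cumulative production equals Q_max/2, matching the statement above that the peak arrives when about half of the ultimately produced fuel has been produced.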
Use of multiple curves
The sum of multiple Hubbert curves, a technique not developed by Hubbert himself, may be used in order to model more complicated real life scenarios.
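Continuing the sketch above (and reusing its hubbert_curve function), a multi-cycle model is simply a pointwise sum of single curves; the two cycles and their parameters here are hypothetical:

    def multi_cycle_production(t, cycles):
        # One (q_max, a, b) triple per production cycle
        return sum(hubbert_curve(t, q_max, a, b) for q_max, a, b in cycles)

    # e.g. a large early cycle plus a smaller, later one
    cycles = [(150.0, 50.0, 0.06), (50.0, 400.0, 0.05)]
    print(round(multi_cycle_production(70.0, cycles), 3))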
Definition of reserves
Figure: Norway's oil production and a Hubbert curve approximating it
Almost all Hubbert peak analyses must be put in the context of ore grade. Except for fissionable materials, any resource, including oil, is theoretically recoverable from the environment with the right technology. In contrast, Hubbert was concerned with "easy" oil, "easy" metals, and so forth that could be recovered without greatly advanced mining efforts, and with how to time the necessity of such resource acquisition advancements or substitutions by knowing an "easy" resource's probable peak. Also, as reserves become more difficult to extract, there is the possibility that mining or alternatives will be too expensive for developing countries.
For heavy crude or deep water drilling attempts, such as the Noxal oil field, or tar sands or oil shale, the price of the oil extracted will have to include the extra effort required to mine these resources. According to the U.S. Bureau of Ocean Energy Management, Regulation and Enforcement (formerly the Minerals Management Service), areas such as the Outer Continental Shelf may also incur higher costs due to environmental concerns. So not all oil reserves are equal, and the more difficult reserves are predicted by Hubbert as being typical of the post-peak side of the Hubbert curve.
Hubbert, in his 1956 paper, presented two scenarios for US conventional oil production (crude oil + condensate):
• most likely estimate: a logistic curve with a logistic growth rate equal to 6%, an ultimate resource equal to 150 Giga-barrels (Gb) and a peak in 1965;
• upper-bound estimate: a logistic curve with a logistic growth rate equal to 6%, an ultimate resource equal to 200 Giga-barrels and a peak in 1970.
Hubbert's upper-bound estimate, which he regarded as optimistic, accurately predicted that US oil production would peak in 1970. Forty years later, the upper-bound estimate has also proven to be very accurate in terms of cumulative production, less so in terms of annual production. For 2005, the upper-bound Hubbert model predicts 178.2 Gb cumulative and 1.17 Gb current production; actual US production was 176.4 Gb cumulative crude oil + condensate (1% lower than the upper-bound estimate), with annual production of 1.55 Gb (32% higher than the upper-bound estimate). A post-hoc analysis of peaked oil wells, fields, regions and nations found that Hubbert's model was the "most widely useful" (providing the best fit to the data), though many areas studied had a sharper "peak" than predicted.
Figure: US oil production (Lower 48 states crude oil only) and Hubbert high estimate for the US
Energy return on energy investment
Figure: Oil imports by country, pre-2006
When oil production first began in the mid-nineteenth century, the largest oil fields recovered fifty barrels of oil for every barrel used in the extraction, transportation and refining. This ratio is often referred to as the Energy Return on Energy Investment (EROI or EROEI). Currently, between one and five barrels of oil are recovered for each barrel-equivalent of energy used in the recovery process. As the EROEI drops to one, or equivalently the net energy gain falls to zero, oil production is no longer a net energy source. This happens long before the resource is physically exhausted. Note that it is important to understand the distinction between a barrel of oil, which is a measure of oil, and a barrel of oil equivalent (BOE), which is a measure of energy. Many sources of energy, such as fission, solar, wind, and coal, are not subject to the same near-term supply restrictions that oil is. Accordingly, even an oil source with an EROEI of 0.5 can be usefully exploited if the energy required to produce that oil comes from a cheap and plentiful energy source. Availability of cheap, but hard to transport, natural gas in some oil fields has led to using natural gas to fuel enhanced oil recovery. Similarly, natural gas in huge amounts is used to power most Athabasca Tar Sands plants. Cheap natural gas has also led to ethanol fuel produced with a net EROEI of less than 1, although figures in this area are controversial because methods to measure EROEI are in debate.
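As a small illustration of this net-energy arithmetic (the numbers are hypothetical, chosen only to mirror the fifty-to-one and one-to-one cases described above):

    def eroei(energy_returned, energy_invested):
        # Energy Return on Energy Investment
        return energy_returned / energy_invested

    def net_energy_gain(energy_returned, energy_invested):
        # Production stops being a net energy source when this reaches zero
        return energy_returned - energy_invested

    # Early large field: ~50 barrels recovered per barrel-equivalent invested
    print(eroei(50.0, 1.0), net_energy_gain(50.0, 1.0))  # 50.0 49.0
    # EROEI of 1: no net energy, long before physical exhaustion
    print(eroei(1.0, 1.0), net_energy_gain(1.0, 1.0))    # 1.0 0.0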
Growth-based economic models
Insofar as economic growth is driven by oil consumption growth, post-peak societies must adapt. Hubbert believed:
Figure: World energy consumption & predictions, 2005-2035. Source: International Energy Outlook 2011
Our principal constraints are cultural. During the last two centuries we have known nothing but exponential growth and in parallel we have evolved what amounts to an exponential-growth culture, a culture so heavily dependent upon the continuance of exponential growth for its stability that it is incapable of reckoning with problems of nongrowth.
Some economists describe the problem as uneconomic growth or a false economy. At the political right, Fred Ikle has warned about "conservatives addicted to the Utopia of Perpetual Growth". Brief oil interruptions in 1973 and 1979 markedly slowed - but did not stop - the growth of world GDP. Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon fueled irrigation. David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), place in their study Food, Land, Population and the U.S. Economy the maximum U.S. population for a sustainable economy at 200 million. To achieve a sustainable economy world population will have to be reduced by two-thirds, says the study. Without population reduction, this study predicts an agricultural crisis beginning in 2020, becoming critical c. 2050. The peaking of global oil along with the decline in regional natural gas production may precipitate this agricultural crisis sooner than generally expected. Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before.
Although Hubbert peak theory receives most attention in relation to peak oil production, it has also been applied to other natural resources.
Natural gas
Doug Reynolds predicted in 2005 that the North American natural gas peak would occur in 2007. Bentley predicted a world "decline in conventional gas production from about 2020".
Coal
Although observers believe that peak coal is significantly further out than peak oil, Hubbert studied the specific example of anthracite in the USA, a high-grade coal, whose production peaked in the 1920s. Hubbert found that anthracite matches a curve closely. Pennsylvania's coal production also matches Hubbert's curve closely, but this does not mean that coal in Pennsylvania is exhausted; far from it. If production in Pennsylvania returned to its all-time high, there would be reserves for 190 years. Hubbert put recoverable coal reserves worldwide at 2500 × 10^9 metric tons, peaking around 2150 (depending on usage). More recent estimates suggest an earlier peak. Coal: Resources and Future Production (PDF, 630 KB), published on April 5, 2007 by the Energy Watch Group (EWG), which reports to the German Parliament, found that global coal production could peak in as few as 15 years. Reporting on this, Richard Heinberg also notes that the date of peak annual energetic extraction from coal will likely come earlier than the date of peak in quantity of coal (tons per year) extracted, as the most energy-dense types of coal have been mined most extensively. A second study, The Future of Coal by B. Kavalov and S. D. Peteves of the Institute for Energy (IFE), prepared for the European Commission Joint Research Centre, reaches similar conclusions and states that "coal might not be so abundant, widely available and reliable as an energy source in the future". Work by David Rutledge of Caltech predicts that the total of world coal production will amount to only about 450 gigatonnes. This implies that coal is running out faster than usually assumed. Finally, insofar as global peaks in oil and natural gas are expected anywhere from imminently to within decades at most, any increase in coal production (mining) per annum to compensate for declines in oil or natural gas production would necessarily translate to an earlier date of peak coal, as compared with a scenario in which annual production remains constant.
Fissionable materials
In a paper in 1956, after a review of US fissionable reserves, Hubbert notes of nuclear power:
There is promise, however, provided mankind can solve its international problems and not destroy itself with nuclear weapons, and provided world population (which is now expanding at such a rate as to double in less than a century) can somehow be brought under control, that we may at last have found an energy supply adequate for our needs for at least the next few centuries of the "foreseeable future."
Technologies such as the thorium fuel cycle, reprocessing and fast breeders can, in theory, considerably extend the life of uranium reserves. Roscoe Bartlett claims
Our current throwaway nuclear cycle uses up the world reserve of low-cost uranium in about 20 years.
Caltech physics professor David Goodstein has stated that
... you would have to build 10,000 of the largest power plants that are feasible by engineering standards in order to replace the 10 terawatts of fossil fuel we're burning today ... that's a staggering amount and if you did that, the known reserves of uranium would last for 10 to 20 years at that burn rate. So, it's at best a bridging technology ... You can use the rest of the uranium to breed plutonium 239 then we'd have at least 100 times as much fuel to use. But that means you're making plutonium, which is an extremely dangerous thing to do in the dangerous world that we live in.
Helium
Almost all helium on Earth is a result of radioactive decay of uranium and thorium. Helium is extracted by fractional distillation from natural gas, which contains up to 7% helium. The world's largest helium-rich natural gas fields are found in the United States, especially in the Hugoton and nearby gas fields in Kansas, Oklahoma, and Texas. The extracted helium is stored underground in the National Helium Reserve near Amarillo, Texas, the self-proclaimed "Helium Capital of the World". Helium production is expected to decline along with natural gas production in these areas. Helium is the second-lightest chemical element in the Universe, causing it to rise to the upper layers of Earth's atmosphere. Helium atoms are so light that the Earth's gravity field is simply not strong enough to trap helium in the atmosphere, so it dissipates slowly into space and is lost forever.
Metals
Hubbert applied his theory to "rock containing an abnormally high concentration of a given metal" and reasoned that the peak production for metals such as copper, tin, lead, zinc and others would occur in the time frame of decades, and iron in the time frame of two centuries, like coal. The price of copper rose 500% between 2003 and 2007 and was attributed by some to peak copper. Copper prices later fell, along with many other commodities and stock prices, as demand shrank from fear of a global recession. Lithium availability is a concern for a fleet of cars using Li-ion batteries, but a paper published in 1996 estimated that world reserves are adequate for at least 50 years. A similar prediction for platinum use in fuel cells notes that the metal could be easily recycled.
Precious metals
The possibility of peak gold has emerged recently. Aaron Regent, president of the Canadian gold giant Barrick Gold, said that global output has been falling by roughly one million ounces a year since the start of the decade. The total global mine supply has dropped by 10 percent as ore quality erodes, implying that the roaring bull market of the last eight years may have further to run. "There is a strong case to be made that we are already at 'peak gold'," he told The Daily Telegraph at RBC's annual gold conference in London. "Production peaked around 2000 and it has been in decline ever since, and we forecast that decline to continue. It is increasingly difficult to find ore," he said. Ore grades have fallen from around 12 grams per tonne in 1950 to nearer 3 grams in the US, Canada, and Australia. South Africa's output has halved since peaking in 1970. Output fell a further 14 percent in South Africa in 2008 as companies were forced to dig ever deeper, at greater cost, to replace depleted reserves.
Phosphorus
Phosphorus supplies are essential to farming, and depletion of reserves is estimated at somewhere from 60 to 130 years. According to a 2008 study, the total reserves of phosphorus are estimated to be approximately 3,200 MT, with a peak production of 28 MT/year in 2034. Individual countries' supplies vary widely; without a recycling initiative, America's supply is estimated at around 30 years. Phosphorus supplies affect agricultural output, which in turn limits alternative fuels such as biodiesel and ethanol. Its increasing price and scarcity (the global price of rock phosphate rose 8-fold in the two years to mid-2008) could change global agricultural patterns. Lands perceived as marginal because of remoteness, but with very high phosphorus content, such as the Gran Chaco, may see more agricultural development, while other farming areas, where nutrients are a constraint, may drop below the line of profitability.
Renewable resources
Hubbert's original analysis did not apply to renewable resources. However, over-exploitation often results in a Hubbert peak nonetheless. A modified Hubbert curve applies to any resource that can be harvested faster than it can be replaced. For example, a reserve such as the Ogallala Aquifer can be mined at a rate that far exceeds replenishment. This turns much of the world's underground water and lakes into finite resources with peak usage debates similar to oil. These debates usually center around agriculture and suburban water usage, but the generation of electricity from nuclear energy or coal, and the tar sands mining mentioned above, are also water-resource intensive. The term fossil water is sometimes used to describe aquifers whose water is not being recharged.
• Fisheries: At least one researcher has attempted to perform Hubbert linearization (Hubbert curve) on the whaling industry, as well as charting the transparently dependent price of caviar on sturgeon depletion. Another example is the cod of the North Sea. The comparison of the cases of fisheries and of mineral extraction tells us that the human pressure on the environment is causing a wide range of resources to go through a depletion cycle which follows a Hubbert curve.
Criticisms of peak oil
Economist Michael Lynch argues that the theory behind the Hubbert curve is too simplistic and relies on an overly Malthusian point of view. Lynch claims that Campbell's predictions for world oil production are strongly biased towards underestimates, and that Campbell has repeatedly pushed back the date. Leonardo Maugeri, vice president of the Italian energy company Eni, argues that nearly all peak estimates do not take into account unconventional oil, even though the availability of these resources is significant and the costs of extraction and processing, while still very high, are falling because of improved technology. He also notes that the recovery rate from existing world oil fields has increased from about 22% in 1980 to 35% today because of new technology, and predicts this trend will continue. The ratio between proven oil reserves and current production has constantly improved, passing from 20 years in 1948 to 35 years in 1972 and reaching about 40 years in 2003. These improvements occurred even with low investment in new exploration and upgrading technology because of the low oil prices during the last 20 years. However, Maugeri feels that encouraging more exploration will require relatively high oil prices. Edward Luttwak, an economist and historian, claims that unrest in countries such as Russia, Iran and Iraq has led to a massive underestimate of oil reserves. The Association for the Study of Peak Oil and Gas (ASPO) responds by claiming neither Russia nor Iran are troubled by unrest currently, but Iraq is. Cambridge Energy Research Associates authored a report that is critical of Hubbert-influenced predictions:
Despite his valuable contribution, M. King Hubbert's methodology falls down because it does not consider likely resource growth, application of new technology, basic commercial factors, or the impact of geopolitics on production. His approach does not work in all cases, including on the United States itself, and cannot reliably model a global production outlook. Put more simply, the case for the imminent peak is flawed. As it is, production in 2005 in the Lower 48 in the United States was 66 percent higher than Hubbert projected.
CERA does not believe there will be an endless abundance of oil, but instead believes that global production will eventually follow an “undulating plateau” for one or more decades before declining slowly, and that production will reach 40 Mb/d by 2015.
Alfred J. Cavallo, while predicting a conventional oil supply shortage by no later than 2015, does not think Hubbert's peak is the correct theory to apply to world production.
Criticisms of peak element scenarios
Although M. King Hubbert himself made major distinctions between decline in petroleum production versus depletion (or relative lack of it) for elements such as fissionable uranium and thorium, some others have predicted peaks like peak uranium and peak phosphorus soon on the basis of published reserve figures compared to present and future production. According to some economists, though, the amount of proved reserves inventoried at a given time may be considered "a poor indicator of the total future supply of a mineral resource." As illustrations, for tin, copper, iron, lead, and zinc, cumulative production from 1950 to 2000 plus reserves in 2000 much exceeded world reserves in 1950, which would be impossible except that "proved reserves are like an inventory of cars to an auto dealer" at a given time, having little relationship to the actual total affordable to extract in the future. In the example of peak phosphorus, additional concentrations exist intermediate between the 71,000 MT of identified reserves (USGS) and the approximately 30,000,000,000 MT of other phosphorus in Earth's crust (the average rock being 0.1% phosphorus), so showing that a decline in human phosphorus production will occur soon would require far more than comparing the former figure to the 190 MT/yr of phosphorus extracted in mines (2011 figure).
Creative Commons Attribution-Share Alike 3.0 Unported //creativecommons.org/licenses/by-sa/3.0/
|
East Pakistan was the eastern provincial wing of Pakistan between 1955 and 1971, covering the territory of the modern country Bangladesh. Its land borders were with India and Burma, with a coastline on the Bay of Bengal.
East Pakistan was renamed from East Bengal under the One Unit scheme of Prime Minister Mohammad Ali of Bogra. The Constitution of Pakistan of 1956 replaced the British monarchy with an Islamic republic. The Bengali politician H. S. Suhrawardy served as Prime Minister of Pakistan between 1956 and 1957, and the Bengali bureaucrat Iskandar Mirza became the first President of Pakistan. The 1958 Pakistani coup d'état brought General Ayub Khan to power. Khan replaced Mirza as president and launched a crackdown against pro-democracy leaders. Khan enacted the Constitution of Pakistan of 1962, which ended universal suffrage. By 1966, Sheikh Mujibur Rahman had emerged as the preeminent opposition leader in Pakistan and launched the six point movement for autonomy and democracy. The 1969 uprising in East Pakistan contributed to Ayub Khan's overthrow. Another general, Yahya Khan, usurped the presidency and enacted martial law. In 1970, a major tropical cyclone struck East Pakistan, causing enormous casualties. In the same year, Yahya Khan organized Pakistan's first federal general election. The Awami League emerged as the single largest party, followed by the Pakistan Peoples Party. The military junta stalled in accepting the results, leading to civil disobedience, the Bangladesh Liberation War and the 1971 Bangladesh genocide. East Pakistan seceded with the help of India.
The East Pakistan Provincial Assembly was the legislative body of the territory.
Due to the strategic importance of East Pakistan, the Pakistani union was a member of the Southeast Asia Treaty Organization. The economy of East Pakistan grew at an average of 2.6% between 1960 and 1965. The federal government invested more funds and foreign aid in West Pakistan, even though East Pakistan generated a major share of exports. However, President Ayub Khan did implement significant industrialization in East Pakistan. The Kaptai Dam was built in 1965. The Eastern Refinery was established in Chittagong. Dacca was declared as the second capital of Pakistan and planned as the home of the national parliament. The government recruited American architect Louis Kahn to design the national assembly complex in Dacca.
East Pakistan (eastern provincial wing of Pakistan)
- Languages: Bengali, Urdu and English
- Government: parliamentary constitutional monarchy (1955–1956); parliamentary democracy under an Islamic republic (1956–1958); martial law (1958–1962); presidential republic (1962–1970); martial law (1970–1971)
- One Unit: 14 October 1955
- Surrender of Pakistan: 16 December 1971
- Area: 147,610 km² (56,990 sq mi)
- Today part of: Bangladesh
In 1955, Prime Minister Mohammad Ali Bogra implemented the One Unit scheme which merged the four western provinces into a single unit called West Pakistan while East Bengal was renamed as East Pakistan.
Pakistan ended its dominion status and adopted a republican constitution in 1956, which proclaimed an Islamic republic. The populist leader H. S. Suhrawardy of East Pakistan was appointed Prime Minister of Pakistan. As soon as he became prime minister, Suhrawardy initiated legal work to revive the joint electorate system. There was strong opposition and resentment toward the joint electorate system in West Pakistan, where the Muslim League took the cause to the public and began calling for the implementation of a separate electorate system. In contrast, the joint electorate was highly popular in East Pakistan. The tug of war with the Muslim League over the appropriate electorate caused problems for his government.
In 1956, Prime Minister Suhrawardy suspended the constitutionally obliged National Finance Commission Program (NFC Program) despite the reservations of the four provinces of West Pakistan. Suhrawardy advocated Soviet-style Five-Year Plans to centralize the national economy. Accordingly, East Pakistan's economy was quickly centralized and all major economic planning shifted to West Pakistan.
Efforts to centralize the economy met great resistance in West Pakistan, where the elite monopolists and the business community angrily refused to comply with his policies. The business community in Karachi began a political struggle to undermine any attempt to distribute the better part of the US$10 million ICA aid to East Pakistan and to set up a consolidated national shipping corporation. In the financial cities of West Pakistan, such as Karachi, Lahore, Quetta, and Peshawar, there was a series of major labour strikes against Suhrawardy's economic policies, supported by the elite business community and the private sector.
Furthermore, to divert attention from the controversial One Unit Program, Prime Minister Suhrawardy tried to end the crisis by inviting a small group of investors to set up small businesses in the country. Despite these initiatives and the holding off of the NFC Award Program, Suhrawardy's political position and image deteriorated in the four provinces of West Pakistan. Many nationalist leaders and Muslim League activists were dismayed by the suspension of the constitutionally obliged NFC Program. His critics and Muslim League leaders observed that, with its suspension, Suhrawardy sought to direct more financial allocations, aid, grants, and opportunities to East Pakistan than to West Pakistan and its four provinces. During his last days as prime minister, Suhrawardy tried to remove the economic disparity between the eastern and western wings of the country, but to no avail. He also tried, unsuccessfully, to alleviate the food shortage in the country.
Suhrawardy strengthened relations with the United States by reinforcing Pakistani membership in the Central Treaty Organization and the Southeast Asia Treaty Organization, and also promoted relations with the People's Republic of China. His contribution to formulating the 1956 constitution of Pakistan was substantial: he played a vital role in incorporating provisions for civil liberties and universal adult franchise, in line with his adherence to a parliamentary form of liberal democracy.
In 1958, President Iskandar Mirza enacted martial law as part of a military coup led by the Pakistan Army's chief, Ayub Khan. Roughly two weeks later, President Mirza's relations with the Pakistan Armed Forces deteriorated, and Army Commander General Ayub Khan removed the president from office and forcibly exiled him to the United Kingdom. General Ayub Khan justified his actions on national radio, declaring that "the armed forces and the people demanded a clean break with the past...". He replaced Mirza as president and remained the country's strongman for eleven years. Martial law continued until 1962, during which Field Marshal Ayub Khan purged a number of politicians and civil servants from the government, replaced them with military officers, and called his regime a "revolution to clean up the mess of black marketing and corruption". During this period, his government commissioned a constitutional bench under Chief Justice of Pakistan Muhammad Shahabuddin, composed of ten senior justices, five from East Pakistan and five from West Pakistan. On 6 May 1961, the commission sent its draft to President Ayub Khan, who examined it thoroughly in consultation with his cabinet.
In January 1962, the cabinet approved the text of the new constitution, which President Ayub Khan promulgated on 1 March 1962 and which came into effect on 8 June 1962. Under the 1962 constitution, Pakistan became a presidential republic. Universal suffrage was abolished in favor of a system dubbed "Basic Democracy", under which an electoral college was responsible for electing the president and the national assembly. The 1962 constitution created a gubernatorial system in West and East Pakistan: each province ran its own provincial gubernatorial government. The constitution defined a division of powers between the central government and the provinces. Fatima Jinnah received strong support in East Pakistan during her failed bid to unseat Ayub Khan in the 1965 presidential election.
Dacca was declared as the second capital of Pakistan in 1962. It was designated as the legislative capital and Louis Kahn was tasked with designing a national assembly complex. Dacca's population increased in the 1960s. Seven natural gas fields were tapped in the province. The petroleum industry developed as the Eastern Refinery was established in the port city of Chittagong.
In 1966, Awami League leader Sheikh Mujibur Rahman announced the six point movement in Lahore. The movement demanded greater provincial autonomy and the restoration of democracy in Pakistan. Rahman was indicted for treason in the Agartala Conspiracy Case after launching the movement. He was released during the 1969 uprising in East Pakistan, which ousted Ayub Khan from the presidency. The historical six points were:
- The Constitution should provide for a Federation of Pakistan in its true sense based on the Lahore Resolution, and the parliamentary form of government with supremacy of a Legislature directly elected on the basis of universal adult franchise.
- The federal government should deal with only two subjects: Defence and Foreign Affairs, and all other residual subjects should be vested in the federating states.
- Two separate, but freely convertible currencies for two wings should be introduced; or if this is not feasible, there should be one currency for the whole country, but effective constitutional provisions should be introduced to stop the flight of capital from East to West Pakistan. Furthermore, a separate Banking Reserve should be established and separate fiscal and monetary policy be adopted for East Pakistan.
- The power of taxation and revenue collection should be vested in the federating units and the federal centre would have no such power. The federation would be entitled to a share in the state taxes to meet its expenditures.
- There should be two separate accounts for the foreign exchange earnings of the two wings; the foreign exchange requirements of the federal government should be met by the two wings equally or in a ratio to be fixed; indigenous products should move free of duty between the two wings, and the constitution should empower the units to establish trade links with foreign countries.
- East Pakistan should have a separate military or paramilitary force, and Navy headquarters should be in East Pakistan.
Ayub Khan was replaced by General Yahya Khan, who became the Chief Martial Law Administrator. Khan organized the 1970 Pakistani general election. The 1970 Bhola cyclone, one of the deadliest natural disasters of the 20th century, claimed half a million lives, and its disastrous effects caused huge resentment against the federal government. After a decade of military rule, East Pakistan was a hotbed of Bengali nationalism, with open calls for self-determination.
When the federal general election was held, the Awami League emerged as the single largest party in the Pakistani parliament. The League won 167 out of 169 seats in East Pakistan, thereby crossing the halfway mark of 150 in the 300-seat National Assembly of Pakistan. In theory, this gave the League the right to form a government under the Westminster tradition, but the League had failed to win a single seat in West Pakistan, where the Pakistan Peoples Party emerged as the single largest party with 81 seats. The military junta stalled the transfer of power and conducted prolonged negotiations with the League. A civil disobedience movement erupted across East Pakistan demanding the convening of parliament. Rahman announced a struggle for independence from Pakistan during a speech on 7 March 1971. Between 7 and 26 March, East Pakistan was virtually under the popular control of the Awami League. On Pakistan's Republic Day, 23 March 1971, the first flag of Bangladesh was hoisted in many East Pakistani households. The Pakistan Army launched a crackdown on 26 March, including Operation Searchlight and the 1971 Dhaka University massacre. This led to the Bangladeshi Declaration of Independence.
As the Bangladesh Liberation War and the 1971 Bangladesh genocide continued for nine months, East Pakistani military units like the East Bengal Regiment and the East Pakistan Rifles defected to form the Bangladesh Forces. The Provisional Government of Bangladesh allied with neighboring India which intervened in the final two weeks of the war and secured the surrender of Pakistan.
With Ayub Khan ousted from office in 1969, the Commander of the Pakistan Army, General Yahya Khan, became the country's second ruling chief martial law administrator. Both Bhutto and Mujib strongly disliked General Khan but patiently endured him and his government, as he had promised to hold an election in 1970. During this time, the Pakistani Armed Forces and the central military government perceived strong nationalist sentiment in East Pakistan, and Khan's government sought to divert the nationalist threats and the violence directed against non-East Pakistanis. The Eastern Military High Command was under constant pressure from the Awami League and requested an active-duty officer to take control of the command. Senior flag officers, junior officers and many high-command officers of the Pakistan Armed Forces were wary of appointments in East Pakistan, and the high military command considered the assignment of governing the province, and the choice of an officer for it, highly difficult.
East Pakistan's armed forces, under the military administrations of Major-General Muzaffaruddin and Lieutenant-General Sahabzada Yaqub Khan, used an excessive show of military force to curb the uprising in the province. The situation became highly critical and civil control over the province slipped away from the government. Dissatisfied with the performance of his generals, Yahya Khan removed General Muzaffaruddin and then General Yaqub Khan from office by 1 September 1969. With the crisis continually deteriorating, the appointment of a new military administrator was considered difficult and challenging. Vice-Admiral Syed Mohammad Ahsan, Chief of Naval Staff of the Pakistan Navy, had previously served as political and military adviser on East Pakistan to former President Ayub Khan. Given his strong administrative background and his expertise in East Pakistan affairs, General Yahya Khan appointed Vice-Admiral Ahsan as Martial Law Administrator with absolute authority in his command; he was relieved as Chief of Naval Staff and received an extension of service from the government. On 1 September, Admiral Ahsan assumed the Eastern Military High Command, becoming unified commander of the Pakistan Armed Forces in East Pakistan. Under his command, the Pakistani Armed Forces were withdrawn from the cities and deployed along the border. The rate of violence in East Pakistan dropped, nearly coming to an end, and civil rule improved and stabilised under Admiral Ahsan's administration.
The tense relations between East and West Pakistan reached a climax in 1970 when the Awami League, the largest East Pakistani political party, led by Sheikh Mujibur Rahman (Mujib), won a landslide victory in the national elections in East Pakistan. The party won 160 of the 162 seats allotted to East Pakistan, and thus a majority of the 300 seats in the Parliament. This gave the Awami League the constitutional right to form a government without a coalition with any other party. Khan invited Mujib to Rawalpindi to take charge of the office, and negotiations took place between the military government and the Awami League. Bhutto was shocked by the results and threatened his fellow Peoples Party members, famously saying he would "break the legs" of any member of his party who dared to attend the inaugural session of the National Assembly. Fearing East Pakistani separatism, however, Bhutto demanded that Mujib form a coalition government. After a secret meeting held in Larkana, Mujib agreed to give Bhutto the office of the presidency, with Mujib as prime minister. General Yahya Khan and his military government were kept unaware of these developments, and under pressure from his own military government, Khan refused to allow Rahman to become the Prime Minister of Pakistan. This increased agitation for greater autonomy in East Pakistan. The Military Police arrested Mujib and Bhutto and placed them in Adiala Jail in Rawalpindi. The news spread like wildfire in both East and West Pakistan, and the struggle for independence began in East Pakistan.
The senior high-command officers of the Pakistan Armed Forces, together with Zulfikar Ali Bhutto, began to pressure General Yahya Khan to take armed action against Mujib and his party; Bhutto distanced himself from Yahya Khan after being arrested by the Military Police along with Mujib. Soon after the arrests, Yahya Khan chaired a high-level meeting at which the high commanders of the Pakistan Armed Forces unanimously recommended violent armed military action. East Pakistan's Martial Law Administrator Admiral Ahsan, unified commander of the Eastern Military High Command (EMHC), and Air Marshal Mitty Masud, Commander of the Eastern Air Force Command (EAFC), were the only officers to object to the plans. When it became obvious that military action in East Pakistan was inevitable, Admiral Ahsan resigned from his position as martial law administrator in protest and immediately flew back to Karachi, West Pakistan. Disheartened and isolated, Admiral Ahsan took early retirement from the Navy and quietly settled in Karachi. Once Operation Searchlight and Operation Barisal commenced, Air Marshal Masud flew to West Pakistan and, unlike Admiral Ahsan, tried to stop the violence in East Pakistan. When he failed in his attempts to meet General Yahya Khan, Masud too resigned from his position as Commander of the Eastern Air Command and retired from the Air Force.
Lieutenant-General Sahabzada Yaqub Khan was sent to East Pakistan on an emergency basis following the major blow of Vice-Admiral Ahsan's resignation. General Yaqub temporarily assumed control of the province as unified commander of the Pakistan Armed Forces, mobilising all major forces and redeploying them across East Pakistan.
Sheikh Mujibur Rahman made a declaration of independence at Dacca on 26 March 1971. All major Awami League leaders, including elected members of the National Assembly and Provincial Assembly, fled to neighbouring India, and a government-in-exile headed by Mujibur Rahman was formed. While he was imprisoned in Pakistan, Syed Nazrul Islam served as acting president with Tajuddin Ahmad as prime minister. The exile government took its oath on 17 April 1971 at Mujibnagar, within the Kushtia district of East Pakistan, and formally formed the government. Colonel M. A. G. Osmani was appointed Commander-in-Chief of the liberation forces, and the whole of East Pakistan was divided into eleven sectors headed by eleven sector commanders, all of them Bengali officers who had defected from the Pakistan Army. This started the Bangladesh Liberation War, in which the freedom fighters, joined in December 1971 by 400,000 Indian soldiers, faced the Pakistani Armed Forces of 365,000 plus paramilitary and collaborationist forces. An additional approximately 25,000 ill-equipped civilian volunteers and police also sided with the Pakistan Armed Forces. Bloody guerrilla warfare ensued in East Pakistan.
The Pakistan Armed Forces were unable to counter such threats. Poorly trained and inexperienced in guerrilla tactics, the Pakistan Armed Forces and their assets were defeated by the Bangladesh liberation forces. In April 1971, Lieutenant-General Tikka Khan succeeded General Yaqub Khan as commander of the unified forces. General Tikka Khan led massive and violent campaigns of massacre in the region and is held responsible for the killing of hundreds of thousands of Bengalis in East Pakistan, mostly unarmed civilians, for which he gained the title "Butcher of Bengal". The international reaction against Pakistan led to General Tikka's removal as commander of the eastern front. A civilian administration was installed under Abdul Motaleb Malik on 31 August 1971, but it proved ineffective. With no senior officers willing to assume command of East Pakistan, Lieutenant-General Amir Abdullah Khan Niazi volunteered for it. Given Niazi's inexperience and the magnitude of the assignment, the government sent Vice-Admiral Mohammad Shariff as his second-in-command; Admiral Shariff served as deputy unified commander of the Pakistan Armed Forces in East Pakistan. General Niazi nevertheless proved an ineffective commander: neither he nor Air Marshal Enamul Haque, Commander of the Eastern Air Force Command (EAFC), launched any operation in East Pakistan against India or its allies. Only Admiral Shariff continued to exert pressure on the Indian Navy until the end of the conflict; his effective plans made it nearly impossible for the Indian Navy to land naval forces on the shores of East Pakistan, and the Pakistan Navy was still offering resistance. The Indian Army entered East Pakistan from three directions, while the Indian Navy decided to wait near the Bay of Bengal until the Army reached the shore.
The Indian Air Force dismantled the capability of the Pakistan Air Force in East Pakistan. Air Marshal Enamul Haque failed to offer any serious resistance to the actions of the Indian Air Force, and for most of the war the IAF enjoyed complete dominance of the skies over East Pakistan.
On 16 December 1971, the Pakistan Armed Forces surrendered to the joint liberation forces of Mukti Bahini and the Indian army, headed by Lieutenant-General Jagjit Singh Arora, the General Officer Commanding-in-Chief (GOC-in-C) of the Eastern Command of the Indian Army. Lieutenant General AAK Niazi, the last unified commander of Pakistan Armed Forces' Eastern Military High Command, signed the Instrument of Surrender at about 4:31 pm. Over 93,000 personnel, including Lt. General Niazi and Admiral Shariff, were taken as prisoners of war.
On 16 December 1971, East Pakistan was liberated from Pakistan as the newly independent state of Bangladesh. The Eastern Military High Command, civilian institutions and paramilitary forces were disbanded.
In contrast to the desert and rugged mountainous terrain of West Pakistan, East Pakistan featured the world's largest delta, 700 rivers and tropical hilly jungles.
East Pakistan inherited its districts from British Bengal. In 1960, Lower Tippera was renamed Comilla. In 1969, new districts were created, with Tangail separated from Mymensingh and Patuakhali from Barisal. East Pakistan's numbered districts included:
- 11. Hill Tracts (Chakma) District
- 19. Bogra (Boghura-abad) District
East Pakistan's administrative divisions were Chittagong, Dacca, Khulna and Rajshahi.
At the time of the Partition of British India, East Bengal had a plantation economy. The Chittagong Tea Auction was established in 1949 as the region was home to the world's largest tea plantations. The East Pakistan Stock Exchange Association was established in 1954. Many wealthy Muslim immigrants from India, Burma and former British colonies settled in East Pakistan. The Ispahani family, Africawala brothers and the Adamjee family were pioneers of industrialization in the region. Many of modern Bangladesh's leading companies were born in the East Pakistan period.
An airline founded in British Bengal, Orient Airways, launched the vital air link between East and West Pakistan with DC-3 aircraft on the Dacca-Calcutta-Delhi-Karachi route. Orient Airways later evolved into Pakistan International Airlines, whose first chairman was the East Pakistan-based industrialist Mirza Ahmad Ispahani.
By the 1950s, East Bengal surpassed West Bengal in having the largest jute industries in the world. The Adamjee Jute Mills was the largest jute processing plant in history and its location in Narayanganj was nicknamed the Dundee of the East. The Adamjees were descendants of Sir Haji Adamjee Dawood, who made his fortune in British Burma.
Natural gas was discovered in the northeastern part of East Pakistan in 1955 by the Burmah Oil Company, and industrial use of natural gas began in 1959. The Shell Oil Company and Pakistan Petroleum tapped seven gas fields in the 1960s. The industrial seaport city of Chittagong hosted the headquarters of Burmah Eastern and Pakistan National Oil. Iran, then a leading oil producer, assisted in establishing the Eastern Refinery in Chittagong.
In 1965, Pakistan implemented the Kaptai Dam hydroelectric project in the southeastern part of East Pakistan with American assistance. It was the sole hydroelectric dam in East Pakistan. The project was controversial for displacing over 40,000 indigenous people from the area.
The centrally located metropolis Dacca witnessed significant urban growth.
Although East Pakistan had a larger population, West Pakistan dominated the divided country politically and received more money from the common budget. According to the World Bank, there was much economic discrimination against East Pakistan, including higher government spending on West Pakistan, financial transfers from East to West and the use of the East's foreign exchange surpluses to finance the West's imports.
The discrimination occurred despite the fact that East Pakistan generated a major share of Pakistan's exports.
Reports of the Advisory Panels for the Fourth Five Year Plan 1970–75, Vol. I, published by the Planning Commission of Pakistan, tabulated yearly spending on West and East Pakistan (in millions of Pakistani rupees), together with the amount spent on the East as a percentage of the West.
The annual rate of growth of the gross domestic product per capita was 4.4% in West Pakistan versus 2.6% in East Pakistan from 1960 to 1965. Bengali politicians pushed for more autonomy, arguing that much of Pakistan's export earnings were generated in East Pakistan from the exportation of Bengali jute and tea. As late as 1960, approximately 70% of Pakistan's export earnings originated in East Pakistan, although this percentage declined as international demand for jute dwindled. By the mid-1960s, East Pakistan was accounting for less than 60% of the nation's export earnings, and by the time Bangladesh gained its independence in 1971, this percentage had dipped below 50%. In 1966, Mujib demanded that separate foreign exchange accounts be kept and that separate trade offices be opened overseas. By the mid-1960s, West Pakistan was benefiting from Ayub's "Decade of Progress", with its successful green revolution in wheat and the expansion of markets for West Pakistani textiles, while East Pakistan's standard of living remained at an abysmally low level. Bengalis were also upset that West Pakistan, the seat of the national government, received more foreign aid.
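As a quick arithmetic sketch (illustrative only; the five-year window and simple annual compounding are assumptions), those growth rates compound into a noticeably wider gap over 1960–1965:

```python
# Compound the quoted per-capita growth rates over the 1960-1965 window.
west_rate, east_rate = 0.044, 0.026   # annual growth rates from the text
years = 5                              # assumed length of the window

west_gain = (1 + west_rate) ** years - 1
east_gain = (1 + east_rate) ** years - 1
print(f"West: +{west_gain:.1%}   East: +{east_gain:.1%}")
# Roughly +24% versus +14% cumulative growth, before differences in
# spending and foreign aid are even considered.
```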
Economists in East Pakistan argued for a "Two Economies Theory" within Pakistan itself, modeled on the Two Nation Theory vis-à-vis India. The so-called Two Economies Theory held that East and West Pakistan had different economic features which should not be regulated by a single federal government in Islamabad.
East Pakistan was home to 55% of Pakistan's population. The largest ethnic group of the province were Bengalis, who in turn were the largest ethnic group in Pakistan. Bengali Muslims formed the predominant majority, followed by Bengali Hindus, Bengali Buddhists and Bengali Christians. East Pakistan also had many tribal groups, including the Chakmas, Marmas, Tangchangyas, Garos, Manipuris, Tripuris, Santhals and Bawms. They largely followed the religions of Buddhism, Christianity and Hinduism. East Pakistan was home to immigrant Muslims from across the Indian subcontinent, including West Bengal, Bihar, Gujarat, the Northwest Frontier Province, Assam, Orissa, the Punjab and Kerala. A small Armenian and Jewish minority resided in East Pakistan.
The Asiatic Society of Pakistan was founded in Old Dacca by Ahmad Hasan Dani in 1948. The Varendra Research Museum in Rajshahi was an important center of research on the Indus Valley Civilization. The Bangla Academy was established in 1954.
At the time of partition, East Bengal had 80 cinemas. The first movie produced in East Pakistan was The Face and the Mask in 1955. Pakistan Television established its second studio in Dacca after Lahore in 1965. Runa Laila was Pakistan's first pop star and became popular in India as well. Shabnam was a leading actress from East Pakistan. Feroza Begum was a leading exponent of Bengali classical Nazrul geeti. Jasimuddin and Abbasuddin Ahmed promoted Bengali folk music. Munier Chowdhury, Syed Mujtaba Ali, Nurul Momen, Sufia Kamal and Shamsur Rahman were among the leading literary figures in East Pakistan. Several East Pakistanis were awarded the Sitara-e-Imtiaz and the Pride of Performance.
Bengalis were hugely under-represented in Pakistan's bureaucracy and military. In the federal government, only 15% of offices were occupied by East Pakistanis. Only 10% of the military were from East Pakistan. Cultural discrimination also prevailed, causing the eastern wing to forge a distinct political identity. There was a bias against Bengali culture in state media, such as a ban on broadcasts of the works of Nobel laureate Rabindranath Tagore.
Since its unification with Pakistan in 1948, the East Pakistan garrison had consisted of only one infantry brigade, made up of two battalions: the 1st East Bengal Regiment and the 1/14 or 3/8 Punjab Regiment. These two battalions had only five rifle companies between them (an infantry battalion normally had five companies). This weak brigade, together with the East Pakistan Rifles, was under the command of Brigadier-General Ayub Khan (local rank Major-General, GOC of the 14th Army Division) and was tasked with defending East Pakistan during the Indo-Pakistani War of 1947. The PAF, Marines and Navy had little presence in the region; only one PAF combatant squadron, No. 14 Squadron "Tail Choppers", was active in East Pakistan, commanded by Squadron Leader Parvaiz Mehdi Qureshi, who later became a four-star general. East Pakistani military personnel were trained in combat diving, demolitions, and guerrilla/anti-guerrilla tactics by advisers from the Special Service Group (Navy), who were also charged with the intelligence data collection and management cycle.
The Pakistan Navy in East Pakistan had only one active-duty combatant destroyer, PNS Sylhet; one submarine, Ghazi (which was repeatedly deployed in the West); and four gunboats inadequate for deep-water operations. Joint special operations were managed and undertaken by the Naval Special Service Group (SSG(N)), assisted by army, air force and marine units. The entire Marines service was deployed in East Pakistan, initially tasked with conducting exercises and combat operations in riverine areas and near the shoreline. The small directorate of Naval Intelligence (whose headquarters, personnel, facilities, and direction were coordinated from the West) played a vital role in directing special and reconnaissance missions and intelligence gathering, and was also charged with taking reasonable actions to slow the Indian threat. The armed forces of East Pakistan also included a paramilitary organisation, the Razakars, raised through the intelligence unit of the ISI's Covert Action Division (CAD). All of these forces were commanded through a unified command structure, the Eastern Military High Command, led by an officer of three-star rank or equivalent.
| Tenure | Governor of East Pakistan | Political Affiliation |
| --- | --- | --- |
| 14 October 1955 – March 1956 | Amiruddin Ahmad | Muslim League |
| March 1956 – 13 April 1958 | A. K. Fazlul Huq | Muslim League |
| 13 April 1958 – 3 May 1958 | Muhammad Hamid Ali (acting) | Awami League |
| 3 May 1958 – 10 October 1958 | Sultanuddin Ahmad | Awami League |
| 10 October 1958 – 11 April 1960 | Zakir Husain | Muslim League |
| 11 April 1960 – 11 May 1962 | Lieutenant-General Azam Khan, PA | Military Administration |
| 11 May 1962 – 25 October 1962 | Ghulam Faruque | Independent |
| 25 October 1962 – 23 March 1969 | Abdul Monem Khan | Civil Administration |
| 23 March 1969 – 25 March 1969 | Mirza Nurul Huda | Civil Administration |
| 25 March 1969 – 23 August 1969 | Major-General Muzaffaruddin, PA | Military Administration |
| 23 August 1969 – 1 September 1969 | Lieutenant-General Sahabzada Yaqub Khan, PA | Military Administration |
| 1 September 1969 – 7 March 1971 | Vice-Admiral Syed Mohammad Ahsan, PN | Military Administration |
| 7 March 1971 – 6 April 1971 | Lieutenant-General Sahabzada Yaqub Khan, PA | Military Administration |
| 6 April 1971 – 31 August 1971 | Lieutenant-General Tikka Khan, PA | Military Administration |
| 31 August 1971 – 14 December 1971 | Abdul Motaleb Malik | Independent |
| 14 December 1971 – 16 December 1971 | Lieutenant-General Amir Abdullah Khan Niazi, PA | Military Administration |
| 16 December 1971 | Province of East Pakistan dissolved | |
| Tenure | Chief Minister of East Pakistan | Political Party |
| --- | --- | --- |
| August 1955 – September 1956 | Abu Hussain Sarkar | |
| September 1956 – March 1958 | Ataur Rahman Khan | Awami League |
| March 1958 | Abu Hussain Sarkar | |
| March 1958 – 18 June 1958 | Ataur Rahman Khan | Awami League |
| 18 June 1958 – 22 June 1958 | Abu Hussain Sarkar | |
| 22 June 1958 – 25 August 1958 | Governor's Rule | |
| 25 August 1958 – 7 October 1958 | Ataur Rahman Khan | Awami League |
| 7 October 1958 | Post abolished | |
| 16 December 1971 | Province of East Pakistan dissolved | |
The trauma in Pakistan was extremely severe when the news of the secession of East Pakistan as Bangladesh arrived: a psychological setback and a complete, humiliating defeat that shattered the prestige of the Pakistan Armed Forces. The governor and martial law administrator, Lieutenant-General Amir Abdullah Khan Niazi, was defamed, his image maligned, and he was stripped of his honors. The people of Pakistan could not come to terms with the magnitude of the defeat, and spontaneous demonstrations and mass protests erupted on the streets of major cities in (West) Pakistan. General Yahya Khan surrendered power to Nurul Amin of the Pakistan Muslim League, the first and last Vice-President of Pakistan, who also briefly served as Prime Minister.
Prime Minister Amin invited Zulfikar Ali Bhutto and the Pakistan Peoples Party to take control of Pakistan. In a colorful ceremony, Bhutto gave a daring speech to the nation on national television, waving his fist in the air and pledging never again to allow a surrender of his country like the one in East Pakistan. He launched and orchestrated the large-scale atomic bomb project in 1972. In memory of East Pakistan, the East Pakistani diaspora in Pakistan established the East-Pakistan Colony in Karachi, Sindh. The diaspora also composed patriotic tributes to Pakistan after the war; songs such as Sohni Dharti (lit. "beautiful land") and Jeevay, Jeevay Pakistan (lit. "long live, long live Pakistan") were composed by the Bengali singer Shahnaz Rahmatullah in the 1970s and 1980s.
To Western observers, the loss of East Pakistan was a blessing, but it has never been seen that way in Pakistan. In the book Scoop! Inside Stories from the Partition to the Present, the Indian politician Kuldip Nayar opined: "Losing East Pakistan and Bhutto's releasing of Mujib did not mean anything to Pakistan's policy – as if there was no liberation war." Bhutto's policy, and even today the policy of Pakistan, was that "she will continue to fight for the honor and integrity of Pakistan. East Pakistan is an inseparable and inseverable part of Pakistan".
The defeat of the Pakistan army traumatized West Pakistan and considerably dented the prestige of the armed services ... The defeat suffered in Dacca and the break-up of the country traumatized the population from top to bottom.
Thirty-four years later it may seem obvious that the loss of Bangladesh was a blessing—but it is still not seen so today in Pakistan, and it was certainly not seen so at the time ... One month after the surrender of Pakistan's army in Bangladesh [Bhutto] called a secret meeting of about seventy Pakistani scientists ... He asked them for a nuclear bomb, and they responded enthusiastically.
Few people in Karachi's Chittagong Colony can forget Dec 16, 1971 – the Fall of Dhaka
Cosmic voids are the vast empty spaces between filaments (the largest-scale structures in the Universe), which contain very few or no galaxies. Voids typically have a diameter of 10 to 100 megaparsecs; particularly large voids, defined by the absence of rich superclusters, are sometimes called supervoids. They have less than one-tenth of the average density of matter abundance that is considered typical for the observable Universe. They were first discovered in 1978 in a pioneering study by Stephen Gregory and Laird A. Thompson at the Kitt Peak National Observatory.
Voids are believed to have been formed by baryon acoustic oscillations in the Big Bang, collapses of mass followed by implosions of the compressed baryonic matter. Starting from initially small anisotropies from quantum fluctuations in the early Universe, the anisotropies grew larger in scale over time. Regions of higher density collapsed more rapidly under gravity, eventually resulting in the large-scale, foam-like structure or “cosmic web” of voids and galaxy filaments seen today. Voids located in high-density environments are smaller than voids situated in low-density spaces of the universe.
Voids appear to correlate with the observed temperature of the cosmic microwave background (CMB) because of the Sachs–Wolfe effect. Colder regions correlate with voids and hotter regions correlate with filaments because of gravitational redshifting. As the Sachs–Wolfe effect is only significant if the Universe is dominated by radiation or dark energy, the existence of voids is significant in providing physical evidence for dark energy.
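For background, the temperature shift from the late-time (integrated) Sachs–Wolfe effect is commonly written as a line-of-sight integral over the time derivative of the gravitational potential Φ; this standard textbook form is added here as a sketch, not taken from the article:

```latex
\left.\frac{\Delta T}{T}\right|_{\mathrm{ISW}} = \frac{2}{c^{2}} \int \dot{\Phi}\,\mathrm{d}t
```

In the usual sign convention a void is a potential hill relative to the mean; the hill decays while dark energy dominates, so the integral is negative and photons crossing the void arrive slightly colder, matching the correlation described above.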
Large-scale structure
The structure of our Universe can be broken down into components that can help describe the characteristics of individual regions of the cosmos. These are the main structural components of the cosmic web:
- Voids – vast, largely spherical regions with very low cosmic mean densities, up to 100 megaparsecs (Mpc) in diameter.
- Walls – the regions that contain the typical cosmic mean density of matter abundance. Walls can be further broken down into two smaller structural features:
  - Clusters – highly concentrated zones where walls meet and intersect, adding to the effective size of the local wall.
  - Filaments – the branching arms of walls that can stretch for tens of megaparsecs.
Voids have a mean density less than a tenth of the average density of the universe. This serves as a working definition, even though there is no single agreed-upon definition of what constitutes a void. The matter density value used for describing the cosmic mean density is usually based on the number of galaxies per unit volume rather than the total mass of the matter contained in a unit volume.
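A toy check of this working definition, comparing a region's galaxy number density to an assumed cosmic mean and applying the one-tenth threshold (every number below is an illustrative assumption, not survey data):

```python
import math

n_galaxies = 4                                  # galaxies inside the test sphere
radius_mpc = 15.0
volume_mpc3 = 4.0 / 3.0 * math.pi * radius_mpc**3

mean_number_density = 0.01                      # assumed mean, galaxies / Mpc^3
region_density = n_galaxies / volume_mpc3

# Working definition from the text: a void has < 1/10 the mean density.
is_void_candidate = region_density < 0.1 * mean_number_density
print(f"region density = {region_density:.5f} gal/Mpc^3, "
      f"void candidate: {is_void_candidate}")
```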
History and discovery
Cosmic voids as a topic of study in astrophysics began in the mid-1970s, when redshift surveys became more popular and led two separate teams of astrophysicists in 1978 to identify superclusters and voids in the distribution of galaxies and Abell clusters in a large region of space. The new redshift surveys revolutionized the field of astronomy by adding depth to the two-dimensional maps of cosmological structure, which were often densely packed and overlapping, allowing the first three-dimensional mapping of the Universe. In the redshift surveys, the depth was calculated from the individual redshifts of the galaxies, due to the expansion of the Universe, according to Hubble's law.
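A minimal sketch of that depth calculation: at low redshift, Hubble's law gives distance d ≈ cz / H0 (the H0 value below is an assumption):

```python
C_KM_S = 299_792.458          # speed of light in km/s
H0 = 70.0                     # assumed Hubble constant in km/s/Mpc

def survey_depth_mpc(z: float) -> float:
    """Low-redshift distance estimate from a measured redshift."""
    return C_KM_S * z / H0

for z in (0.01, 0.02, 0.05):
    print(f"z = {z:.2f}  ->  d ≈ {survey_depth_mpc(z):,.0f} Mpc")
```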
A summarized timeline of important events in the field of cosmic voids from its beginning to recent times is listed below:
- 1961 – Large scale structural features such as "second order clusters", a specific type of supercluster, were brought to the astronomical community's attention.
- 1978 – The first two papers on the topic of voids in the large scale structure were published referencing voids found in the foreground of the Coma/A1367 clusters.
- 1981 – Discovery of a large void in the Bootes region of the sky that was nearly 50 h−1 Mpc in diameter (which was later recalculated to be about 34 h−1 Mpc).
- 1983 – Computer simulations sophisticated enough to provide relatively reliable results of growth and evolution of the large scale structure emerged and yielded insight on key features of the large scale galaxy distribution.
- 1985 – Details of the supercluster and void structure of the Perseus-Pisces region were surveyed.
- 1989 – The Center for Astrophysics Redshift Survey revealed that large voids, sharp filaments, and the walls that surround them dominate the large-scale structure of the Universe.
- 1991 – The Las Campanas Redshift Survey confirmed the abundance of voids in the large-scale structure of the Universe (Kirshner et al. 1991).
- 1995 – Comparisons of optically selected galaxy surveys indicate that the same voids are found regardless of the sample selection.
- 2001 – The completed two-degree Field Galaxy Redshift Survey added a significantly large number of voids to the database of all known cosmic voids.
- 2009 – The Sloan Digital Sky Survey (SDSS) data combined with previous large scale surveys now provide the most complete view of the detailed structure of cosmic voids.
Methods for finding voids
There exist a number of ways of finding voids using the results of large-scale surveys of the Universe. Of the many different algorithms, virtually all fall into one of three general categories. The first class consists of void finders that try to find empty regions of space based on local galaxy density. The second class consists of those which try to find voids via the geometrical structures in the dark matter distribution, as suggested by the galaxies. The third class is made up of those finders which identify structures dynamically, by using gravitationally unstable points in the distribution of dark matter. The three most popular methods in the study of cosmic voids are listed below:
VoidFinder Algorithm
This first-class method uses each galaxy in a catalog as its target and then uses the nearest-neighbor approximation to calculate the cosmic density in the region contained within a spherical radius determined by the distance to the third-closest galaxy. El Ad & Piran introduced this method in 1997 to allow a quick and effective method for standardizing the cataloging of voids. Once the spherical cells are mined from all of the structure data, each cell is expanded until the underdensity returns to average expected wall density values. One of the helpful features of void regions is that their boundaries are very distinct and defined, with a cosmic mean density that starts at 10% in the body and quickly rises to 20% at the edge and then to 100% in the walls directly outside the edges. The remaining walls and overlapping void regions are then gridded into, respectively, distinct and intertwining zones of filaments, clusters, and near-empty voids. Any overlap of more than 10% with already known voids is considered to be a subregion within those known voids. All voids admitted to the catalog had a minimum radius of 10 Mpc in order to ensure that identified voids were not accidentally cataloged due to sampling errors.
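A minimal sketch of the density step just described, assuming a mock uniform catalog and SciPy's k-d tree (the box size and threshold are illustrative; this is not the published VoidFinder pipeline):

```python
import numpy as np
from scipy.spatial import cKDTree

# Mock galaxy catalog: uniform positions in a 100 Mpc box (illustrative only;
# a real survey is clustered, so genuine underdensities would stand out more).
rng = np.random.default_rng(0)
galaxies = rng.uniform(0.0, 100.0, size=(5000, 3))

tree = cKDTree(galaxies)
# k=4 returns each galaxy itself plus its three nearest neighbors.
dists, _ = tree.query(galaxies, k=4)
r3 = dists[:, 3]                         # distance to the third neighbor

# Local density estimate: three galaxies inside a sphere of radius r3.
local_density = 3.0 / (4.0 / 3.0 * np.pi * r3**3)
mean_density = len(galaxies) / 100.0**3

# Galaxies in strongly underdense neighborhoods seed candidate void spheres,
# which would then be grown until the density recovers to wall values.
seeds = galaxies[local_density < 0.1 * mean_density]
print(f"{len(seeds)} void-seed galaxies out of {len(galaxies)}")
```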
ZOBOV (ZOnes Bordering On Voidness) Algorithm
This second-class algorithm uses a Voronoi tessellation technique and mock border particles in order to categorize regions based on a high-density contrasting border with a very low amount of bias. Neyrinck introduced this algorithm in 2008 with the purpose of introducing a method that did not contain free parameters or presumed shape tessellations; the technique can therefore produce more accurately shaped and sized void regions. Although this algorithm has some advantages in shape and size, it has been criticized for sometimes providing loosely defined results. Since it has no free parameters, it mostly finds small and trivial voids, although the algorithm places a statistical significance on each void it finds. A physical significance parameter can be applied in order to reduce the number of trivial voids by requiring a minimum density-to-average-density ratio of at least 1:5. Subvoids are also identified using this process, which raises more philosophical questions on what qualifies as a void.
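A toy version of the tessellation step, assuming SciPy's Voronoi and convex-hull routines; unlike ZOBOV, it simply skips unbounded boundary cells instead of padding the volume with mock border particles:

```python
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

rng = np.random.default_rng(1)
points = rng.uniform(0.0, 50.0, size=(300, 3))   # mock tracer positions, Mpc
vor = Voronoi(points)

# Each tracer's local density is one over the volume of its Voronoi cell.
densities = np.full(len(points), np.nan)
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if len(region) == 0 or -1 in region:
        continue                     # unbounded cell at the box edge: skip
    cell_volume = ConvexHull(vor.vertices[region]).volume
    densities[i] = 1.0 / cell_volume

print(f"median bounded-cell density: {np.nanmedian(densities):.5f} per Mpc^3")
# ZOBOV would next join cells into zones around density minima and merge
# zones across their lowest-density ridges (a watershed transform).
```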
DIVA (DynamIcal Void Analysis) Algorithm
This third-class method is drastically different from the previous two algorithms. Its most striking aspect is that it requires a different definition of what it means to be a void: instead of the general notion of a void as a region of space with a low cosmic mean density, a hole in the distribution of galaxies, it defines voids as regions from which matter is escaping, which corresponds to the dark energy equation of state, w. Void centers are then considered to be the maximal sources of the displacement field, denoted S_ψ. The purpose of this change in definition was presented by Lavaux and Wandelt in 2009 as a way to yield cosmic voids such that exact analytical calculations can be made of their dynamical and geometrical properties. This allows DIVA to explore in depth the ellipticity of voids and how they evolve in the large-scale structure, leading to a classification of three distinct void types: true voids, pancake voids, and filament voids. Another notable quality is that, even though DIVA contains a selection-function bias just as first-class methods do, DIVA is devised such that this bias can be precisely calibrated, leading to much more reliable results. Multiple shortfalls of this Lagrangian-Eulerian hybrid approach exist; one example is that the voids it finds are intrinsically different from those found by other methods, which makes an all-inclusive comparison between the results of differing algorithms very difficult.
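A hedged sketch of one DIVA-style output, the three-way morphological classification; the covariance-tensor shortcut and the 0.66 axis-ratio thresholds are assumptions of this illustration, since the published method works from the Lagrangian displacement field:

```python
import numpy as np

def void_shape(members: np.ndarray) -> str:
    """Classify a void from the eigenvalues of its tracer covariance tensor."""
    centered = members - members.mean(axis=0)
    cov = centered.T @ centered / len(members)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # lam[0] >= lam[1] >= lam[2]
    if lam[2] / lam[0] > 0.66:                     # all three axes comparable
        return "true void"
    if lam[1] / lam[0] > 0.66:                     # one axis squashed
        return "pancake void"
    return "filament void"                         # two axes squashed

rng = np.random.default_rng(2)
tracers = rng.normal(size=(500, 3)) * np.array([10.0, 9.0, 8.5])
print(void_shape(tracers))   # nearly isotropic cloud -> "true void"
```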
Once an algorithm is presented to find what it deems to be cosmic voids, it is crucial that its findings approximately match what is expected by the current simulations and models of large-scale structure. To do this, the number, size, proportion and other features of the voids found by the algorithm are checked by placing mock data through a smoothed-particle hydrodynamics halo simulation, a ΛCDM model, or another reliable simulator. An algorithm is much more robust if its data are in concordance with the results of these simulations over a range of input criteria (Pan et al. 2011).
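One simple form such a check can take, assuming the comparison is made on void radius distributions (the lognormal mock samples below are placeholders for a finder's output and a simulation catalog):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
radii_from_finder = rng.lognormal(mean=2.5, sigma=0.3, size=400)      # Mpc
radii_from_simulation = rng.lognormal(mean=2.5, sigma=0.3, size=400)  # Mpc

# Two-sample Kolmogorov-Smirnov test: are the two radius distributions
# consistent with having been drawn from the same population?
stat, p_value = ks_2samp(radii_from_finder, radii_from_simulation)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A small p-value would flag a finder whose voids disagree with the model.
```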
Significance of voids
Since so much time is dedicated to the study of voids, the question arises of why they matter to the scientific community. The applications of voids are broad and impressive, ranging from shedding light on the current understanding of dark energy to refining and constraining cosmological evolution models. Some popular applications are discussed in detail below.
Dark energy equation of state
Voids act as bubbles in the Universe that are sensitive to background cosmological changes. This means that the evolution of a void's shape is in part the result of the expansion of the Universe. Since this acceleration is believed to be caused by dark energy, studying the changes in a void's shape over time can further refine the Quintessence + Cold Dark Matter (QCDM) model and provide a more accurate dark energy equation of state.
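For reference, the dark energy equation of state referred to here is the standard ratio of pressure to energy density (textbook background, not specific to the void analysis above):

```latex
w = \frac{p}{\rho c^{2}}
```

A cosmological constant corresponds to w = −1, while quintessence models allow w to differ from −1 and evolve with time, which is precisely what measurements of void shape evolution aim to constrain.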
Galactic formation and evolution models
Cosmic voids contain a mix of galaxies and matter that is slightly different from that in other regions of the Universe. This unique mix supports the biased galaxy formation picture predicted in Gaussian adiabatic cold dark matter models, and provides an opportunity to modify the morphology-density correlation, which holds discrepancies with these voids. Such observations can help uncover new facets of how galaxies form and evolve on the large scale. On a more local scale, galaxies that reside in voids have differing morphological and spectral properties from those located in the walls. One finding is that voids contain a significantly higher fraction of starburst galaxies of young, hot stars than comparable samples of galaxies in walls.
Anomalies in anisotropies
Cold spots in the cosmic microwave background, such as the CMB cold spot found by the Wilkinson Microwave Anisotropy Probe (WMAP), could possibly be explained by an extremely large cosmic void with a radius of ~120 Mpc, as long as the late integrated Sachs-Wolfe effect is accounted for in the possible solution. Anomalies in the CMB are now potentially being explained through the existence of large voids located down the line of sight in which the cold spots lie.
Accelerating expansion of the Universe
Although dark energy is currently the most popular explanation for the acceleration in the expansion of the Universe, another theory elaborates on the possibility of our galaxy being part of a very large, not-so-underdense cosmic void. According to this theory, such an environment could naively lead to the demand for dark energy to solve the problem of the observed acceleration. As more data have been released on this topic, the chances of it being a realistic solution in place of the current ΛCDM interpretation have been largely diminished but not altogether abandoned.
Void regions often seem to adhere to cosmological parameters which differ from those of the known universe. It is because of this unique feature that cosmic voids make great laboratories to study the effects that gravitational clustering and growth rates have on local galaxies and structure when the cosmological parameters have different values from those of the outside universe. Because larger voids predominantly remain in a linear regime, with most structures within them exhibiting spherical symmetry in the underdense environment (that is, the underdensity leads to near-negligible particle-particle gravitational interactions that would otherwise occur in a region of normal galactic density), models for voids can be tested with very high accuracy. The cosmological parameters that differ in these voids are Ωm, ΩΛ, and H0.
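One standard relation behind the growth-rate argument above (a textbook approximation added for context, not taken from this article) ties the linear growth rate to the matter density parameter:

```latex
f(z) \equiv \frac{\mathrm{d}\ln D}{\mathrm{d}\ln a} \approx \Omega_m(z)^{0.55}
```

Here D is the linear growth factor and a the scale factor; the lower effective Ωm inside a void suppresses the growth rate, which is why void interiors stay close to the linear regime.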
References
- Freedman, R.A., & Kaufmann III, W.J. (2008). Stars and galaxies: Universe. New York City: W.H. Freeman and Company.
- U. Lindner; J. Einasto; M. Einasto; W. Freudling; K. Fricke; E. Tago (1995). "The structure of supervoids. I. Void hierarchy in the Northern Local Supervoid". Astron. Astrophys. 301: 329. Bibcode:1995A&A...301..329L.
- Granett, B. R.; Neyrinck, M. C.; Szapudi, I. (2008). "An Imprint of Superstructures on the Microwave Background due to the Integrated Sachs-Wolfe Effect". Astrophysical Journal. 683 (2): L99–L102. Bibcode:2008ApJ...683L..99G. doi:10.1086/591670.
- Ryden, Barbara Sue; Peterson, Bradley M. (2010-01-01). Foundations of Astrophysics (International ed.). Addison-Wesley. p. 522. ISBN 9780321595584.
- Carroll, Bradley W.; Ostlie, Dale A. (2013-07-23). An Introduction to Modern Astrophysics (International ed.). Pearson. p. 1171. ISBN 9781292022932.
- Pan, Danny C.; Michael S. Vogeley; Fiona Hoyle; Yun-Young Choi; Changbom Park (23 Mar 2011). "Cosmic Voids in Sloan Digital Sky Survey Data Release 7". arXiv: [astro-ph.CO].
- Neyrinck, Mark C. (29 Feb 2008). "ZOBOV: a parameter-free void-finding algorithm". Monthly Notices of the Royal Astronomical Society. 386 (4): 2101–2109. Bibcode:2008MNRAS.386.2101N. doi:10.1111/j.1365-2966.2008.13180.x.
- Gregory, S. A.; L. A. Thompson (1978). "The Coma/A1367 supercluster and its environs". The Astrophysical Journal. 222: 784. Bibcode:1978ApJ...222..784G. doi:10.1086/156198. ISSN 0004-637X.
- Jõeveer, M.; Einasto, J. (1978). M.S. Longair; J. Einasto, eds. The Large Scale Structure of the Universe. Dordrecht: Reidel. p. 241.
- Rex, Andrew F.; Bennett, Jeffrey O.; Donahue, Megan; Schneider, Nicholas; Voit, Mark (1998-12-01). The Cosmic Perspective. Pearson College Division. p. 602. ISBN 978-0-201-47399-5. Retrieved 4 May 2014.
- Abell, George O. (1961). "Evidence regarding second-order clustering of galaxies and interactions between clusters of galaxies". The Astronomical Journal. 66: 607. Bibcode:1961AJ.....66..607A. doi:10.1086/108472. ISSN 0004-6256.
- Joeveer, Einasto and Tago 1978, Dordrecht, N/A, 241.
- Kirshner, R. P.; Oemler, A., Jr.; Schechter, P. L.; Shectman, S. A. (1981). "A million cubic megaparsec void in Bootes". The Astrophysical Journal. 248: L57. Bibcode:1981ApJ...248L..57K. doi:10.1086/183623. ISSN 0004-637X.
- Kirshner, Robert P.; Oemler, Augustus, Jr.; Schechter, Paul L.; Shectman, Stephen A. (1987). "A survey of the Bootes void". The Astrophysical Journal. 314: 493. Bibcode:1987ApJ...314..493K. doi:10.1086/165080. ISSN 0004-637X.
- Merlott, A. L. (November 1983). "Clustering velocities in the adiabatic picture of galaxy formation". Monthly Notices of the Royal Astronomical Society. 205: 637–641. Bibcode:1983MNRAS.205..637M. doi:10.1093/mnras/205.3.637. ISSN 0035-8711.
- Frenk, C. S.; S. D. M. White; M. Davis (1983). "Nonlinear evolution of large-scale structure in the universe". The Astrophysical Journal. 271: 417. Bibcode:1983ApJ...271..417F. doi:10.1086/161209. ISSN 0004-637X.
- Giovanelli, R.; M. P. Haynes (1985). "A 21 CM survey of the Pisces-Perseus supercluster. I – The declination zone +27.5 to +33.5 degrees". The Astronomical Journal. 90: 2445. Bibcode:1985AJ.....90.2445G. doi:10.1086/113949. ISSN 0004-6256.
- Geller, M. J.; J. P. Huchra (1989). "Mapping the Universe". Science. 246 (4932): 897–903. Bibcode:1989Sci...246..897G. doi:10.1126/science.246.4932.897. ISSN 0036-8075. PMID 17812575.
- Kirshner, 1991, Physical Cosmology, 2, 595.
- Fisher, Karl; Huchra, John; Strauss, Michael; Davis, Marc; Yahil, Amos; Schlegel, David (1995). "The IRAS 1.2 Jy Survey: Redshift Data". The Astrophysical Journal Supplement Series. 100: 69. arXiv:. Bibcode:1995ApJS..100...69F. doi:10.1086/192208.
- Colless, Matthew; Dalton, G. B.; Maddox, S. J.; Sutherland, W. J.; Norberg, P.; Cole, S.; Bland-Hawthorn, J.; Bridges, T. J.; Cannon, R. D.; Collins, C. A.; J Couch, W.; Cross, N. G. J.; Deeley, K.; DePropris, R.; Driver, S. P.; Efstathiou, G.; Ellis, R. S.; Frenk, C. S.; Glazebrook, K.; Jackson, C. A.; Lahav, O.; Lewis, I. J.; Lumsden, S. L.; Madgwick, D. S.; Peacock, J. A.; Peterson, B. A.; Price, I. A.; Seaborne, M.; Taylor, K. (2001). "The 2dF Galaxy Redshift Survey: Spectra and redshifts". Monthly Notices of the Royal Astronomical Society. 328 (4): 1039–1063. arXiv:. Bibcode:2001MNRAS.328.1039C. doi:10.1046/j.1365-8711.2001.04902.x.
- Abazajian, K.; for the Sloan Digital Sky Survey; Agüeros, Marcel A.; Allam, Sahar S.; Prieto, Carlos Allende; An, Deokkeun; Anderson, Kurt S. J.; Anderson, Scott F.; Annis, James; Bahcall, Neta A.; Bailer-Jones, C. A. L.; Barentine, J. C.; Bassett, Bruce A.; Becker, Andrew C.; Beers, Timothy C.; Bell, Eric F.; Belokurov, Vasily; Berlind, Andreas A.; Berman, Eileen F.; Bernardi, Mariangela; Bickerton, Steven J.; Bizyaev, Dmitry; Blakeslee, John P.; Blanton, Michael R.; Bochanski, John J.; Boroski, William N.; Brewington, Howard J.; Brinchmann, Jarle; Brinkmann, J.; et al. (2008). "The Seventh Data Release of the Sloan Digital Sky Survey". The Astrophysical Journal Supplement Series. 182 (2): 543–558. arXiv:. Bibcode:2009ApJS..182..543A. doi:10.1088/0067-0049/182/2/543.
- Thompson, Laird A.; Gregory, Stephen A. (2011). "An Historical View: The Discovery of Voids in the Galaxy Distribution". arXiv: [physics.hist-ph].
- Lavaux, Guilhem; Wandelt, Benjamin D. (2009). "Precision cosmology with voids: Definition, methods, dynamics". Monthly Notices of the Royal Astronomical Society. 403 (3): 403–1408. arXiv:. Bibcode:2010MNRAS.403.1392L. doi:10.1111/j.1365-2966.2010.16197.x.
- Hoyle, Fiona; Vogeley, Michael S. (2001). "Voids in the PSCz Survey and the Updated Zwicky Catalog". The Astrophysical Journal. 566 (2): 641–651. arXiv:. Bibcode:2002ApJ...566..641H. doi:10.1086/338340.
- Colberg, Joerg M.; Sheth, Ravi K.; Diaferio, Antonaldo; Gao, Liang; Yoshida, Naoki (2004). "Voids in a $Λ$CDM Universe". Monthly Notices of the Royal Astronomical Society. 360 (2005): 216–226. arXiv:. Bibcode:2005MNRAS.360..216C. doi:10.1111/j.1365-2966.2005.09064.x.
- Hahn, Oliver; Porciani, Cristiano; Marcella Carollo, C.; Dekel, Avishai (2006). "Properties of Dark Matter Haloes in Clusters, Filaments, Sheets and Voids". Monthly Notices of the Royal Astronomical Society. 375 (2): 489–499. arXiv:. Bibcode:2007MNRAS.375..489H. doi:10.1111/j.1365-2966.2006.11318.x.
- Pan, Danny C.; Vogeley, Michael S.; Hoyle, Fiona; Choi, Yun-Young; Park, Changbom (2011). "Cosmic Voids in Sloan Digital Sky Survey Data Release 7". arXiv: [astro-ph.CO].
- El-Ad, Hagai; Piran, Tsvi (1997). "Voids in the Large-Scale Structure". The Astrophysical Journal. 491 (2): 421–435. arXiv:. Bibcode:1997ApJ...491..421E. doi:10.1086/304973.
- Sutter, P. M.; Lavaux, Guilhem; Wandelt, Benjamin D.; Weinberg, David H. (2013). "A response to arXiv:1310.2791: A self-consistent public catalogue of voids and superclusters in the SDSS Data Release 7 galaxy surveys". arXiv: [astro-ph.CO].
- Neyrinck, Mark C. (2007). "ZOBOV: A parameter-free void-finding algorithm". Monthly Notices of the Royal Astronomical Society. 386 (4): 2101–2109. arXiv:. Bibcode:2008MNRAS.386.2101N. doi:10.1111/j.1365-2966.2008.13180.x.
- Pan, 2011, Dissertation Abstracts International, 72, 77.
- Lee, Jounghun; Park, Daeseong (2007). "Constraining the Dark Energy Equation of State with Cosmic Voids". The Astrophysical Journal. 696: L10–L12. arXiv:. Bibcode:2009ApJ...696L..10L. doi:10.1088/0004-637X/696/1/L10.
- Peebles, P. J. E. (2001). "The Void Phenomenon". The Astrophysical Journal. 557 (2): 495–504. arXiv:. Bibcode:2001ApJ...557..495P. doi:10.1086/322254.
- Constantin, Anca; Hoyle, Fiona; Vogeley, Michael S. (2007). "Active Galactic Nuclei in Void Regions". The Astrophysical Journal. 673 (2): 715–729. arXiv:. Bibcode:2008ApJ...673..715C. doi:10.1086/524310.
- Rudnick, Lawrence; Brown, Shea; Williams, Liliya R. (2007). "Extragalactic Radio Sources and the WMAP Cold Spot". The Astrophysical Journal. 671: 40–44. arXiv:. Bibcode:2007ApJ...671...40R. doi:10.1086/522222.
- Alexander, Stephon; Biswas, Tirthabir; Notari, Alessio; Vaid, Deepak (2007). "Local Void vs Dark Energy: Confrontation with WMAP and Type Ia Supernovae". Journal of Cosmology and Astroparticle Physics. 2009 (9): 025. arXiv:. Bibcode:2009JCAP...09..025A. doi:10.1088/1475-7516/2009/09/025.
- Goldberg, David M.; Vogeley, Michael S. (12 Dec 2003). "Simulating Voids". The Astrophysical Journal. 605: 1–6. arXiv:. Bibcode:2004ApJ...605....1G. doi:10.1086/382143.
- Animated views of voids and their distribution from Hume Feldman with Sergei Shandarin, Dept. Physics and Astronomy, University of Kansas, Lawrence, KS, USA.
- Visualization of Nearby Large-Scale Structures Fairall, A. P., Paverd, W. R., & Ashley, R. P.
- Hierarchical structure and dynamics of voids arXiv:1203.0248
|
The moment of inertia is a measure of an object's tendency to resist changes to its rotation. This quantity depends on the mass density distribution of the object or objects in question, along with the length of the moment arm, the vector that runs from the object's centre of mass to the axis of rotation. The following steps describe how to calculate the inertia tensor, the matrix that collects all inertial elements, accounting for the on- and off-axis combinations of inertia contributions in the centre-of-mass frame. This procedure, which requires some knowledge of calculus and linear algebra, is most helpful for students in an intermediate-level physics course.
Orient your object in the centre-of-mass frame, so that its centre of mass is located at the point (0, 0, 0) of the xyz coordinate system. Identify the generalised position vector ri = [xi, yi, zi]', which represents the position of an infinitesimal piece of the object away from the origin and which we will use to generalize the calculation. Define the variable "mi" as an infinitesimal mass within the object or system of objects, located at the point referenced by the position vector.
Write down the inertia tensor. This is a 3×3 matrix with the following rows, written from top to bottom: [Ixx, Ixy, Ixz], [Iyx, Iyy, Iyz], [Izx, Izy, Izz].
Substitute the following relationships for each element of the inertia tensor: Ixy = Iyx = -sum[mi(xi)(yi)] ; Ixz = Izx = -sum[mi(xi)(zi)] ; Iyz = Izy = -sum[mi(yi)(zi)] ; Ixx = sum[mi(yi^2 + zi^2)] ; Iyy = sum[mi(xi^2 + zi^2)] ; Izz = sum[mi(xi^2 + yi^2)] .
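To make these sums concrete, here is a minimal Python sketch of this step for a system of point masses given in centre-of-mass coordinates. The function name and its arguments are our own illustrative choices, not part of any standard library:

```python
import numpy as np

def inertia_tensor(masses, positions):
    """Inertia tensor for point masses about the origin (centre of mass).

    masses:    array of shape (N,)  -- the mi values
    positions: array of shape (N,3) -- the [xi, yi, zi] vectors
    """
    masses = np.asarray(masses, dtype=float)
    r = np.asarray(positions, dtype=float)
    x, y, z = r[:, 0], r[:, 1], r[:, 2]

    # Diagonal elements: Ixx = sum[mi(yi^2 + zi^2)], etc.
    Ixx = np.sum(masses * (y**2 + z**2))
    Iyy = np.sum(masses * (x**2 + z**2))
    Izz = np.sum(masses * (x**2 + y**2))

    # Off-diagonal products of inertia: Ixy = -sum[mi(xi)(yi)], etc.
    Ixy = -np.sum(masses * x * y)
    Ixz = -np.sum(masses * x * z)
    Iyz = -np.sum(masses * y * z)

    return np.array([[Ixx, Ixy, Ixz],
                     [Ixy, Iyy, Iyz],
                     [Ixz, Iyz, Izz]])

# Example: two 1 kg masses on the x-axis at +/-1 m (a dumbbell).
# Rotation about x costs nothing; about y or z, I = 2 kg m^2.
I = inertia_tensor([1.0, 1.0], [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
print(I)  # diag(0, 2, 2)
```

The dumbbell check is a quick sanity test: both masses lie on the x-axis, so the off-diagonal terms vanish and spinning about x meets no resistance.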
Write the relationship for the change in mass as one travels along the moment arm outward from the centre of mass. You should end up with an expression for the infinitesimal mass in terms of infinitesimal changes in length or angular coordinates; for a body of uniform density ρ, for example, dm = ρ dV. Substitute this quantity for the "mi" variable in each element of the moment of inertia tensor, then integrate each element over this mass distribution to get the specific equation for each tensor element as it pertains to the problem at hand. The axes upon which the moment arm depends should be apparent from the moment arm vector identified in the first step.
Substitute all known variables into the tensor elements to get the desired quantities. You should now have a tensor of inertial elements that accounts for every combination of axes, including the principal moments and products of inertia. The elements commonly called the "moments of inertia" in this tensor are the Ixx, Iyy and Izz elements.
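For the integration step, a quick way to check a continuous-body result is to approximate the integral numerically: chop the body into a fine grid of cells and treat each cell as a point mass. The sketch below does this for a uniform solid box aligned with the coordinate axes; the box example, the function name, and the grid resolution are our own assumptions for illustration:

```python
import numpy as np

def box_inertia_numeric(M, a, b, c, n=60):
    """Approximate the inertia tensor of a uniform solid box of mass M and
    side lengths a, b, c, centred on the origin, by summing over an
    n x n x n grid of cells treated as point masses."""
    # Cell-centre coordinates along each axis, spanning (-a/2, a/2) etc.
    xs = (np.arange(n) + 0.5) / n * a - a / 2
    ys = (np.arange(n) + 0.5) / n * b - b / 2
    zs = (np.arange(n) + 0.5) / n * c - c / 2
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    dm = M / n**3  # each cell carries an equal share of the mass

    Ixx = np.sum(dm * (Y**2 + Z**2))
    Iyy = np.sum(dm * (X**2 + Z**2))
    Izz = np.sum(dm * (X**2 + Y**2))
    Ixy = -np.sum(dm * X * Y)
    Ixz = -np.sum(dm * X * Z)
    Iyz = -np.sum(dm * Y * Z)
    return np.array([[Ixx, Ixy, Ixz],
                     [Ixy, Iyy, Iyz],
                     [Ixz, Iyz, Izz]])

M, a, b, c = 2.0, 1.0, 2.0, 3.0
print(box_inertia_numeric(M, a, b, c).round(4))
# Analytic check: Ixx = M(b^2+c^2)/12, Iyy = M(a^2+c^2)/12, Izz = M(a^2+b^2)/12
print(np.diag([M*(b**2 + c**2)/12,
               M*(a**2 + c**2)/12,
               M*(a**2 + b**2)/12]).round(4))
```

For a uniform box the products of inertia vanish by symmetry, and the numeric diagonal should approach the textbook values M(b² + c²)/12, M(a² + c²)/12 and M(a² + b²)/12 as the grid is refined.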
History of Japan
The first human habitation in the Japanese archipelago has been traced to prehistoric times around 30,000 BC. The Jōmon period, named after its "cord-marked" pottery, was followed by the Yayoi in the first millennium BC, when new technologies were introduced from continental Asia. During this period, the first known written reference to Japan was recorded in the Chinese Book of Han in the first century AD. Between the fourth century and the ninth century, Japan's many kingdoms and tribes gradually came to be unified under a centralized government, nominally controlled by the Emperor. This imperial dynasty continues to reign over Japan. In 794, a new imperial capital was established at Heian-kyō (modern Kyoto), marking the beginning of the Heian period, which lasted until 1185. The Heian period is considered a golden age of classical Japanese culture. Japanese religious life from this time onwards was a mix of native Shinto practices and Buddhism.
Over the following centuries, the power of the Emperor and the imperial court gradually declined, passing first to great clans of civilian aristocrats – most notably the Fujiwara – and then to the military clans and their armies of samurai. The Minamoto clan under Minamoto no Yoritomo emerged victorious from the Genpei War of 1180–85, defeating their rival military clan, the Taira. After seizing power, Yoritomo set up his capital in Kamakura and took the title of shōgun. In 1274 and 1281, the Kamakura shogunate withstood two Mongol invasions, but in 1333 it was toppled by a rival claimant to the shogunate, ushering in the Muromachi period. During the Muromachi period, regional warlords called daimyōs grew in power at the expense of the shōgun. Eventually, Japan descended into a period of civil war. Over the course of the late sixteenth century, Japan was reunified under the leadership of the prominent daimyō Oda Nobunaga and his successor Toyotomi Hideyoshi. After Hideyoshi's death in 1598, Tokugawa Ieyasu came to power and was appointed shōgun by the Emperor. The Tokugawa shogunate, which governed from Edo (modern Tokyo), presided over a prosperous and peaceful era known as the Edo period (1600–1868). The Tokugawa shogunate imposed a strict class system on Japanese society and cut off almost all contact with the outside world.
Portugal and Japan first came into contact in 1543, when the Portuguese became the first Europeans to reach Japan by landing in the southern archipelago. They had a significant impact on Japan, even in this initial limited interaction, introducing firearms to Japanese warfare. The Netherlands was one of the first countries to establish trade relations with Japan; Japanese and Dutch relations date back to 1609. The American Perry Expedition in 1853–54 more completely ended Japan's seclusion; this contributed to the fall of the shogunate and the return of power to the Emperor during the Boshin War in 1868. The new national leadership of the following Meiji period transformed the isolated feudal island country into an empire that closely followed Western models and became a great power. Although democracy developed and modern civilian culture prospered during the Taishō period (1912–26), Japan's powerful military had great autonomy and overruled Japan's civilian leaders in the 1920s and 1930s. The military invaded Manchuria in 1931, and from 1937 the conflict escalated into a prolonged war with China. Japan's attack on Pearl Harbor in December 1941 led to war with the United States and its allies. Japan's forces soon became overextended, but the military held out in spite of Allied air attacks that inflicted severe damage on population centers. Emperor Hirohito announced Japan's unconditional surrender on August 15, 1945, following the atomic bombings of Hiroshima and Nagasaki and the Soviet invasion of Manchuria.
The Allies occupied Japan until 1952, during which a new constitution was enacted in 1947 that transformed Japan into a constitutional monarchy. After 1955, Japan enjoyed very high economic growth and became a world economic powerhouse. Since the 1990s, the economic stagnation of the Lost Decade has been a major issue, and the country also suffered the 1995 Great Kobe-Osaka earthquake and the Tokyo subway sarin attack that same year. In 2004, Japan sent a military force as part of the international coalition forces during the Iraq War.
On Friday, March 11, 2011, at 2:46 p.m. (UTC+9), Japan suffered a magnitude 9.0 earthquake and tsunami, one of the most powerful earthquakes ever recorded. The earthquake killed almost 20,000 people and affected the three regions of Tohoku, Chubu, and Kanto in the northeast of Honshu, including the Tokyo area; it had massive economic ramifications and caused the serious Fukushima nuclear power disaster.
- 1 Geographical background
- 2 Overview
- 3 Prehistoric and ancient Japan
- 4 Classical Japan
- 5 Medieval Japan
- 6 Early modern Japan
- 7 Modern Japan
- 7.1 Meiji period (1868–1912)
- 7.2 Taishō period (1912–1926)
- 7.3 Shōwa period (1926–1989)
- 7.4 Heisei period (1989–2019)
- 7.5 Reiwa period (2019–present)
- 8 Social conditions
- 9 See also
- 10 Notes
- 11 References
- 12 Books cited
- 13 Historiography
The Japanese archipelago (日本列島, Nihon Rettō) is a group of 6,852 islands that form the country of Japan. It extends over 3,000 km (1,900 mi) from the Sea of Okhotsk in the north to the East China Sea and the Philippine Sea in the south along the east coast of the Asian continent.
As of 2018, the size of Japan is 377,974 km2 (145,937 sq mi), about 1.56 times that of the United Kingdom at 242,495 km2 (93,628 sq mi). Japan is the largest island country in East Asia and the fourth largest island country in the world. Japan has the sixth longest coastline (29,751 km (18,486 mi)) and the eighth largest Exclusive Economic Zone of 4,470,000 km2 (1,730,000 sq mi).
The islands of Japan were created by tectonic plate movements over several hundred million years, from the mid-Silurian (443.8 Mya) to the Pleistocene (up to 11,700 years ago). Japan is located in the northwestern Ring of Fire on multiple tectonic plates. East of the Japanese archipelago are three oceanic trenches. The Japan Trench is created as the oceanic Pacific Plate subducts beneath the continental Okhotsk Plate. The continuous subduction process causes frequent earthquakes, tsunami and stratovolcanoes. The islands are also affected by typhoons. The subducting plates pulled the Japanese archipelago eastward, creating the Sea of Japan and separating the archipelago from the Asian continent by back-arc spreading 15 million years ago.
Japan has 108 active volcanoes, making up 10 percent of all active volcanoes in the world. The stratovolcanoes are near the subduction zones of the tectonic plates. During the twentieth century several new volcanoes emerged, including Shōwa-shinzan on Hokkaido and Myōjin-shō off the Bayonnaise Rocks in the Pacific. As many as 1,500 earthquakes are recorded yearly, and magnitudes of 4 to 7 are common.
About 73% of Japan is mountainous, with a mountain range running through each of the main islands. Japan's highest mountain is Mount Fuji, with an elevation of 3,776 m (12,388 ft). Because the mountains are heavily forested, Japan's forest cover rate is 68.55%; the only other developed nations with such a high forest cover percentage are Finland and Sweden. The population is clustered in urban areas on the coast, plains and valleys. Japan is the second most populous island country, with a population of around 126 million as of 2017.
There is great variety in the geographical features and weather patterns, with a wet season that takes place in early summer for most areas. The volcanic soil that washes onto the coastal plains, which make up 13% of the land area, provides fertile farmland. The mainly temperate climate allows long growing seasons with a diversity of flora and fauna, providing rich resources to support a high population. The climate of the Japanese archipelago varies from humid continental in the north (Hokkaido) to humid subtropical and tropical rainforest in the south (Okinawa Prefecture).
The remote location makes Japan relatively secure against foreign invasions. The Japanese archipelago is surrounded by vast seas and it has rugged, mountainous terrain with steep rivers. Kyushu is closest to the southernmost point of the Korean peninsula with a distance of 190 km (120 mi). Throughout history, Japan was never fully invaded or conquered by foreigners. The Mongols tried to invade Japan twice and were repelled in 1274 and 1281.
A commonly accepted periodization of Japanese history:
| Dates | Era | Period | Subperiod | Main government |
|---|---|---|---|---|
| 30,000–10,000 BC | – | Japanese Paleolithic | – | unknown |
| 14,000–1000 BC | Ancient Japan | Jōmon | – | – |
| 1000 BC – 300 AD | Ancient Japan | Yayoi | – | – |
| c. 250–538 | Ancient Japan | Kofun | – | Yamato court |
| 538–710 | Classical Japan | Asuka | – | Imperial court |
| 710–794 | Classical Japan | Nara | – | Imperial court |
| 794–1185 | Classical Japan | Heian | – | Imperial court |
| 1185–1333 | Medieval Japan | Kamakura | – | Kamakura shogunate |
| 1333–1336 | Medieval Japan | Kenmu Restoration | – | Imperial government |
| 1336–1392 | Medieval Japan | Muromachi | Nanboku-chō period | Ashikaga shogunate |
| 1392–1467 | Medieval Japan | Muromachi | – | Ashikaga shogunate |
| 1467–1573 | Medieval Japan | Muromachi | Sengoku period | Ashikaga shogunate and sengoku daimyōs |
| 1573–1603 | – | Azuchi–Momoyama | – | Oda Nobunaga, Toyotomi Hideyoshi and Tokugawa Ieyasu |
| 1603–1868 | Early Modern Japan | Edo | Tokugawa period | Tokugawa shogunate |
| 1868–1912 | Modern Japan | Meiji | Pre-war | Imperial government |
| 1912–1926 | Modern Japan | Taishō | Pre-war | Imperial government |
| 1926–1945 | Modern Japan | Shōwa (Pre-war) | Pre-war | Imperial government |
| 1945–1952 | Contemporary Japan | Shōwa (Occupied) | Post-war | GHQ/SCAP |
| 1952–1989 | Contemporary Japan | Shōwa (Post-occupation) | Post-war | Parliamentary democracy |
| 1989–2019 | Contemporary Japan | Heisei | – | Parliamentary democracy |
| 2019–present | Contemporary Japan | Reiwa | – | Parliamentary democracy |
Prehistoric and ancient Japan
During glacial periods, when the world sea level is lower, land bridges have periodically linked the Japanese archipelago to the Asian continent via Sakhalin Island in the north and via the Ryukyu Islands and Taiwan in the south since the beginning of the current Quaternary glaciation 2.58 million years ago. There may also have been a land bridge to Korea in the southwest, though not in the 125,000 years or so since the start of the last interglacial. The Korea Strait was, however, quite narrow at the Last Glacial Maximum from 25,000 to 20,000 years BP. The earliest firm evidence of human habitation is of early Upper Paleolithic hunter-gatherers from 40,000 years ago, when Japan was separated from the continent. Edge-ground axes dating to 32,000–38,000 years ago, found in 224 sites in Honshu and Kyushu, are unlike anything found in neighboring areas of continental Asia and have been proposed as evidence for the first Homo sapiens in Japan; watercraft appear to have been in use in this period. Radiocarbon dating has shown that the earliest fossils in Japan date back to around 32,000–27,000 years ago: for example, 32,100 ± 1,000 BP in Yamashita Cave, cal 31,000–29,000 BP in Sakitari Cave, and c. 27,000 BP in Shiraho Saonetabaru Cave, among others.
The Jōmon period of prehistoric Japan spans from about 12,000 BC (in some cases dates as early as 14,500 BC are given) to about 1,000 BC. Japan was inhabited by a hunter-gatherer culture that reached a considerable degree of sedentism and cultural complexity. The name "cord-marked" was first applied by the American scholar Edward S. Morse, who discovered shards of pottery in 1877; the term was subsequently translated into Japanese as jōmon. The pottery style characteristic of the first phases of Jōmon culture was decorated by impressing cords into the surface of wet clay. Jōmon pottery is generally accepted to be among the oldest in East Asia and the world.
The Yayoi people brought new technologies and modes of living to the Japanese archipelago. Their culture took over from the Jōmon culture, arising in northern Kyushu and quickly spreading to the main island of Honshū, where it mixed with the native Jōmon culture. The date of the change was until recently thought to be around 400 BC, but radiocarbon evidence suggests a date up to 500 years earlier: a recent study that used accelerator mass spectrometry to analyze carbonized remains on pottery and wooden stakes dated them to 900–800 BC, 500 years earlier than previously believed.
The period was named after a district in Tokyo where a new, unembellished style of pottery was discovered in 1884. Though hunting and foraging continued, the Yayoi period brought a new reliance on agriculture. Bronze and iron weapons and tools were imported from China and Korea; such tools were later also produced in Japan. The Yayoi period also saw the introduction of weaving and silk production, glassmaking and new techniques of woodworking.
The Yayoi technologies originated on the Asian mainland. There is debate among scholars as to what extent their spread was accomplished by means of migration or simply a diffusion of ideas, or a combination of both. The migration theory is supported by genetic and linguistic studies. Hanihara Kazurō has suggested that the annual immigrant influx from the continent ranged from 350 to 3,000. Modern Japanese are genetically more similar to the Yayoi people than to the Jōmon people – though more so in southern Japan than in the north – whereas the Ainu bear significant resemblance to the Jōmon people. It took time for the Yayoi people and their descendants to displace and intermix with the Jōmon, who continued to exist in northern Honshu until the eighth century AD. A 2017 study on ancient Jōmon aDNA from the Sanganji shell mound in Tōhoku estimated that modern mainland Japanese inherited less than 20% of Jōmon peoples' genomes, their ancestry resulting from admixture of the indigenous Jōmon people, the Yayoi people, and later migrants during and after the Yayoi period. A more recent estimation (2019) suggests about 10% Jōmon ancestry in modern Japanese (Yamato).
The population of Japan began to increase rapidly, perhaps with a 10-fold rise over the Jōmon. Calculations of the population size have varied from 1.5 to 4.5 million by the end of the Yayoi. Skeletal remains from the late Jōmon period reveal a deterioration in already poor standards of health and nutrition, in contrast to Yayoi archaeological sites where there are large structures suggestive of grain storehouses. This change was accompanied by an increase in both the stratification of society and tribal warfare, indicated by segregated gravesites and military fortifications. The Yoshinogari site, a large moated village of the period, began to be excavated by archaeologists in the late-1980s.
During the Yayoi period, the Yayoi tribes gradually coalesced into a number of kingdoms. The earliest written work of history to mention Japan, the Book of Han completed around 82 AD, states that Japan, referred to as Wa, was divided into one hundred kingdoms. A later Chinese work of history, the Wei Zhi, states that by 240 AD, one powerful kingdom had gained ascendancy over the others. According to the Wei Zhi, this kingdom was called Yamatai, though modern historians continue to debate its location and other aspects of its depiction in the Wei Zhi. Yamatai was said to have been ruled by the female monarch Himiko.
Kofun period (c. 250–538)
During the subsequent Kofun period, most of Japan gradually unified under a single kingdom. The symbol of the growing power of Japan's new leaders was the kofun burial mounds they constructed from around AD 250 onwards. Many were of massive scale, such as the Daisen Kofun, a keyhole-shaped burial mound 486 metres (1,594 ft) long, enclosed by a moat and a fortification 840 metres (2,760 ft) in length, that took huge teams of laborers fifteen years to complete. It is commonly accepted that this tomb, also known as the Nintoku-tennō-ryō Kofun, was built for the late Emperor Nintoku, and it is regarded as the largest such mound in the world. The kofun were often surrounded by and filled with numerous haniwa clay sculptures, often in the shape of warriors and horses.
The center of the unified state was Yamato in the Kinai region of central Japan. The rulers of the Yamato state were a hereditary line of Emperors who still reign as the world's longest dynasty. The rulers of the Yamato extended their power across Japan through military conquest, but their preferred method of expansion was to convince local leaders to accept their authority in exchange for positions of influence in the government. Many of the powerful local clans who joined the Yamato state became known as the uji.
These leaders sought and received formal diplomatic recognition from China, and Chinese accounts record five successive such leaders as the Five kings of Wa. Craftsmen and scholars from China and the Three Kingdoms of Korea played an important role in transmitting continental technologies and administrative skills to Japan during this period.
The Asuka period began in 538 AD with the introduction of the Buddhist religion from the Korean kingdom of Baekje. Since then, Buddhism has coexisted with Japan's native Shinto religion, in what is today known as Shinbutsu-shūgō. The period draws its name from the de facto imperial capital, Asuka, in the Kinai region.
At the end of the Kofun period and during the Asuka period, Iranian influence reached its maximum. It has been suggested that some Scythians and Persians settled in Japan and were granted titles of feudal lords by the emperor; in turn, they had some influence on the aristocratic system and art of Japan. Persia and Japan were connected through the Silk Road and established good diplomatic relations.
The Buddhist Soga clan took over the government in 587 and controlled Japan from behind the scenes for nearly sixty years. Prince Shōtoku, an advocate of Buddhism and of the Soga cause, who was of partial Soga descent, served as regent and de facto leader of Japan from 594 to 622. Shōtoku authored the Seventeen-article constitution, a Confucian-inspired code of conduct for officials and citizens, and attempted to introduce a merit-based civil service called the Cap and Rank System. In 607, Shōtoku offered a subtle insult to China by opening his letter with the phrase, "The sovereign of the land where the sun rises is sending this mail to the sovereign of the land where the sun sets", a phrasing reflected in the kanji characters for Japan (Nippon) and implying that the sun's full strength originates with Japan while China receives the waning sun. By 670 a variant of this expression, Nihon, established itself as the official name of the nation, which has persisted to this day.
In 645, the Soga clan were overthrown in a coup launched by Prince Naka no Ōe and Fujiwara no Kamatari, the founder of the Fujiwara clan. Their government devised and implemented the far-reaching Taika Reforms. The Reform began with land reform, based on Confucian ideas and philosophies from China. It nationalized all land in Japan, to be distributed equally among cultivators, and ordered the compilation of a household registry as the basis for a new system of taxation. The true aim of the reforms was to bring about greater centralization and to enhance the power of the imperial court, which was also based on the governmental structure of China. At this time, envoys and students were dispatched to China to learn everything from the Chinese writing system, literature, religion, and architecture to dietary habits. Even today, the impact of the reforms can still be seen in Japanese cultural life. After the reforms, the Jinshin War of 672, a bloody conflict between Prince Ōama and his nephew Prince Ōtomo, two rivals to the throne, became a major catalyst for further administrative reforms. These reforms culminated with the promulgation of the Taihō Code, which consolidated existing statutes and established the structure of the central government and its subordinate local governments. These legal reforms created the ritsuryō state, a system of Chinese-style centralized government that remained in place for half a millennium.
The art of the Asuka period embodies the themes of Buddhist art. One of the most famous works is the Buddhist temple of Horyu-ji, commissioned by Prince Shōtoku and completed in 607 AD; it contains what is regarded as the oldest wooden structure in the world. The temple represents the beginning of the presence of Buddhism in Japan.
Nara period (710–794)
In 710, the government constructed a grandiose new capital at Heijō-kyō (modern Nara) modeled on Chang'an, the capital of the Chinese Tang dynasty. During this period, the first two books produced in Japan appeared: the Kojiki and Nihon Shoki, which contain chronicles of legendary accounts of early Japan and its creation myth, which describes the imperial line as descendants of the gods. The latter half of the eighth century saw the compilation of the Man'yōshū, widely considered the finest collection of Japanese poetry.
During this period, Japan suffered a series of natural disasters, including wildfires, droughts, famines, and outbreaks of disease, such as a smallpox epidemic in 735–737 that killed over a quarter of the population. Emperor Shōmu (r. 724–49) feared his lack of piousness had caused the trouble and so increased the government's promotion of Buddhism, including the construction of the temple Tōdai-ji. The funds to build this temple were raised in part by the influential Buddhist monk Gyōki, and once completed it was used by the Chinese monk Ganjin as an ordination site. Japan nevertheless entered a phase of population decline that continued well into the following Heian period.
The Tōdai-ji is a Buddhist temple complex sponsored by the Imperial Court and located in the ancient former capital of Nara, where it was one of the powerful Seven Great Temples. Tōdai-ji opened in 752 CE. Its Great Buddha Hall (大仏殿 Daibutsuden) houses the world's largest bronze statue of the Buddha Vairocana, called the Daibutsu (大仏) in Japan.
Heian period (794–1185)
In 784, the capital moved briefly to Nagaoka-kyō, then again in 794 to Heian-kyō (modern Kyoto), which remained the capital until 1868. Political power within the court soon passed to the Fujiwara clan, a family of court nobles who grew increasingly close to the imperial family through intermarriage.
Between 812 and 814 CE, a smallpox epidemic killed almost half of the Japanese population.
In 858, Fujiwara no Yoshifusa had himself declared sesshō ("regent") to the underage Emperor. His son Fujiwara no Mototsune created the office of kampaku, which could rule in the place of an adult reigning Emperor. Fujiwara no Michinaga, an exceptional statesman who became kampaku in 996, governed during the height of the Fujiwara clan's power and married four of his daughters to emperors, current and future. The Fujiwara clan held on to power until 1086, when Emperor Shirakawa ceded the throne to his son Emperor Horikawa but continued to exercise political power, establishing the practice of cloistered rule, by which the reigning Emperor would function as a figurehead while the real authority was held by a retired predecessor behind the scenes.
Throughout the Heian period, the power of the imperial court declined. The court became so self-absorbed with power struggles, and with the artistic pursuits of court nobles, that it neglected the administration of government outside the capital. The nationalization of land undertaken as part of the ritsuryō state decayed as various noble families and religious orders succeeded in securing tax-exempt status for their private shōen manors. By the eleventh century, more land in Japan was controlled by shōen owners than by the central government. The imperial court was thus deprived of the tax revenue to pay for its national army. In response, the owners of the shōen set up their own armies of samurai warriors. Two powerful noble families that had descended from branches of the imperial family, the Taira and Minamoto clans, acquired large armies and many shōen outside the capital. The central government began to use these two warrior clans to suppress rebellions and piracy. Japan's population stabilized during the late-Heian period after hundreds of years of decline.
During the early Heian period, the imperial court successfully consolidated its control over the Emishi people of northern Honshu. Ōtomo no Otomaro was the first man the court granted the title of seii tai-shōgun ("Great Barbarian Subduing General"). In 802, seii tai-shōgun Sakanoue no Tamuramaro subjugated the Emishi people, who were led by Aterui. By 1051, members of the Abe clan, who occupied key posts in the regional government, were openly defying the central authority. The court requested the Minamoto clan to engage the Abe clan, whom they defeated in the Former Nine Years War. The court, thus, temporarily reasserted its authority in northern Japan. Following another civil war – the Later Three-Year War – Fujiwara no Kiyohira took full power; his family, the Northern Fujiwara, controlled northern Honshu for the next century from their capital Hiraizumi.
In 1156, a dispute over succession to the throne erupted, and the two rival claimants (Emperor Go-Shirakawa and Emperor Sutoku) hired the Taira and Minamoto clans in the hopes of securing the throne by military force. This dispute deepened the rivalry between the two clans, and their power struggle led to the Heiji Rebellion in 1160, during which the Taira clan, led by Taira no Kiyomori, defeated the Minamoto clan. Kiyomori used his victory to accumulate power for himself in Kyoto and even installed his own grandson Antoku as Emperor. In 1180, Taira no Kiyomori was challenged by an uprising led by Minamoto no Yoritomo, a member of the Minamoto clan whom Kiyomori had exiled to Kamakura. Though Taira no Kiyomori died in 1181, the ensuing bloody Genpei War between the Taira and Minamoto families continued for another four years. The victory of the Minamoto clan was sealed in 1185, when a force commanded by Yoritomo's younger brother, Minamoto no Yoshitsune, scored a decisive victory at the naval Battle of Dan-no-ura. Yoritomo and his retainers thus became the de facto rulers of Japan.
During the Heian period, the imperial court was a vibrant center of high art and culture. Its literary accomplishments include the poetry collection Kokinshū and the Tosa Diary, both associated with the poet Ki no Tsurayuki, as well as Sei Shōnagon's collection of miscellany The Pillow Book, and Murasaki Shikibu's Tale of Genji, often considered the masterpiece of Japanese literature.
The development of the kana written syllabaries was part of a general trend of declining Chinese influence during the Heian period. The official Japanese missions to Tang dynasty of China, which began in the year 630, ended during the ninth century, though informal missions of monks and scholars continued, and thereafter the development of native Japanese forms of art and poetry accelerated. A major architectural achievement, apart from Heian-kyō itself, was the temple of Byōdō-in built in 1053 in Uji. Many other vast temple complexes were also developed and expanded, such as those on Mount Hiei.
Kamakura period (1185–1333)
Upon the consolidation of power, Minamoto no Yoritomo chose to rule in concert with the Imperial Court in Kyoto. Though Yoritomo set up his own government in Kamakura in the Kantō region of eastern Japan, its power was legally authorized by the Imperial court in Kyoto on several occasions. In 1192, the Emperor declared Yoritomo seii tai-shōgun (征夷大将軍; Eastern Barbarian Subduing Great General), abbreviated shōgun. Later (in the Edo period), the word bakufu (幕府; originally meaning a general's house or office, literally a "tent office") came to be used to mean a government headed by a shogun. The English term shogunate refers to the bakufu. Japan remained largely under military rule until 1868.
Legitimacy was conferred on the shogunate by the Imperial court, but the shogunate was the de facto ruler of the country. The court maintained bureaucratic and religious functions, and the shogunate welcomed participation by members of the aristocratic class. The older institutions remained intact in a weakened form, and Kyoto remained the official capital. This system has been contrasted with the "simple warrior rule" of the later Muromachi period.
While the Ise branch of the Taira, which had fought against Yoritomo, was extinguished, other branches, as well as the Hōjō, Chiba, Hatakeyama and other families descended from the Taira, continued to thrive in eastern Japan, with some (notably the Hōjō) attaining high positions in the Kamakura shogunate. Yoshitsune was initially harbored by Fujiwara no Hidehira, the grandson of Kiyohira and the de facto ruler of northern Honshu. In 1189, after Hidehira's death, his successor Yasuhira attempted to curry favor with Yoritomo by attacking Yoshitsune's home. Although Yoshitsune was killed, Yoritomo still invaded and conquered the Northern Fujiwara clan's territories. In subsequent centuries, Yoshitsune would become a legendary figure, portrayed in countless works of literature as an idealized tragic hero.
After Yoritomo's death in 1199, the office of shogun weakened. Behind the scenes, Yoritomo's wife Hōjō Masako became the true power behind the government. In 1203, her father, Hōjō Tokimasa, was appointed regent to the shogun, Yoritomo's son Minamoto no Sanetomo. Henceforth, the Minamoto shoguns became puppets of the Hōjō regents, who wielded actual power.
The regime that Yoritomo had established, and which was kept in place by his successors, was decentralized and feudalistic in structure, in contrast with the earlier ritsuryō state. Yoritomo selected the provincial governors, known under the titles of shugo or jitō, from among his close vassals, the gokenin. The Kamakura shogunate allowed its vassals to maintain their own armies and to administer law and order in their provinces on their own terms.
In 1221, the retired Emperor Go-Toba instigated what became known as the Jōkyū War, a rebellion against the shogunate, in an attempt to restore political power to the court. The rebellion was a failure and led to Go-Toba himself being exiled to Oki Island, along with two other Emperors, the retired Emperor Tsuchimikado and Emperor Juntoku, who were exiled to Tosa Province and Sado Island respectively. The shogunate further consolidated its political power relative to the Kyoto aristocracy.
The samurai armies of the whole nation were mobilized in 1274 and 1281 to confront two full-scale invasions launched by Kublai Khan of the Mongol Empire. Though outnumbered by an enemy equipped with superior weaponry, the Japanese fought the Mongols to a standstill in Kyushu on both occasions until the Mongol fleet was destroyed by typhoons called kamikaze, meaning "divine wind". In spite of the Kamakura shogunate's victory, the defense so depleted its finances that it was unable to provide compensation to its vassals for their role in the victory. This had permanent negative consequences for the shogunate's relations with the samurai class.
Discontent among the samurai proved decisive in ending the Kamakura shogunate. In 1333, Emperor Go-Daigo launched a rebellion in the hope of restoring full power to the imperial court. The shogunate sent General Ashikaga Takauji to quell the revolt, but Takauji and his men instead joined forces with Emperor Go-Daigo and overthrew the Kamakura shogunate.
Japan nevertheless entered a period of prosperity and population growth starting around 1250. In rural areas, the greater use of iron tools and fertilizer, improved irrigation techniques, and double-cropping increased productivity and rural villages grew. Fewer famines and epidemics allowed cities to grow and commerce to boom. Buddhism, which had been largely a religion of the elites, was brought to the masses by prominent monks, such as Hōnen (1133–1212), who established Pure Land Buddhism in Japan, and Nichiren (1222–1282), who founded Nichiren Buddhism. Zen Buddhism spread widely among the samurai class.
Literary developments of the late-Heian and Kamakura periods
Waka poetry flourished in the late Heian and early Kamakura periods.
The aristocrat Fujiwara no Shunzei was "the leading poet of [his] day" and, on a request from Emperor Go-Shirakawa, compiled the Senzai Wakashū, the seventh imperial collection. Donald Keene noted that Shunzei was "the most eminent poet since Tsurayuki to have been charged with the compilation of an imperial collection". The anthology, commissioned in 1183 but not completed until 1188, after the defeat of the Taira, contained poems by Taira adherents who had been officially denounced as enemies of the throne, as a gesture to calm the vengeful spirits of the Taira. It also contained poems by thirty-three female poets, the most women recognized by any of the late-Heian imperial collections. Teika, Shunzei's son, would become even more important: his Hyakunin Isshu made him "the arbiter of the poetic tastes of most Japanese even as late as the twentieth century". His later work copying manuscripts was of such importance that Keene noted that "what we know of the literature of Teika's day and earlier is mainly what he thought was worthy of preservation." He also served on the committee that compiled the eighth imperial anthology, the Shin Kokin Wakashū, and along with the itinerant monk Saigyō and Emperor Go-Toba, is considered one of the best poets represented in the collection. More poems by Saigyō were included in the collection than those of any other poet, and centuries later Matsuo Bashō selected him as the representative poet of the waka genre.
The third shogun, Minamoto no Sanetomo, was the first distinctive new poet of the Kamakura period, and he studied the art under Teika's tutelage. Sanetomo's admirers have included Kamo no Mabuchi and Saitō Mokichi.
Zen monks were associated with the composition of poetry in Chinese, and at least one Zen monk, Shōtetsu, was notable for his contributions to the waka medium. After Shōtetsu, however, waka composition became an oddity until modern times.
The Kamakura period saw an explosion in the popularity of a new genre: the "war tale" (gunki monogatari), whose early representative works include the Hōgen Monogatari, Heiji Monogatari and Heike Monogatari. The latter work, which recounted the rise and fall of the Taira clan, has been described as "the Japanese epic", and the twentieth-century novelist and essayist Kafū Nagai called it "a unique and immortal Japanese épopée." These works were at least partly indebted to earlier Heian works such as the Shōmonki (ja:将門記) and Mutsu Waki (ja:陸奥話記), bare historical chronicles of battles fought against Taira no Masakado and the Earlier Nine Years' War, narrated in a non-literary style of classical Chinese as opposed to the mixed Sino-Japanese vernacular of the later Kamakura works.
Muromachi period (1333–1568)
Takauji and many other samurai soon became dissatisfied with Emperor Go-Daigo's Kenmu Restoration, an ambitious attempt to monopolize power in the imperial court. Takauji rebelled after Go-Daigo refused to appoint him shōgun. In 1338, Takauji captured Kyoto and installed a rival member of the imperial family to the throne, Emperor Kōmyō, who did appoint him shogun. Go-Daigo responded by fleeing to the southern city of Yoshino, where he set up a rival government. This ushered in a prolonged period of conflict between the Northern Court and the Southern Court.
Takauji set up his shogunate in the Muromachi district of Kyoto. However, the shogunate was faced with the twin challenges of fighting the Southern Court and of maintaining its authority over its own subordinate governors. Like the Kamakura shogunate, the Muromachi shogunate appointed its allies to rule in the provinces, but these men increasingly styled themselves as feudal lords—called daimyōs—of their domains and often refused to obey the shogun. The Ashikaga shogun who was most successful at bringing the country together was Takauji's grandson Ashikaga Yoshimitsu, who came to power in 1368 and remained influential until his death in 1408. Yoshimitsu expanded the power of the shogunate and in 1392, brokered a deal to bring the Northern and Southern Courts together and end the civil war. Henceforth, the shogunate kept the Emperor and his court under tight control.
The Kinkaku-ji or "Temple of the Golden Pavilion" was built by the shōgun Ashikaga Yoshimitsu in 1397 CE. The site was originally a villa called Kitayama-dai of the powerful statesman Saionji Kintsune. Ashikaga Yoshimitsu purchased it from the Saionji family and transformed it into Kinkaku-ji. When Yoshimitsu died the building was converted into a Zen temple by his son, according to his wishes.
During the final century of the Ashikaga shogunate the country descended into another, more violent period of civil war. This started in 1467 when the Ōnin War broke out over who would succeed the ruling shogun. The daimyōs each took sides and burned Kyoto to the ground while battling for their preferred candidate. By the time the succession was settled in 1477, the shogun had lost all power over the daimyō, who now ruled hundreds of independent states throughout Japan. During this Warring States period, daimyōs fought among themselves for control of the country. Some of the most powerful daimyōs of the era were Uesugi Kenshin, Takeda Shingen, and Date Masamune. One enduring symbol of this era was the ninja, skilled spies and assassins hired by daimyōs. Few definite historical facts are known about the secretive lifestyles of the ninja, who became the subject of many legends. In addition to the daimyōs, rebellious peasants and "warrior monks" affiliated with Buddhist temples also raised their own armies.
Amid this on-going anarchy, a Chinese ship was blown off course and landed in 1543 on the Japanese island of Tanegashima, just south of Kyushu. The three Portuguese traders on board were António Mota, Francisco Zeimoto, and presumably Fernão Mendes Pinto. They were the first Europeans to set foot in Japan. Soon European traders would introduce many new items to Japan, most importantly the musket. By 1556, the daimyōs were already using about 300,000 muskets in their armies. The Europeans also brought Christianity, which soon came to have a substantial following in Japan. The Jesuit missionary Francis Xavier disembarked in Kyushu in 1549.
In spite of the war, Japan's relative economic prosperity, which had begun in the Kamakura period, continued well into the Muromachi period. By 1450 Japan's population stood at ten million, compared to six million at the end of the thirteenth century. Commerce flourished, including considerable trade with China and Korea. Because the daimyōs and other groups within Japan were minting their own coins, Japan began to transition from a barter-based to a currency-based economy. During the period, some of Japan's most representative art forms developed, including ink wash painting, ikebana flower arrangement, the tea ceremony, Japanese gardening, bonsai, and Noh theater. Though the eighth Ashikaga shogun, Yoshimasa, was an ineffectual political and military leader, he played a critical role in promoting these cultural developments.
Azuchi–Momoyama period (1568–1600)
During the second half of the 16th century, Japan gradually reunified under two powerful warlords, Oda Nobunaga and Toyotomi Hideyoshi. The period takes its name from Nobunaga's headquarters, Azuchi Castle, and Hideyoshi's headquarters, Momoyama Castle.
Nobunaga was the daimyō of the small province of Owari. He burst onto the scene suddenly, in 1560, when, during the Battle of Okehazama, his army defeated a force several times its size led by the powerful daimyō Imagawa Yoshimoto. Nobunaga was renowned for his strategic leadership and his ruthlessness. He encouraged Christianity to incite hatred toward his Buddhist enemies and to forge strong relationships with European arms merchants. He equipped his armies with muskets and trained them with innovative tactics. He promoted talented men regardless of their social status, including his peasant servant Toyotomi Hideyoshi, who became one of his best generals.
The Azuchi–Momoyama period began in 1568, when Nobunaga seized Kyoto and thus effectively brought an end to the Ashikaga shogunate. He was well on his way towards his goal of reuniting all Japan in 1582 when one of his own officers, Akechi Mitsuhide, killed him during an abrupt attack on his encampment. Hideyoshi avenged Nobunaga by crushing Akechi's uprising and emerged as Nobunaga's successor. Hideyoshi completed the reunification of Japan by conquering Shikoku, Kyushu, and the lands of the Hōjō family in eastern Japan. He launched sweeping changes to Japanese society, including the confiscation of swords from the peasantry, new restrictions on daimyōs, persecutions of Christians, a thorough land survey, and a new law effectively forbidding the peasants and samurai from changing their social class. Hideyoshi's land survey designated all those who were cultivating the land as being "commoners", an act which effectively granted freedom to most of Japan's slaves.
As Hideyoshi's power expanded he dreamed of conquering China and launched two massive invasions of Korea starting in 1592. Hideyoshi failed to defeat the Chinese and Korean armies on the Korean Peninsula and the war ended only after his death in 1598.
In the hope of founding a new dynasty, Hideyoshi had asked his most trusted subordinates to pledge loyalty to his infant son Toyotomi Hideyori. Despite this, almost immediately after Hideyoshi's death, war broke out between Hideyori's allies and those loyal to Tokugawa Ieyasu, a daimyō and a former ally of Hideyoshi. Tokugawa Ieyasu won a decisive victory at the Battle of Sekigahara in 1600, ushering in 268 uninterrupted years of rule by the Tokugawa clan.
Early modern Japan
Edo period (1600–1868)
The Edo period was characterized by relative peace and stability under the tight control of the Tokugawa shogunate, which ruled from the eastern city of Edo (modern Tokyo). In 1603, Emperor Go-Yōzei declared Tokugawa Ieyasu shōgun, and Ieyasu abdicated two years later to groom his son as the second shōgun of what became a long dynasty. Nevertheless, it took time for the Tokugawas to consolidate their rule. In 1609, the shōgun gave the daimyō of Satsuma Domain permission to invade the Ryukyu Kingdom for perceived insults towards the shogunate; the Satsuma victory began 266 years of Ryukyu's dual subordination to Satsuma and China. Ieyasu led the Siege of Osaka that ended with the destruction of the Toyotomi clan in 1615. Soon after the shogunate promulgated the Laws for the Military Houses, which imposed tighter controls on the daimyōs, and the alternate attendance system, which required each daimyō to spend every other year in Edo. Even so, the daimyōs continued to maintain a significant degree of autonomy in their domains. The central government of the shogunate in Edo, which quickly became the most populous city in the world, took counsel from a group of senior advisors known as rōjū and employed samurai as bureaucrats. The Emperor in Kyoto was funded lavishly by the government but was allowed no political power.
The Tokugawa shogunate went to great lengths to suppress social unrest. Harsh penalties, including crucifixion, beheading, and death by boiling, were decreed for even the most minor offenses, though criminals of high social class were often given the option of seppuku ("self-disembowelment"), an ancient form of suicide that now became ritualized. Christianity, which was seen as a potential threat, was gradually clamped down on until finally, after the Christian-led Shimabara Rebellion of 1638, the religion was completely outlawed. To prevent further foreign ideas from sowing dissent, the third Tokugawa shogun, Iemitsu, implemented the sakoku ("closed country") isolationist policy under which Japanese people were not allowed to travel abroad, return from overseas, or build ocean-going vessels. The only Europeans allowed on Japanese soil were the Dutch, who were granted a single trading post on the island of Dejima. China and Korea were the only other countries permitted to trade, and many foreign books were banned from import.
During the first century of Tokugawa rule, Japan's population doubled to thirty million, due in large part to agricultural growth; the population remained stable for the rest of the period. The shogunate's construction of roads, elimination of road and bridge tolls, and standardization of coinage promoted commercial expansion that also benefited the merchants and artisans of the cities. City populations grew, but almost ninety percent of the population continued to live in rural areas. Both the inhabitants of cities and of rural communities would benefit from one of the most notable social changes of the Edo period: increased literacy and numeracy. The number of private schools greatly expanded, particularly those attached to temples and shrines, and raised literacy to thirty percent. This may have been the world's highest rate at the time and drove a flourishing commercial publishing industry, which grew to produce hundreds of titles per year. In the area of numeracy – approximated by an index measuring people's ability to report an exact rather than a rounded age (the age-heaping method), a level that correlates strongly with a country's later economic development – Japan's level was comparable to that of north-west European countries, and moreover, Japan's index came close to the 100 percent mark throughout the nineteenth century. These high levels of both literacy and numeracy were part of the socio-economic foundation for Japan's strong growth rates during the following century.
Culture and philosophy
The Edo period was a time of prolific cultural output, as the merchant classes grew in wealth and began spending their income on cultural and social pursuits too expensive for commoners and too inappropriate for the upper and ruling class.
Forms of theatre such as kabuki and bunraku puppet theatre became widely popular. These new forms of entertainment were (at the time) accompanied by short songs (kouta) and music played on the shamisen, a new import to Japan in 1600. Haiku, whose greatest master is generally agreed to be Matsuo Bashō (1644-1694), also rose as a major form of poetry.
Kabuki was still a relatively new form of theatre: cheap, flamboyant, and more accessible than traditional Noh theatre. Geisha, a new profession of entertainers, also became popular; they were cheaper and more accessible than high-ranking courtesans, who would sometimes require many expensive visits before sleeping with a customer.
Geisha would provide conversation at parties, sing kouta and play the shamisen, and would dance and play drinking games with customers, though they would not sleep with them. Geisha soon became more popular than courtesans, and faced a number of clamp-downs throughout the Edo period; a geisha accused of stealing a courtesan's customers could be fully investigated, and the profession faced dress edicts to set them apart from courtesans.
Both forms of theatre and the shamisen were considered slightly lower-class, as they spoke openly of pursuits improper for the upper classes, talked of geisha and the pleasure quarters, and were relatively frank in their descriptions of forbidden love, heartache and romance. Both kouta and haiku could be composed off-the-cuff, and many were whimsical in nature, describing bittersweet emotions and torrid romances.
Members of the wealthy merchant class who patronized these forms of entertainment were said to live hedonistic lives, which came to be called the ukiyo ("floating world"). This lifestyle inspired ukiyo-zōshi popular novels and ukiyo-e art, the latter of which were often woodblock prints that progressed to greater sophistication and use of multiple printed colors.
Decline and fall of the shogunate
By the late eighteenth and early nineteenth centuries, the shogunate showed signs of weakening. The dramatic growth of agriculture that had characterized the early Edo period had ended and the government handled the devastating Tenpō famines poorly. Peasant unrest grew and government revenues fell. The shogunate cut the pay of the already financially distressed samurai, many of whom worked side jobs to make a living. Discontented samurai were soon to play a major role in engineering the downfall of the Tokugawa shogunate.
At the same time, the people drew inspiration from new ideas and fields of study. Dutch books brought into Japan stimulated interest in Western learning, called rangaku or "Dutch learning". The physician Sugita Genpaku, for instance, used concepts from Western medicine to help spark a revolution in Japanese ideas of human anatomy. The scholarly field of kokugaku or "National Learning", developed by scholars such as Motoori Norinaga and Hirata Atsutane, promoted what it asserted were native Japanese values. For instance, it criticized the Chinese-style Neo-Confucianism advocated by the shogunate and emphasized the Emperor's divine authority, which the Shinto faith taught had its roots in Japan's mythic past, which was referred to as the "Age of the Gods".
The arrival in 1853 of a fleet of American ships commanded by Commodore Matthew C. Perry threw Japan into turmoil. The US government aimed to end Japan's isolationist policies. The shogunate had no defense against Perry's gunboats and had to agree to his demands that American ships be permitted to acquire provisions and trade at Japanese ports. The US, Great Britain, Russia, and other Western powers imposed what became known as "unequal treaties" on Japan which stipulated that Japan must allow citizens of these countries to visit or reside on Japanese territory and must not levy tariffs on their imports or try them in Japanese courts.
The shogunate's failure to oppose the Western powers angered many Japanese, particularly those of the southern domains of Chōshū and Satsuma. Many samurai there, inspired by the nationalist doctrines of the kokugaku school, adopted the slogan of sonnō jōi ("revere the Emperor, expel the barbarians"), and the two domains went on to form an alliance. In August 1866, soon after becoming shogun, Tokugawa Yoshinobu struggled to maintain power as civil unrest continued. In November 1867, Yoshinobu officially tendered his resignation to the Emperor and formally stepped down ten days later. In 1868, the Chōshū and Satsuma domains convinced the young Emperor Meiji and his advisors to issue a rescript calling for an end to the Tokugawa shogunate, and their armies soon marched on Edo; the ensuing Boshin War led to the eventual fall of the shogunate.
Meiji period (1868–1912)
The Emperor was restored to nominal supreme power, and in 1869, the imperial family moved to Edo, which was renamed Tokyo ("eastern capital"). However, the most powerful men in the government were former samurai from Chōshū and Satsuma rather than the Emperor, who was only fifteen in 1868. These men, known as the Meiji oligarchs, oversaw the dramatic changes Japan would experience during this period. The leaders of the Meiji government, often regarded as among the most successful statesmen in history, wanted Japan to become a modern nation-state that could stand equal to the Western imperialist powers. Among them were Ōkubo Toshimichi and Saigō Takamori from Satsuma, as well as Kido Takayoshi, Itō Hirobumi, and Yamagata Aritomo from Chōshū.
The Meiji government abolished the Neo-Confucian class structure and replaced the feudal domains of the daimyōs with prefectures. It instituted comprehensive tax reform and lifted the ban on Christianity. Major government priorities included the introduction of railways, telegraph lines, and a universal education system.
The Meiji government promoted widespread Westernization and hired hundreds of advisers from Western nations with expertise in such fields as education, mining, banking, law, military affairs, and transportation to remodel Japan's institutions. The Japanese adopted the Gregorian calendar, Western clothing, and Western hairstyles. One leading advocate of Westernization was the popular writer Fukuzawa Yukichi. As part of its Westernization drive, the Meiji government enthusiastically sponsored the importation of Western science, above all medical science. In 1893, Kitasato Shibasaburō established the Institute for Infectious Diseases, which would soon become world-famous, and in 1913, Hideyo Noguchi proved the link between syphilis and paresis. Furthermore, the introduction of European literary styles to Japan sparked a boom in new works of prose fiction. Characteristic authors of the period included Futabatei Shimei and Mori Ōgai, although the most famous of the Meiji era writers was Natsume Sōseki, who wrote satirical, autobiographical, and psychological novels combining both the older and newer styles. Ichiyō Higuchi, a leading female author, took inspiration from earlier literary models of the Edo period.
Government institutions developed rapidly in response to the Freedom and People's Rights Movement, a grassroots campaign demanding greater popular participation in politics. The leaders of this movement included Itagaki Taisuke and Ōkuma Shigenobu. Itō Hirobumi, the first Prime Minister of Japan, responded by writing the Meiji Constitution, which was promulgated in 1889. The new constitution established an elected lower house, the House of Representatives, but its powers were restricted. Only two percent of the population were eligible to vote, and legislation proposed in the House required the support of the unelected upper house, the House of Peers. Both the cabinet of Japan and the Japanese military were directly responsible not to the elected legislature but to the Emperor. Concurrently, the Japanese government also developed a form of Japanese nationalism under which Shinto became the state religion and the Emperor was declared a living god. Schools nationwide instilled patriotic values and loyalty to the Emperor.
Rise of imperialism and the military
In December 1871, a Ryukyuan ship was shipwrecked on Taiwan and its crew massacred. In 1874, using the incident as a pretext, Japan launched a military expedition to Taiwan to assert its claims to the Ryukyu Islands. The expedition featured the first instance of the Japanese military ignoring the orders of the civilian government: it set sail after being ordered to postpone.
Yamagata Aritomo, who was born a samurai in the Chōshū Domain, was a key force behind the modernization and enlargement of the Imperial Japanese Army, especially the introduction of national conscription. The new army was put to use in 1877 to crush the Satsuma Rebellion of discontented samurai in southern Japan, led by the former Meiji leader Saigō Takamori.
The Japanese military played a key role in Japan's expansion abroad. The government believed that Japan had to acquire its own colonies to compete with the Western colonial powers. After consolidating its control over Hokkaido and annexing the Ryukyu Kingdom, it next turned its attention to China and Korea. In 1894, Japanese and Chinese troops clashed in Korea, where they were both stationed to suppress the Donghak Rebellion. During the ensuing First Sino-Japanese War, Japan's highly motivated and well-led forces defeated the more numerous and better-equipped military of Qing China. The island of Taiwan was thus ceded to Japan in 1895, and Japan's government gained enough international prestige to allow Foreign Minister Mutsu Munemitsu to renegotiate the "unequal treaties". In 1902 Japan signed an important military alliance with the British.
Japan next clashed with Russia, which was expanding its power in Asia. The Russo-Japanese War of 1904–05 ended with the dramatic Battle of Tsushima, which was another victory for Japan's military. Japan thus laid claim to Korea as a protectorate in 1905, followed by full annexation in 1910.
Economic modernization and labor unrest
During the Meiji period, Japan underwent a rapid transition towards an industrial economy. Both the Japanese government and private entrepreneurs adopted Western technology and knowledge to create factories capable of producing a wide range of goods. By the end of the period, the majority of Japan's exports were manufactured goods. Some of Japan's most successful new businesses and industries constituted huge family-owned conglomerates called zaibatsu, such as Mitsubishi and Sumitomo. The phenomenal industrial growth sparked rapid urbanization. The proportion of the population working in agriculture shrank from 75 percent in 1872 to 50 percent by 1920.
Japan enjoyed solid economic growth at this time and most people lived longer and healthier lives. The population rose from 34 million in 1872 to 52 million in 1915. Poor working conditions in factories led to growing labor unrest, and many workers and intellectuals came to embrace socialist ideas. The Meiji government responded with harsh suppression of dissent. Radical socialists plotted to assassinate the Emperor in the High Treason Incident of 1910, after which the Tokkō secret police force was established to root out left-wing agitators. The government also introduced social legislation in 1911 setting maximum work hours and a minimum age for employment.
Communications became a high priority for businesses, government agencies, and upscale consumers, who quickly adopted the telegraph, the telephone, the phonograph, and the radio.
Taishō period (1912–1926)
During the short reign of Emperor Taishō, Japan developed stronger democratic institutions and grew in international power. The Taishō political crisis opened the period with mass protests and riots organized by Japanese political parties, which succeeded in forcing Katsura Tarō to resign as prime minister. This and the rice riots of 1918 increased the power of Japan's political parties over the ruling oligarchy. The Seiyūkai and Minseitō parties came to dominate politics by the end of the so-called "Taishō democracy" era. The franchise for the House of Representatives had been gradually expanded since 1890, and in 1925 universal male suffrage was introduced. However, in the same year the far-reaching Peace Preservation Law also passed, prescribing harsh penalties for political dissidents.
Japan's participation in World War I on the side of the Allies sparked unprecedented economic growth and earned Japan new colonies in the South Pacific seized from Germany. After the war Japan signed the Treaty of Versailles and enjoyed good international relations through its membership in the League of Nations and participation in international disarmament conferences. The Great Kantō earthquake in September 1923 left over 100,000 dead, and combined with the resultant fires destroyed the homes of more than three million.
The growth of popular prose fiction, which began during the Meiji period, continued into the Taishō period as literacy rates rose and book prices dropped. Notable literary figures of the era included short story writer Ryūnosuke Akutagawa and the novelist Haruo Satō. Jun'ichirō Tanizaki, described as "perhaps the most versatile literary figure of his day" by the historian Conrad Totman, produced many works during the Taishō period influenced by European literature, though his 1929 novel Some Prefer Nettles reflects deep appreciation for the virtues of traditional Japanese culture. At the end of the Taishō period, Tarō Hirai, known by his penname Edogawa Ranpo, began writing popular mystery and crime stories.
Shōwa period (1926–1989)
Emperor Hirohito's sixty-three-year reign from 1926 to 1989 is the longest in recorded Japanese history. The first twenty years were characterized by the rise of extreme nationalism and a series of expansionist wars. After suffering defeat in World War II, Japan was occupied by foreign powers for the first time in its history, and then re-emerged as a major world economic power.
Manchurian Incident and the Second Sino-Japanese War
Left-wing groups had been subject to violent suppression by the end of the Taishō period, and radical right-wing groups, inspired by fascism and Japanese nationalism, rapidly grew in popularity. The extreme right became influential throughout the Japanese government and society, notably within the Kwantung Army, a Japanese army stationed in China along the Japanese-owned South Manchuria Railroad. During the Manchurian Incident of 1931, radical army officers bombed a small portion of the South Manchuria Railroad and, falsely attributing the attack to the Chinese, invaded Manchuria. The Kwantung Army conquered Manchuria and set up the puppet government of Manchukuo there without permission from the Japanese government. International criticism of Japan following the invasion led to Japan withdrawing from the League of Nations.
Prime Minister Tsuyoshi Inukai of the Seiyūkai Party attempted to restrain the Kwantung Army and was assassinated in 1932 by right-wing extremists. Because of growing opposition within the Japanese military and the extreme right to party politicians, whom they saw as corrupt and self-serving, Inukai was the last party politician to govern Japan in the pre-World War II era. In February 1936 young radical officers of the Imperial Japanese Army attempted a coup d'état. They assassinated many moderate politicians before the coup was suppressed. In its wake the Japanese military consolidated its control over the political system, and most political parties were abolished when the Imperial Rule Assistance Association was founded in 1940.
Japan's expansionist vision grew increasingly bold. Many of Japan's political elite aspired to have Japan acquire new territory for resource extraction and settlement of surplus population. These ambitions led to the outbreak of the Second Sino-Japanese War in 1937. After capturing the Chinese capital of Nanjing, the Japanese military committed the infamous Nanjing Massacre. It failed, however, to defeat the Chinese government led by Chiang Kai-shek, and the war descended into a bloody stalemate that lasted until 1945. Japan's stated war aim was to establish the Greater East Asia Co-Prosperity Sphere, a vast pan-Asian union under Japanese domination. Hirohito's role in Japan's foreign wars remains a subject of controversy, with various historians portraying him as either a powerless figurehead or an enabler and supporter of Japanese militarism.
The United States opposed Japan's invasion of China and responded with increasingly stringent economic sanctions intended to deprive Japan of the resources to continue its war in China. Japan reacted by forging an alliance with Germany and Italy in 1940, known as the Tripartite Pact, which worsened its relations with the US. In July 1941, the United States, the United Kingdom, and the Netherlands froze all Japanese assets when Japan completed its invasion of French Indochina by occupying the southern half of the country, further increasing tension in the Pacific.
World War II
In late 1941, Japan's government, led by Prime Minister and General Hideki Tojo, decided to break the US-led embargo through force of arms. On December 7, 1941, the Imperial Japanese Navy launched a surprise attack on the American fleet at Pearl Harbor, Hawaii. This brought the US into World War II on the side of the Allies. Japan then successfully invaded the Asian colonies of the United States, the United Kingdom, and the Netherlands, including the Philippines, Malaya, Hong Kong, Singapore, Burma, and the Dutch East Indies.
In the early stages of the war, Japan scored victory after victory. The tide began to turn against Japan following the Battle of Midway in June 1942 and the subsequent Battle of Guadalcanal, in which Allied troops wrested the Solomon Islands from Japanese control. During this period the Japanese military was responsible for such war crimes as mistreatment of prisoners of war, massacres of civilians, and the use of chemical and biological weapons. The Japanese military earned a reputation for fanaticism, often employing banzai charges and fighting almost to the last man against overwhelming odds. In 1944 the Imperial Japanese Navy began deploying squadrons of kamikaze pilots who crashed their planes into enemy ships.
Life in Japan became increasingly difficult for civilians due to stringent rationing of food, electrical outages, and a brutal crackdown on dissent. In 1944 the US Army captured the island of Saipan, which allowed the United States to begin widespread bombing raids on the Japanese mainland. These destroyed over half of the total area of Japan's major cities. The Battle of Okinawa, fought between April and June 1945, was the largest naval operation of the war and left 77,166 Japanese soldiers and more than 140,000 Okinawan civilians dead, suggesting that the planned invasion of mainland Japan would be even bloodier. The Japanese superbattleship Yamato was sunk en route to aid in the Battle of Okinawa.
On August 6, 1945, the US dropped an atomic bomb on Hiroshima, killing over 70,000 people in the first nuclear attack in history. On August 9, the Soviet Union declared war on Japan and invaded Manchukuo and other territories, and a second atomic bomb struck Nagasaki, killing around 40,000 people. Japan's unconditional surrender was communicated to the Allies on August 14 and announced by Emperor Hirohito on national radio the following day; it marked the end of Imperial Japan's ultranationalist ideology and was a major turning point in Japanese history.
Occupation of Japan
Japan experienced dramatic political and social transformation under the Allied occupation of 1945–1952. US General Douglas MacArthur, the Supreme Commander for the Allied Powers, served as Japan's de facto leader and played a central role in implementing reforms, many of them inspired by the New Deal of the 1930s.
The occupation sought to decentralize power in Japan by breaking up the zaibatsu, transferring ownership of agricultural land from landlords to tenant farmers, and promoting labor unionism. Other major goals were the demilitarization and democratization of Japan's government and society. Japan's military was disarmed, its colonies were granted independence, the Peace Preservation Law and Tokkō were abolished, and the International Military Tribunal of the Far East tried war criminals. The cabinet became responsible not to the Emperor but to the elected National Diet. The Emperor was permitted to remain on the throne, but was ordered to renounce his claims to divinity, which had been a pillar of the State Shinto system. Japan's new constitution came into effect in 1947 and guaranteed civil liberties, labor rights, and women's suffrage, and through Article 9, Japan renounced its right to go to war with another nation.
The San Francisco Peace Treaty of 1951 officially normalized relations between Japan and the United States. The occupation ended in 1952, although the US continued to administer a number of the Ryukyu Islands. The Ogasawara Islands were returned to Japanese sovereignty in 1968, and Japanese citizens were allowed to return to them; Okinawa, the last to be returned, followed in 1972. The US continues to operate military bases throughout the Ryukyu Islands, mostly on Okinawa, as part of the US-Japan Security Treaty.
Postwar growth and prosperity
Shigeru Yoshida served as prime minister in 1946–1947 and 1948–1954 and played a key role in guiding Japan through the occupation. His policies, known as the Yoshida Doctrine, proposed that Japan forge a close relationship with the United States and focus on developing the economy rather than pursuing a proactive foreign policy. Yoshida was one of the longest-serving prime ministers in Japanese history and the third-longest-serving of post-occupation Japan. His Liberal Party merged in 1955 into the new Liberal Democratic Party (LDP), which went on to dominate Japanese politics for the remainder of the Shōwa period.
Although the Japanese economy was in bad shape in the immediate postwar years, an austerity program implemented in 1949 by finance expert Joseph Dodge ended inflation. The Korean War (1950–1953) was a major boon to Japanese business. In 1949 the Yoshida cabinet created the Ministry of International Trade and Industry (MITI) with a mission to promote economic growth through close cooperation between the government and big business. MITI sought successfully to promote manufacturing and heavy industry, and encourage exports. The factors behind Japan's postwar economic growth included technology and quality control techniques imported from the West, close economic and defense cooperation with the United States, non-tariff barriers to imports, restrictions on labor unionization, long work hours, and a generally favorable global economic environment. Japanese corporations successfully retained a loyal and experienced workforce through the system of lifetime employment, which assured their employees a safe job.
By 1955, the Japanese economy had grown beyond prewar levels, and by 1968 it had become the second largest in the world. The GNP expanded at an annual rate of nearly 10% from 1956 until the 1973 oil crisis slowed growth to a still-rapid average annual rate of just over 4% until 1991. Life expectancy rose and Japan's population increased to 123 million by 1990. Ordinary Japanese people became wealthy enough to purchase a wide array of consumer goods. During this period, Japan became the world's largest manufacturer of automobiles and a leading producer of electronics. Japan signed the Plaza Accord in 1985 to depreciate the US dollar against the yen and other currencies. By the end of 1987, the Nikkei stock market index had doubled and the Tokyo Stock Exchange became the largest in the world. During the ensuing economic bubble, stock and real-estate loans grew rapidly.
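As a rough check on what those growth rates imply, the following back-of-the-envelope sketch compounds them over the stated year ranges; the rates and dates come from the paragraph above, while everything else is illustrative.

```python
# Compound the quoted GNP growth rates over the stated periods.
# A constant annual rate r sustained for n years multiplies output
# by (1 + r) ** n.

def compound(rate: float, years: int) -> float:
    """Cumulative growth factor for a constant annual rate."""
    return (1 + rate) ** years

high = compound(0.10, 1973 - 1956)  # ~10% a year, 1956-1973
slow = compound(0.04, 1991 - 1973)  # ~4% a year, 1973-1991

print(f"1956-1973: ~{high:.1f}x")         # roughly a five-fold expansion
print(f"1973-1991: ~{slow:.1f}x")         # roughly a doubling
print(f"overall:   ~{high * slow:.0f}x")  # on the order of ten-fold
```

In other words, the quoted rates imply that Japan's output grew roughly tenfold between 1956 and 1991, which is consistent with the economy overtaking all but the United States by 1968.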
Japan became a member of the United Nations in 1956 and further cemented its international standing in 1964, when it hosted the Olympic Games in Tokyo. Japan was a close ally of the United States during the Cold War, though this alliance did not have unanimous support from the Japanese people. As requested by the United States, Japan reconstituted its military in 1954 as the Japan Self-Defense Forces (JSDF), though some Japanese insisted that the very existence of the JSDF was a violation of Article 9 of Japan's constitution. In 1960, hundreds of thousands protested against amendments to the US-Japan Security Treaty. Japan successfully normalized relations with the Soviet Union in 1956, despite an ongoing dispute over the ownership of the Kuril Islands, and with South Korea in 1965, despite an ongoing dispute over the ownership of the Liancourt Rocks. In accordance with US policy, Japan recognized the Republic of China on Taiwan as the legitimate government of China after World War II, though Japan switched its recognition to the People's Republic of China in 1972.
Among cultural developments, the immediate post-occupation period became a golden age for Japanese cinema. The reasons for this include the abolition of government censorship, low film production costs, expanded access to new film techniques and technologies, and huge domestic audiences at a time when other forms of recreation were relatively scarce.
Heisei period (1989–2019)
Emperor Akihito's reign began upon the death of his father, Emperor Hirohito. The economic bubble burst in 1989, and stock and land prices plunged as Japan entered a deflationary spiral. Banks found themselves saddled with insurmountable debts that hindered economic recovery. Stagnation worsened as the birthrate declined far below replacement level. The 1990s are often referred to as Japan's Lost Decade. Economic performance was frequently poor in the following decades, and the stock market never returned to its pre-1989 highs. Japan's system of lifetime employment largely collapsed and unemployment rates rose. The faltering economy and several corruption scandals weakened the LDP's dominant political position; nevertheless, Japan was governed by non-LDP prime ministers only in 1993–1996 and 2009–2012.
Japan's handling of its war legacy has strained international relations. China and Korea have found official apologies, such as those of the Emperor in 1990 and the Murayama Statement of 1995, inadequate or insincere. Nationalist politics have exacerbated the strain, through denial of the Nanjing Massacre and other war crimes, through revisionist history textbooks that have provoked protests in East Asia, and through frequent visits by Japanese politicians to Yasukuni Shrine, where convicted war criminals are enshrined. Legislation in 2015 expanding the military's role overseas was criticized as a "war bill".
On March 11, 2011, one of the largest earthquakes recorded in Japan struck the northeast of the country. The resulting tsunami damaged the nuclear facilities in Fukushima, which experienced a meltdown and severe radiation leakage. In the 21st century there have been increasing reports on the prevalence of sexlessness among the Japanese, along with its byproducts, such as a declining population, the growing popularity of sexbots, and the "herbivore men" phenomenon. Some observers blame this sexlessness on economic insecurity, others on anaphrodisiac government policies.
Reiwa period (2019–present)
Emperor Naruhito's reign began upon the abdication of his father, Emperor Akihito, on May 1, 2019.
Social stratification in Japan became pronounced during the Yayoi period. Expanding trade and agriculture increased the wealth of society, which was increasingly monopolized by social elites. By 600 AD, a class structure had developed which included court aristocrats, the families of local magnates, commoners, and slaves. Over 90% were commoners, who included farmers, merchants, and artisans. During the late Heian period, the governing elite consisted of three classes. The traditional aristocracy shared power with Buddhist monks and samurai, though the latter became increasingly dominant in the Kamakura and Muromachi periods. These periods witnessed the rise of the merchant class, which diversified into a greater variety of specialized occupations.
Women initially held social and political equality with men, and archaeological evidence suggests a prehistoric preference for female rulers in western Japan. Female emperors appear in recorded history until the Meiji Constitution declared strict male-only succession in 1889. Chinese Confucian-style patriarchy was first codified in the 7th–8th centuries with the ritsuryō system, which introduced a patrilineal family register with a male head of household. Women until then had held important roles in government, roles that thereafter gradually diminished, though even in the late Heian period women wielded considerable court influence. Marital customs and many laws governing private property remained gender neutral.
For reasons that remain unclear to historians, the status of women deteriorated rapidly from the fourteenth century onwards. Women of all social classes lost the right to own and inherit property and were increasingly viewed as inferior to men. Hideyoshi's land survey of the 1590s further entrenched the status of men as dominant landholders. During the US occupation following World War II, women gained legal equality with men but continued to face widespread workplace discrimination. A movement for women's rights led to the passage of an equal employment law in 1986, but by the 1990s women still held only 10% of management positions.
The Tokugawa shogunate rigidified long-existent class divisions, placing most of the population into a Neo-Confucian hierarchy of four occupations, with the ruling elite at the top, followed by the peasants who made up 80% of the population, then artisans, and merchants at the bottom. Court nobles, clerics, outcasts, entertainers, and workers of the licensed quarters fell outside this structure. Different legal codes applied to different classes, marriage between classes was prohibited, and towns were subdivided into different class areas. The social stratification had little bearing on economic conditions: many samurai lived in poverty and the wealth of the merchant class grew throughout the period as the commercial economy developed and urbanization grew. The Edo-era social power structure proved untenable and gave way following the Meiji Restoration to one in which commercial power played an increasingly significant political role.
Although all social classes were legally abolished at the start of the Meiji period, income inequality greatly increased. New economic class divisions were formed between capitalist business owners who formed the new middle class, small shopkeepers of the old middle class, the working class in factories, rural landlords, and tenant farmers. The great disparities of income between the classes dissipated during and after World War II, eventually declining to levels that were among the lowest in the industrialized world. Some postwar surveys indicated that up to 90% of Japanese self-identified as being middle class.
Populations of workers in professions considered unclean, such as leatherworkers and those who handled the dead, developed in the 15th and 16th centuries into hereditary outcast communities. These people, later called burakumin, fell outside the Edo-period class structure and suffered discrimination that lasted after the class system was abolished. Though activism has improved the social conditions of those from burakumin backgrounds, discrimination in employment and education lingered into the 21st century.
See also
- Bibliography of Japanese history
- Economic history of Japan
- Higashiyama period
- Historiography of Japan
- History of East Asia
- History of Japanese art
- History of Japanese foreign relations
- Australia–Japan relations
- Austria–Japan relations
- Brazil–Japan relations
- Canada–Japan relations
- China–Japan relations
- Foreign relations of Meiji Japan
- France–Japan relations
- Germany–Japan relations
- Greater East Asia Co-Prosperity Sphere, 1930–1945
- Greece–Japan relations
- History of Japan–Korea relations
- History of Sino-Japanese relations, China
- India–Japan relations
- Ireland–Japan relations
- Israel–Japan relations
- Italy–Japan relations
- Japanese foreign policy on Southeast Asia
- Japan–Mexico relations
- Japan–Netherlands relations
- Japan–Portugal relations
- Japan–Russia relations
- Japan–South Korea relations
- Japan–Soviet Union relations
- Japan–Spain relations
- Japan–Turkey relations
- Japan–United Kingdom relations
- Japan–United States relations
- History of manga
- History of Tokyo
- List of Emperors of Japan
- List of Prime Ministers of Japan
- List of World Heritage Sites in Japan
- Military history of Japan
- Politics of Japan
- Timeline of Japanese history
- Bulletin of the National Museum of Japanese History, in Japanese
- Japanese Journal of Religious Studies
- Journal of Japanese Studies
- Monumenta Nipponica, Japanese studies, in English
- Social Science Japan Journal
- The dates of the Asuka period are not widely agreed upon, with some historians, particularly art historians, dividing the period 538–710 into two or more periods. Others take a later start date for the Asuka period, for example starting it in 592 with the accession of Empress Suiko.
- Fujita, Masaki (2016). "Advanced maritime adaptation in the western Pacific coastal region extends back to 35,000–30,000 years before present". Proceedings of the National Academy of Sciences of the United States of America. 113 (40): 11184–11189. doi:10.1073/pnas.1607857113. PMC 5056111. PMID 27638208.
- "Water Supply in Japan". Ministry of Health, Labour and Welfare. Archived from the original (website) on January 26, 2018. Retrieved September 26, 2018.
- "平成29年全国都道府県市区町村別の面積を公表" (in Japanese). 国土地理院 (Geospatial Information Authority of Japan). Archived from the original on September 19, 2018. Retrieved September 19, 2018.
- "日本の領海等概念図". 海上保安庁海洋情報部. Archived from the original on August 12, 2018. Retrieved August 12, 2018.
- Barnes, Gina L. (2003). "Origins of the Japanese Islands: The New "Big Picture"" (PDF). University of Durham. Archived from the original (PDF) on April 28, 2011. Retrieved August 11, 2009.
- Sella, Giovanni F.; Dixon, Timothy H.; Mao, Ailin (2002). "REVEL: A model for Recent plate velocities from space geodesy". Journal of Geophysical Research: Solid Earth. 107 (B4): ETG 11–1–ETG 11–30. Bibcode:2002JGRB..107.2081S. doi:10.1029/2000jb000033. ISSN 0148-0227.
- "Tectonics and Volcanoes of Japan". Oregon State University. Archived from the original on February 4, 2007. Retrieved March 27, 2007.
- "Forest area". The World Bank Group. Retrieved October 11, 2015.
- "地形分類" (PDF). Geospatial Information Authority of Japan. Retrieved October 14, 2015.
- ""World Population prospects – Population division"". population.un.org. United Nations Department of Economic and Social Affairs, Population Division. Retrieved November 9, 2019.
- ""Overall total population" – World Population Prospects: The 2019 Revision" (xslx). population.un.org (custom data acquired via website). United Nations Department of Economic and Social Affairs, Population Division. Retrieved November 9, 2019.
- Schirokauer 2013, pp. 128–130.
- Sanz, 157–159.
- Tsutsumi Takashi (January 18, 2012). "MIS3 edge-ground axes and the arrival of the first Homo sapiens in the Japanese archipelago". Quaternary International. 248: 70–78. Bibcode:2012QuInt.248...70T. doi:10.1016/j.quaint.2011.01.030.
- Shinoda, Ken-ichi; Adachi, Noboru (2017). "Ancient DNA Analysis of Palaeolithic Ryukyu Islanders" (PDF). Terra Australis. Canberra, Australia: ANU Press. 45: 51–59. Retrieved August 6, 2017.
- Matsu'ura, Shuji (1999). "A Chronological Review of Pleistocene Human Remains from the Japanese Archipelago" (PDF). Interdisciplinary Perspectives on the Origins of the Japanese: 181–197. Archived from the original (PDF) on August 7, 2018. Retrieved August 6, 2017.
- Nakagawa, Ryohei (2010). "Pleistocene human remains from Shiraho-Saonetabaru Cave on Ishigaki Island, Okinawa, Japan, and their radiocarbon dating". Anthropological Science. The Anthropological Society of Nippon. 118 (3). Retrieved August 6, 2017.
- Jomon Fantasy: Resketching Japan's Prehistory. June 22, 1999.
- Habu, 42.
- Habu 2004, p. 3, 258.
- Timothy Jinam; Hideaki Kanzawa-Kiriyama; Naruya Saitou (2015). "Human genetic diversity in the Japanese Archipelago: dual structure and beyond". Genes & Genetic Systems. 90 (3): 147–152. doi:10.1266/ggs.90.147. PMID 26510569.
- Robbeets, Martine (2015), Diachrony of Verb Morphology: Japanese and the Transeurasian Languages, De Gruyter, p. 26, ISBN 978-3-11-039994-3
- Kidder, 59.
- Kuzmin, Y. V. (2006). "Chronology of the Earliest Pottery in East Asia: Progress and Pitfalls". Antiquity. 80 (308): 362–371. doi:10.1017/s0003598x00093686.
- Seiji Kobayashi. "Eastern Japanese Pottery During the Jomon-Yayoi Transition: A Study in Forager-Farmer Interaction". Kokugakuin Tochigi Junior College. Archived from the original on September 23, 2009.
- Batten, 60.
- Kumar, 1.
- Silberman et al., 154–155.
- Schirokauer et al., 133–143.
- Shōda, Shinya (2007). "A Comment on the Yayoi Period Dating Controversy". Bulletin of the Society for East Asian Archaeology. 1.
- Imamura, 168–170.
- Kaner, 462.
- Yoshio Tsuchiya (1998). "A Brief History of Japanese Glass". Glass Art Society. Archived from the original on September 23, 2017. Retrieved September 1, 2015.
- Maher, 40.
- Henshall, 11–12.
- Henshall, 13.
- Kanzawa-Kiriyama, H.; Kryukov, K.; Jinam, T. A.; Hosomichi, K.; Saso, A.; Suwa, G.; Ueda, S.; Yoneda, M.; Tajima, A.; Shinoda, K. I.; Inoue, I.; Saitou, N. (June 1, 2016). "A partial nuclear genome of the Jomons who lived 3000 years ago in Fukushima, Japan". Journal of Human Genetics. 62 (2): 213–221. doi:10.1038/jhg.2016.110. PMC 5285490. PMID 27581845.
- "'Jomon woman' helps solve Japan's genetic mystery". NHK World – Japan News. Retrieved July 11, 2019.
- Farris, 3.
- Song-Nai Rhee et al., "Korean Contributions to Agriculture, Technology, and State Formation in Japan", Asian Perspectives, Fall 2007, 431.
- Henshall, 14–15.
- Henshall, 15–16.
- Totman, 102.
- "堺市・仁徳陵古墳". kiis.or.jp. Retrieved February 23, 2017.
- "Mozu-Furuichi Kofungun, Ancient Tumulus Clusters". UNESCO. Retrieved November 22, 2015.
- Henshall, 16, 22.
- Totman, 103–104.
- Kodansha Encyclopedia of Japan Volume One (New York: Kodansha, 1983), 104–106.
- Perez, 16, 18.
- Kidder, J. Edward (1964). Early Japanese Art: The Great Tombs and Treasures. D Van Nostrand Company Inc. p. 105.
- Gary L. Ebersole, "The Religio-Aesthetic Complexes in Manyoushuu Poetry, with Special Reference to Hitomaro's Aki-no-no Sequence," History of Religions 23:1 (August 1983): 18–36, at pp. 28–33, 34.
- "New Discovery About Persians in Ancient Japan Generates Excitement". October 10, 2016. Retrieved June 1, 2019.
- Totman, 106.
- Henshall, 18–19.
- Weston, 127.
- Song-Nai Rhee et al., "Korean Contributions to Agriculture, Technology, and State Formation in Japan", Asian Perspectives, Fall 2007, 445.
- Totman, 107–108.
- Sansom, 57.
- Sansom, 68.
- Masahide Bito; Akio Watanabe (1984). A Chronological History of Japanese History. Tokyo: International Society for Educational Information. ASIN B00DHO8S1Y.
- Henshall, 24.
- Henshall, 56.
- Keene 1999: 85, 89.
- Totman, 140–142.
- Henshall, 26.
- Deal and Ruppert, 63–64.
- Farris, 59.
- "Todaiji". Ancient History Encyclopedia. Retrieved April 7, 2019.
- Sansom, 99.
- Henshall, 29–30.
- Suzanne Austin Alchon (2003). A Pest in the Land: New World Epidemics in a Global Perspective. UNM Press. p. 21. ISBN 978-0-8263-2871-7.
- Totman, 149–151.
- Keene 1999: 306.
- Totman, 151–152.
- Perez, 25–26.
- Henshall, 31.
- Totman, 153.
- Farris, 87.
- McCullough, 30–31.
- Meyer, 62.
- Sansom, 249–250.
- Takeuchi, 675–677.
- Henshall, 31–32.
- Henshall, 33–34.
- Henshall, 28.
- Totman, 186–187.
- Keene 1999: 477–478.
- Meyer, Milton W., page 44.
- Henshall, 30.
- Totman, 183.
- Henshall, 34–35.
- Perkins, 20.
- Weston, 139.
- MyPaedia article "Heishi".
- Weston 135–136.
- Keene 1999: 892–893, 897.
- Weston, 137–138.
- Henshall, 35–36.
- Perez, 28–29.
- Keene 1999: 672, 831.
- Totman, 156.
- Sansom, 441–442.
- Henshall, 39–40.
- Henshall, 40–41.
- Farris, 141–142, 149.
- Farris, 144–145.
- Perez, 32–33.
- Keene 1999: 321.
- Keene 1999: 320.
- Keene 1999: 323.
- Keene 1999: 321–322.
- Keene 1999: 324.
- Keene 1999: 650–651.
- Keene 1999: 674.
- Keene 1999: 673–674.
- Keene 1999: 657.
- Keene 1999: 643.
- Keene 1999: 680.
- Keene 1999: 681.
- Keene 1999: 700.
- Keene 1999: 700–701.
- Keene 1999: 702–703.
- Keene 1999: 701.
- Keene 1999: 736.
- Keene 1999: 735–736.
- Keene 1999: 617, 629.
- Keene 1999: 637; Keene 1998: 415.
- Keene 1999: 613–615.
- Henshall, 41.
- Henshall, 43–44.
- Perez, 37.
- "Kinkaku-ji in Kyoto". Asano Noboru. Retrieved July 15, 2010.
- Bornoff, Nicholas (2000). The National Geographic Traveler: Japan. National Geographic Society. ISBN 0-7922-7563-2.
- Scott, David (1996). Exploring Japan. Fodor's Travel Publications, Inc. ISBN 0-679-03011-5.
- Totman, 240–241.
- Perez, 46.
- Stephen Turnbull and Richard Hook, Samurai Commanders (1) (Oxford: Osprey, 2005), 53–54.
- Stephen Turnbull and Richard Hook, Samurai Commanders (2) (Oxford: Osprey, 2005), 50.
- Louis Perez, "Ninja," in Japan at War : An Encyclopedia, ed. Louis Perez (Santa Barbara, California: ABC-CLIO, 2013), 277–278.
- Perez, 39, 41.
- Henshall, 45.
- Perez, 46–47.
- Farris, 166.
- Farris, 152.
- Perez, 40.
- Perez, 43–45.
- Harold Bolitho, "Book Review: Yoshimasa and the Silver Pavilion," The Journal of Asian Studies, August 2004, 799–800.
- Kodansha Encyclopedia of Japan Volume One (New York: Kodansha, 1983), 126.
- Henshall, 46.
- Perez, 48–49.
- Weston, 141–143.
- Henshall, 47–48.
- Farris, 192.
- Perez, 51–52.
- Farris, 193.
- Henshall, 50.
- Hane, 133.
- Perez, 72.
- Henshall, 53–54.
- Henshall, 54–55.
- Turnbull, Stephen. The Samurai Capture a King: Okinawa 1609. Osprey Publishing, 2009. p. 13.
- Kerr, 162–167.
- Totman, 297.
- McClain, 26–27.
- Henshall, 57–58.
- Perez, 62–63.
- Totman, 308.
- Perez, 60.
- Henshall, 60.
- Martha Chaiklin, "Sakoku (1633–1854)", in Japan at War: An Encyclopedia, ed. Louis Perez (Santa Barbara, California: ABC-CLIO, 2013), 356–357.
- Henshall, 61.
- Totman, 317, 337.
- Totman, 319–320, 322.
- Jansen, 116–117.
- Perez, 67.
- Henshall, 64.
- Jansen, 163–164.
- Baten, Jörg (2016). A History of the Global Economy. From 1500 to the Present. Cambridge University Press. p. 177. ISBN 9781107507180.
- Dalby, Liza (2000). Little Songs of the Geisha (2nd ed.). Massachusetts: Tuttle Publishing. pp. 14–15. ISBN 0-8048-3250-1.
- Hane, 213–214.
- Hane, 200.
- Hane, 201–202.
- Deal, 296.
- Henshall, 68–69.
- Totman, 367–369.
- McClain, 123–124, 128.
- Sims, 8–9.
- Perez, 79–80.
- Walker, 149–151.
- Hane, 168–169.
- Perez, 84–85.
- Henshall, 70.
- Totman, 380, 382.
- Gordon 2009, pp. 55–56.
- Takano, p. 256.
- Henshall, 71, 236.
- Henshall, 75.
- Henshall, 78.
- Morton and Olenike, 171.
- Weston, 172–173.
- Henshall, 75–76, 217.
- Henshall, 79–89.
- W. Dean Kinzley, "Merging Lines: Organising Japan's National Railroad, 1906–1914", Journal of Transport History, 27#2 (2006)
- Totman, 401.
- Henshall, 84–85.
- Henshall, 81.
- Henshall, 83.
- Totman, 460–461.
- Lauerman, 421.
- Totman, 464–465.
- Henshall, 103.
- Weston, 254–255.
- Totman, 466.
- Mason and Caiger, 315.
- Henshall, 85–92.
- Bix, 27, 30.
- F.H. Hinsley, ed., The New Cambridge Modern History, Vol. 11: Material Progress and World-Wide Problems, 1870–98 (1962), pp. 464–486.
- Kerr, 356–360.
- Perez, 98.
- Henshall, 80.
- Totman, 422–424.
- Perez, 118–119.
- Perez, 120.
- Perez, 115, 121.
- Perez, 122.
- Henshall, 96–97.
- Henshall, 101–102.
- Henshall, 99–100.
- Sheldon Garon, "Rethinking Modernization and Modernity in Japanese History: A Focus on State-Society Relations", Journal of Asian Studies 53#2 (1994), pp. 346–366.
- Perez, 102–103.
- Hunter, 3.
- Totman, 403–404, 431.
- Totman, 440–442.
- Totman, 452–453.
- Perez, 134.
- Totman, 443.
- Kerim Yasar, Electrified Voices: How the Telephone, Phonograph, and Radio Shaped Modern Japan, 1868–1945 (2018)
- Henshall, 108–109.
- Perez, 135–136.
- Meyer, 179, 193.
- Large, 160.
- Perez, 138.
- Totman, 471, 488–489.
- Henshall, 111.
- Henshall, 110.
- Totman, 520.
- Totman, 525.
- Totman, 522–523.
- Totman, 524.
- Totman, 583.
- Sims, 139.
- Sims, 179–180.
- Perez, 139–140.
- Henshall, 114–115.
- Henshall, 115–116.
- McClain, 454.
- Henshall, 119–120.
- Henshall, 122–123.
- Henshall, 123–124.
- Weston, 201–203.
- Totman, 553–554.
- Totman, 555–556.
- Henshall, 124–126.
- Henshall, 129–130.
- Henshall, 132–133.
- Henshall, 131–132, 135.
- Frank, 28–29.
- Henshall, 134.
- Perez, 147–148.
- Morton and Olenike, 188.
- Totman, 562.
- "The Cornerstone of Peace." Kyushu-Okinawa Summit 2000: Okinawa G8 Summit Host Preparation Council, 2000. Accessed December 9, 2012. "The Cornerstone of Peace – number of names inscribed". Okinawa Prefecture. Archived from the original on September 27, 2011. Retrieved October 7, 2015.
- Feifer, xi, 446–463.
- Coox, 368.
- Henshall, 136–137.
- Max Fisher (August 15, 2012). "The Emperor's Speech: 67 Years Ago, Hirohito Transformed Japan Forever". The Atlantic. Retrieved October 25, 2015.
- Henshall, 142–143.
- Perez, 151–152.
- Henshall, 144.
- Perez, 150–151.
- Totman, 570.
- Mackie, 121.
- Henshall, 145–146.
- Totman, 571.
- Henshall, 147–148.
- Henshall, 150.
- Henshall, 145.
- Henshall, 158.
- Klein, Thomas. "The Ryukyus on the Eve of Reversion". Pacific Affairs. Vol. 45, No. 1 (Spring, 1972). p. 120.
- 沖縄県の基地の現状 [The current state of bases in Okinawa Prefecture] (in Japanese), Okinawa Prefectural Government
- 沖縄に所在する在日米軍施設・区域 [US military facilities and areas located in Okinawa] (in Japanese), Japan Ministry of Defense
- Perez, 156–157, 162.
- Perez, 159.
- Perez, 163.
- Henshall, 163.
- Henshall, 154–155.
- Henshall, 156–157.
- Henshall, 159–160.
- Perez, 169.
- Henshall, 161–162.
- Henshall, 162, 166, 182.
- Totman, 576.
- Justin McCurry; Julia Kollewe (February 14, 2011). "China overtakes Japan as world's second-largest economy". The Guardian.
- Gao 2009, p. 303.
- Totman, 584–585.
- Henshall, 160–161.
- Gao 2009, p. 305.
- Henshall, 167.
- Ito, 60.
- Totman, 580–581.
- Togo, 234–235.
- Togo, 162–163.
- Togo, 126–128.
- Perez, 177–178.
- Totman, 669.
- Henshall, 181–182.
- Henshall, 185–187.
- Meyer, 250.
- Leika Kihara (August 17, 2012). "Japan eyes end to decades long deflation". Reuters.
- Totman, 678.
- Henshall, 182–183.
- Henshall, 189–190.
- "Japan election: Shinzo Abe and LDP in sweeping win – exit poll". BBC News. December 16, 2012. Retrieved August 10, 2015.
- Henshall, 199.
- Henshall, 199–201.
- Henshall, 197–198.
- Henshall, 191.
- Obe, Mitsuru (September 18, 2015). "Japan Parliament Approves Overseas Military Expansion". The Wall Street Journal. Retrieved November 27, 2015.
- Henshall, 204.
- Ugai, Yagi & Wakai 2012, p. 140.
- Henshall, 187–188.
- Lee, Jason. "Robotic Evolution." Sex Robots. Palgrave Macmillan, Cham, 2017. 1–17.
- Kawanishi, Yuko. Mental Health Challenges Facing Contemporary Japanese Society. Brill, 2009.
- Farris, 96.
- Farris, 152, 181.
- Farris, 152, 157.
- Tonomura, 352.
- Tonomura, 351.
- Tonomura, 353–354.
- Tonomura, 354–355.
- Farris, 162–163.
- Farris, 159–160.
- Tonomura, 360.
- Hastings, 379.
- Totman, 614–615.
- Wakita 1991, p. 123.
- Neary 2009, p. 390.
- Henshall 2012, p. 56.
- Neary 2009, p. 391.
- Neary 2009, p. 392.
- Neary 2009, p. 393.
- Moriguchi and Saez, 80, 88.
- Neary, 397.
- Duus, 21.
- Neary 2003, p. 269.
- Neary 2003, p. 270.
- Neary 2003, p. 271.
- Batten, Bruce Loyd (2003). To the Ends of Japan: Premodern Frontiers, Boundaries, and Interactions. Honolulu, HI: University of Hawaii Press. ISBN 978-0-8248-2447-1.
- Bix, Hebert P. (2000). Hirohito and the Making of Modern Japan. New York, NY: Harper Collins. ISBN 978-0-06-186047-8.
- Coox, Alvin (1988). "The Pacific War," in The Cambridge History of Japan: Volume 6. Cambridge: Cambridge University Press.
- Deal, William E (2006). Handbook to Life in Medieval and Early Modern Japan. New York: Facts on File.
- Deal, William E and Ruppert, Brian Douglas (2015). A Cultural History of Japanese Buddhism. Chichester, West Sussex : Wiley Blackwell.
- Duus, Peter (2011). "Showa-era Japan and beyond," in Routledge Handbook of Japanese Culture and Society. New York: Routledge.
- Farris, William Wayne (2009). Japan to 1600: A Social and Economic History. Honolulu, HI: University of Hawaii Press. ISBN 978-0-8248-3379-4.
- Farris, William Wayne (1995). Population, Disease, and Land in Early Japan, 645–900. Cambridge, Massachusetts: Harvard University Asia Center. ISBN 978-0-674-69005-9.
- Feifer, George (1992). Tennozan: The Battle of Okinawa and the Atomic Bomb. New York: Ticknor & Fields.
- Frank, Richard (1999). Downfall: The End of the Imperial Japanese Empire. New York, NY: Random House. ISBN 978-0-14-100146-3.
- Gao, Bai (2009). "The Postwar Japanese Economy". In Tsutsui, William M. (ed.). A Companion to Japanese History. John Wiley & Sons. pp. 299–314. ISBN 978-1-4051-9339-9.
- Garon, Sheldon. "Rethinking Modernization and Modernity in Japanese History: A Focus on State-Society Relations" Journal of Asian Studies 53#2 (1994), pp. 346–366.
- Habu, Junko (2004). Ancient Jomon of Japan. Cambridge, MA: Cambridge Press. ISBN 978-0-521-77670-7.
- Hane, Mikiso (1991). Premodern Japan: A Historical Survey. Boulder, CO: Westview Press. ISBN 978-0-8133-4970-1.
- Hastings, Sally A. (2007). "Gender and Sexuality in Modern Japan," in A Companion to Japanese History. Malden, Massachusetts: Blackwell Publishing.
- Henshall, Kenneth (2012). A History of Japan: From Stone Age to Superpower. London: Palgrave Macmillan. ISBN 978-0-230-34662-8.
- Hunter, Janet (1984). Concise Dictionary of Modern Japanese History. Berkeley: University of California Press.
- Imamura, Keiji (1996). Prehistoric Japan: New Perspectives on Insular East Asia. Honolulu: University of Hawaii Press.
- Ito, Takatoshi (1992). The Japanese Economy. Cambridge, Massachusetts: MIT Press.
- Jansen, Marius (2000). The Making of Modern Japan. Cambridge, Massachusetts: Belknap Press of Harvard University.
- Kaner, Simon (2011). "The Archaeology of Religion and Ritual in the Japanese Archipelago," in The Oxford Handbook of the Archaeology of Ritual and Religion. Oxford: Oxford University Press.
- Keene, Donald (1998). A History of Japanese Literature, Vol. 3: Dawn to the West – Japanese Literature of the Modern Era (Fiction) (paperback ed.). New York, NY: Columbia University Press. ISBN 978-0-231-11435-6.
- Keene, Donald (1999). A History of Japanese Literature, Vol. 1: Seeds in the Heart – Japanese Literature from Earliest Times to the Late Sixteenth Century (paperback ed.). New York, NY: Columbia University Press. ISBN 978-0-231-11441-7.
- Kerr, George (1958). Okinawa: History of an Island People. Rutland, Vermont: Tuttle Company.
- Kidder, J. Edward (1993). "The Earliest Societies in Japan," in The Cambridge History of Japan: Volume 1. Cambridge: Cambridge University Press.
- Kumar, Ann (2008). Globalizing the Prehistory of Japan: Language, Genes and Civilisation. New York: Routledge.
- Large, Stephen S. (2007). "Oligarchy, Democracy, and Fascism," in A Companion to Japanese History. Malden, Massachusetts: Blackwell Publishing.
- Lauerman, Lynn (2002). Science & Technology Almanac. Westport, Connecticut: Greenwood Press.
- Mackie, Vera (2003). Feminism in Modern Japan. New York: Cambridge University Press.
- Maher, Kohn C. (1996). "North Kyushu Creole: A Language Contact Model for the Origins of Japanese," in Multicultural Japan: Palaeolithic to Postmodern. New York: Cambridge University Press.
- Mason, RHP and Caiger, JG (1997). A History of Japan. Rutland, Vermont: Tuttle.
- Meyer, Milton W. (2009). Japan: A Concise History. Lanham, Maryland: Rowman & Littlefield.
- McClain, James L. (2002). Japan: A Modern History. New York, NY: W. W. Norton & Company. ISBN 978-0-393-04156-9.
- McCullough, William H. (1999). "The Heian Court, 794–1070," in The Cambridge History of Japan: Volume 2. Cambridge: Cambridge University Press.
- Moriguchi, Chiaki and Saez, Emmanuel (2010). "The Evolution of Income Concentration in Japan, 1886–2005," in Top Incomes : A Global Perspective. Oxford: Oxford University Press.
- Morton, W Scott and Olenike, J Kenneth (2004). Japan: Its History and Culture. New York: McGraw-Hill.
- Neary, Ian (2003). "Burakumin at the End of History". Social Research. 70 (1): 269–294. JSTOR 40971613.
- Neary, Ian (2009). "Class and Social Stratification". In Tsutsui, William M. (ed.). A Companion to Japanese History. John Wiley & Sons. pp. 389–406. ISBN 978-1-4051-9339-9.
- Perkins, Dorothy. Encyclopedia of Japan: Japanese history and culture, from abacus to zori (1991).
- Perez, Louis G. (1998). The History of Japan. Westport, CT: Greenwood Press. ISBN 978-0-313-30296-1.
- Sansom, George (1958). A History of Japan to 1334. Stanford, CA: Stanford University Press. ISBN 978-0-8047-0523-3.
- Sanz, Nuria (2014). Human origin sites and the World Heritage Convention in Asia. UNESCO.
- Schirokauer, Conrad (2013). A Brief History of Chinese and Japanese Civilizations. Boston: Wadsworth Cengage Learning.
- Silberman, Neil Asher (2012). The Oxford Companion to Archaeology. New York: Oxford University Press.
- Sims, Richard (2001). Japanese Political History since the Meiji Restoration, 1868–2000. New York: Palgrave.
- Takeuchi, Rizo (1999). "The Rise of the Warriors," in The Cambridge History of Japan: Volume 2. Cambridge: Cambridge University Press.
- Togo, Kazuhiko (2005). Japan's Foreign Policy 1945–2003: The Quest for a Proactive Policy. Boston: Brill.
- Tonomura, Hitomi (2007). "Women and Sexuality in Premodern Japan," in A Companion to Japanese History. Malden, Massachusetts: Blackwell Publishing.
- Totman, Conrad (2005). A History of Japan. Malden, MA: Blackwell Publishing. ISBN 978-1-119-02235-0.
- Wakita, Osamu (1991). "The social and economic consequences of unification". In Hall, John Whitney (ed.). The Cambridge History of Japan. 4. Cambridge University Press. pp. 96–127. ISBN 978-0-521-22355-3.
- Walker, Brett (2015). A Concise History of Japan. Cambridge: Cambridge University Press.
- Weston, Mark (2002). Giants of Japan: The Lives of Japan's Greatest Men and Women. New York, NY: Kodansha. ISBN 978-0-9882259-4-7.
- Yasar, Kerim. Electrified Voices: How the Telephone, Phonograph, and Radio Shaped Modern Japan, 1868–1945 (2018)
- Furuya, Daisuke. "A Historiography in Modern Japan: the laborious quest for identity." Scandia: Tidskrift för historisk forskning 68.1 (2002), available online in English.
- Huffman, James L. Modern Japan: an encyclopedia of history, culture, and nationalism. Taylor & Francis, 1998. ISBN 0-8153-2525-8
Elizabethan Poetry, Part I: Songs
Five members of the Tudor family ruled England from 1485 to 1603. Of those one hundred eighteen years, Queen Elizabeth I ruled for forty-five (1558–1603). During her reign, the religious, political, economic, and intellectual changes that had begun under her grandfather, Henry VII, and her father, Henry VIII, reached a climax. The result was a flourishing of the arts and of patriotism. As Queen, Elizabeth not only ruled but also gloriously represented the spirit of her times. Both she and her people loved and lived life with zest. The Elizabethan Age was one of exuberance and enthusiasm.
The medieval focus on life after death gave way to an Elizabethan emphasis on the here and now. Though still religious, Elizabeth's subjects vigorously pursued the pleasures and benefits of worldly living.
Religion itself had been a source of controversy and struggle in England since the reign of Henry VIII. When the Pope refused to grant Henry a divorce from his Spanish wife, Catherine, so that he could marry Anne Boleyn, Henry cut ties with the Church in Rome and established himself as the head of the Anglican Church of England. Thus, Henry VIII introduced the Protestant Reformation, begun in Germany, to England. Though Henry generally maintained a balance between the Protestant and Catholic elements, his successors did not. The power struggle between religions accelerated under Henry's son and immediate successor, Edward VI, and under Mary, Henry's daughter by Catherine and successor to Edward. After Elizabeth, daughter of Anne Boleyn, took the throne, she definitively established the Anglican Church.
One of the greatest crises England encountered during Elizabeth's reign was an attack by the powerful Spanish navy. In July 1588, Philip II of Spain sent his Invincible Armada to invade England. The Spaniards lost over sixty-three ships and nine thousand men, and Spanish dominion of the seas was ended. England ruled the seas and her spirit of pride and patriotism soared.
The Elizabethan Age was a period of geographical explorations and expansion. Consequently, England emerged as a leader in the European race to build commercial empires. Trade with distant countries provided a new source of wealth to the middle class merchants.
Buoyed by its successes, England eagerly received the spirit of "rebirth" or "reawakening" that was influencing the thought of sixteenth-century Europe.
This "rebirth," later labeled by historians as the Renaissance, was sparked by a renewed interest in the classics of ancient Greece and Rome. It also resulted in a burst of creativity in, and cultivation of, the fine arts; in a growth in the spirit of individualism; in an expansion of intellectual thought; and in a new insight into the purpose and significance of the human person.
The Renaissance emphasis on the magnificence and wonder of the individual person, as well as of the surrounding world, encouraged Elizabethans to consider life as more than a process of waiting for life after death. They believed that life was exciting and beautiful and should be enjoyed immediately. Shakespeare's Hamlet exclaims, "What a piece of work is man! how noble in reason! how infinite in faculties! in form and moving how express and admirable."
The Renaissance ideal expanded the concept of the individual to include all aspects—spiritual, rational, emotional, and physical—of the human personality. The Elizabethan exuberance, therefore, was a reflection of a seemingly limitless desire to know, to do, and to be.
The English literature of the Renaissance offers ample proof of the Elizabethan respect for life and beauty, wherever it may be found. In this unit, you will read and study the songs and sonnets of the poets. You will examine the development of the English drama and read one of Shakespeare's plays. Finally, you will analyze and criticize the play you have read.
Elizabethan poetry offers a variety of thoughts in words and rhythms that are pleasing to hear. The ease with which you may enjoy this literature could lead you to the mistaken conclusion that the writer's task is an easy one. In this section, you will analyze some of the devices the writer must use to create poetry and prose that are melodious and meaningful. Your familiarity with these devices, in turn, will aid you in interpreting what you read.
The exuberance of the Elizabethan Age often expressed itself in songs, some spontaneous and others carefully designed. The development of musical instruments such as the virginal and viola da gamba complemented this impulse to sing. Nearly everyone in Elizabethan times could sing or play a musical instrument. In 1557, Richard Tottel published the first collection of songs and lyrics under the title Songs and Sonnets. This book, however, usually is called Tottel's Miscellany. Similar songbooks soon appeared, some with titles such as The Paradise of Dainty Devices and The Gorgeous Gallery of Gallant Inventions. Like these titles, many of the Elizabethan songs were decorative and elaborate; others, however, were clear and simple.
Elizabethan songs often alluded to Greek mythology. Such references are a natural way for Renaissance songwriters to express their admiration of classical times. In the poem "The Triumph of Charis" the poet used Charis as his subject. In Greek mythology, Charis is the personification of beauty and charm.
Figures of Speech
Personification is a figure of speech by which the author gives human forms and traits to something that is not human (inanimate object, animal, abstraction). Poets use personification to help sharpen the reader's interest and understanding.
In "Golden Slumbers Kiss Your Eyes," Thomas Dekker uses personification in the first and second lines: "Golden slumbers kiss your eyes,/ Smiles awake you when you rise."
If you thought that the "Song" from Cymbeline was the least simple and clear, your response was well founded. In that song, William Shakespeare uses an allusion, a figure of speech that can add a touch of sophistication. An allusion is a reference, direct or indirect, to a well-known literary, scriptural, or historical event or person. In "Song" from Cymbeline, Shakespeare refers to Phoebus, also known as Phoebus Apollo, the sun god of the ancient Greeks. To better understand the world around them, the Greeks frequently explained a natural, but still mysterious, phenomenon, such as the sun, by equating it with a god. In turn, to better understand this god, they personified him. Thus, both the sun and the god who represented it were humanized. Shakespeare referred to this sun god when he spoke of Phoebus as one who "gins arise" to start the day.
In his effort to involve the reader, the poet often uses imagery; that is, he uses clear, concrete details that appeal to the reader's senses. An image is sometimes defined in literature classes as a "word picture." More exactly, an image is a word or phrase that encourages the reader to hear, touch, smell, taste, and see the poet's subjects.
In "Song" from Cymbeline, Shakespeare helps the reader to see the flowers by showing the shape (cup-shaped) of some and color (golden) of others.
Elizabethan poets frequently used an elaborate and exaggerated image called a conceit. In this figure of speech, the writer makes a comparison between two things that are normally considered very dissimilar.
Poets and songwriters often use alliteration to enhance the sound, and therefore the meaning, of their words. Alliteration is the repetition of the same consonant sound at the beginnings of successive or closely associated words or accented syllables.
In "Song" from Much Ado About Nothing,, Shakespeare used alliteration when he repeated the d sound in the lines:
"Sing no more ditties, sing no moe / Of dumps so dull and heavy . . . "
Shakespeare also used alliteration in "Ariel's Song" from The Tempest.
While reading this song, keep in mind that the elfin sprite, Ariel, is singing a song to Prince Ferdinand to tell him that his father had drowned.
Elizabethan Poetry: Part II; Sonnets
Lyric poetry is highly subjective. It expresses the feeling or attitude of the poet. The sonnet is a specialized type of lyric poetry that was popular in Elizabethan England.
The sonnet had its origin in Italy (the word means "little song" in Italian) where it had been perfected by the poet Petrarch. Introduced into England in the early sixteenth century by Thomas Wyatt and the Earl of Surrey, the sonnet soon became a literary fashion.
Sir Philip Sidney
Sir Philip Sidney (1554-1586) was a poet, critic, scholar, diplomat, courtier, and soldier. He offers an example of the ideal Renaissance man. In 1586, Sidney took part in a military expedition to Holland. He was fatally wounded during a skirmish there, and, according to a traditional story, he offered his own bottle of water to another dying man because he believed the soldier's need was greater than his own. He died as he had lived: as a gentleman.
With his Astrophel and Stella, Sidney sparked the popularity of writing sonnet sequences. He addressed his sequence to Penelope Devereux, but gave his lady a name taken from Greek and Roman literature, as did some of the other Elizabethan poets. "Stella" means star; "Astrophel" means star-lover. Like most other Elizabethan sonnet sequences, Astrophel and Stella focuses on the poet's love for a beautiful woman. The sonnets you will read are two of the 108 sonnets in Astrophel and Stella.
Elizabethan Poetry: Part III; Sonnets
Considered one of the greatest English poets, Edmund Spenser (1552-1599) was the successor to Chaucer in the development of English poetry. After his graduation from Cambridge, Spenser spent four years in the household of the Earl of Leicester, Philip Sidney's uncle and a favorite courtier of Queen Elizabeth. Then he accepted a government assignment in Ireland, where he remained until shortly before his death.
Spenser's most famous work is The Faerie Queene, one of the longest poems in the English language. The "Faerie Queene" of the poem is Gloriana, a symbol of Queen Elizabeth, to whom he dedicated this masterpiece.
The name of Spenser's sonnet sequence is Amoretti ("Little Loves"). Spenser's sonnets sprang from a real, personal love for Elizabeth Boyle, his future wife.
Spenser wrote in a quaint and archaic language; therefore, his poetry is often reprinted in Elizabethan spelling to give a true representation of his style. You should have no trouble understanding the sonnets if you pronounce the words as they are spelled and keep in mind that a u may be used for a v or a w.
The most widely known Elizabethan writer, William Shakespeare (1564-1616), wrote one hundred fifty-four sonnets. These sonnets may be divided into two groups: the first group (Sonnets 1-126) he addressed to a dear friend, a young man of noble birth; the second group (Sonnets 127-152) he addressed to an unkind "Dark Lady" whom the poet loved in spite of her unworthiness. The last two sonnets do not belong to either group. Although friendship and love are the general themes, the sonnets offer a wide range of related themes and moods. Shakespeare frequently considered the destruction caused by the passing of time and the poet's power to immortalize beauty and love in his verse.
John Donne (1572-1631) is a poet whose work clearly shows the changes in his own approach to life. His early poetry reflected his youthful quest for worldly pleasures. His later writings reflected his own search for spiritual satisfaction.
Unlike the sonnets of earlier Renaissance poets, Donne's sonnets offer an exercise in religious meditation, and as such, help form a bridge between the poetry of the Renaissance and that of the seventeenth century.
Theatre in England
When you attend a play today, whether a class play or a professional production of Hamlet, you are probably comfortable in the familiar theater environment. You sit in a rectangular auditorium, and the actors perform on a picture-frame stage that is recessed into the wall and hidden from view until the play begins. Before the curtain rises, you read the program and become familiar with the characters, the actors, the acts, and the scenes. During the performance, you are aware not only of the onstage actions and words but also of costumes, scenery, and shades and strengths of lighting. At the play's end, you applaud in appreciation of the entertainment you have enjoyed.
The theater that you attend today is unlike the earliest English dramatic presentations, yet it is the result of a slow, and generally logical, growth of English drama.
The origins of English drama are the church plays of the Middle Ages. These plays were a part or an extension of the church service. Their original purpose was to help the uneducated congregation understand the Latin masses (i.e., the readings of scripture were acted out so that those who could not understand the language could still get the meaning). The lines of the plays, in their earliest form, were chanted or sung. Eventually, the plays became so elaborate and so humorous that they were no longer appropriate for this setting and were, therefore, transferred outdoors. Once outside, both the plays and the audiences lost more and more of their formality. Latin gave way to the native language. Plays grew more humorous and lighthearted, and audiences became more rambunctious; eventually, churches forbade the productions.
The plays found new support from the town authorities who used the trade guilds as dramatic companies. Guildsmen provided money for costumes, stage properties, and actors' wages. The plays came to be called mystery plays. Although secularized in production, the plays were based on the Bible. The plays were performed in a cycle, a series of short plays that formed one long narrative.
Sometimes the individual plays within the cycle were performed on fixed stages or stations, and the crowds moved from station to station to see the entire cycle. Usually, however, individual plays were performed on separate wagons that moved to the spectators who were gathered at various predetermined locations in the city. Moving in succession, the wagons brought the entire cycle to the waiting crowds. These wagons were called pageants. The design of a pageant usually reflected its purpose and relationship to a specific guild.
Another dramatic form of medieval England was the miracle play. Although similar to the mystery plays, the miracle plays were based on the lives of the saints.
Developed in the Late Middle Ages, the morality play was a dramatized allegory. In it, abstract virtues and vices—like mercy and shame—were personified. The most famous morality play is Everyman, in which Everyman, who represents all people, receives a summons from Death.
The interlude is a dramatic form that is considered a transition from medieval morality plays to Elizabethan drama. The original definition of an interlude is unknown, but it is believed to have begun during the reign of Henry VIII as a brief skit between the courses of a banquet. The word ultimately suggested a play brief enough to be presented between events of a dramatic performance, entertainment, or feast.
Court interludes were realistic and humorous and intended primarily to amuse. John Heywood was the best-known writer of interludes; his most famous one is The Four P's. This interlude presents a debate among a Palmer, a Pardoner, and a Pothecary. A Pedlar acts as a judge to determine who can tell the biggest lie. The Palmer wins when he claims that he never saw a woman lose her temper. (A palmer was a type of religious pilgrim; a pardoner sold indulgences; a pothecary was an early pharmacist; and a pedlar, now spelled peddler, was a traveling salesperson.)
Some of the interludes both developed from and resembled morality plays. The purpose of these educational interludes was to teach a moral.
Everyday details and a realistic approach were characteristics of the interlude. Although following the allegorical pattern of the morality plays, the interlude began the move away from personifications of abstractions toward the portrayal of individual characters. Most importantly, the court interludes strongly suggested that comic elements that simply amuse, without instructing, have a recognizable value of their own.
The first Elizabethan playhouses resembled the inn yards. These early theaters were eight-sided buildings with an unroofed yard in front of the stage and two or three tiers of covered galleries lining the walls. The "groundlings," who paid one penny, occupied the open yard. Known also as the "stinkards," the groundlings were a loud, raucous bunch. The more sophisticated people could spend another penny and get a seat in one of the galleries.
The Elizabethan playhouses transformed the crude platform of the inn yards into a permanent, three-sided stage that jutted almost halfway out into the theater. This physical closeness between the actors and their audience encouraged the audience to get emotionally involved in the action rather than to just sit and watch. In fact, the groundlings sometimes got physically involved. Depending on their response to specific actions or characters, the groundlings would hiss, boo, applaud, throw vegetables and fruits, or even jump onto the edge of the stage. An Elizabethan audience was a lively, but attentive, one.
The Elizabethan stage had no front curtain to mark the ends of acts and scenes. Nor did playgoers receive printed programs that outlined the time and place of each scene. The stage was usually bare. The scenes and setting depended primarily on the playwright's descriptions—through the words of his characters—and the audience's imagination. A curtained area at the rear of the stage served as a tomb, a tent, or any form of inner room or secluded area. Balconies, one directly above the curtained area and others on the sides of the stage, represented elevated places. A trapdoor in the stage provided an entrance for ghosts and evil spirits from the underworld. A similar trapdoor in the canopy over the stage, called the "heavens," provided a way to lower angels and good spirits on ropes. Neither the stage nor the theater used any artificial lighting. Depending entirely on natural light, the plays had to be performed in the afternoon.
The first English playhouse was built in 1576 by the Elizabethan actor James Burbage. It was simply called "The Theatre." Most people associate William Shakespeare with the playhouse named the Globe. Built in 1599, the Globe was the most impressive theater of its time. Shakespeare's greatest plays, including Hamlet, Macbeth, and King Lear, were first performed in the Globe.
Women had no part in Elizabethan plays. Boys, frequently choirboys, performed all the women's roles. Some of these youths were excellent on stage, and the Elizabethan audience was seemingly content with the absence of actresses.
Actors, popular though they were, did not have a favorable reputation. Classified by some local laws as vagabonds, actors had to seek means to protect themselves from punishments imposed on such persons. Some local authorities also persisted in efforts to close down the theaters. Thus, the actors formed groups or companies that were organized under the patronage of some member of the nobility who could and would protect their interests. In spite of their low social standing, actors provided one of the favorite forms of entertainment at court. To perform at court by royal command was one of the greatest honors a company could enjoy.
Although the scenery of an Elizabethan theater was scanty, costuming was plentiful and extravagant. Because the jutting stage brought the actor physically so close to his audience, he wore the best and most colorful of materials.
Appropriate costuming helps the audience to identify characters. Elizabethan costuming, however, was frequently anachronistic. The acting companies were not overly concerned about duplicating historical dress. In an Elizabethan production of Julius Caesar, Antony and Brutus may have appeared in Roman robes, or, more likely, could have worn the clothes of Renaissance noblemen. Such incongruities did not disturb the audience.
The characters and plots of Elizabethan plays reflected the Renaissance belief that individual human beings were appropriate and exciting subjects for close examination. The Elizabethan plays were not copies of the medieval personifications of a single personality trait, such as justice or greed, involved in a single, simplified conflict for the possession of souls. Instead, Elizabethan drama discarded the one-dimensional characters of morality plays and portrayed real-life men and women involved in real-life conflicts. The Elizabethan audiences saw characters who were strong one minute and weak the next; they saw characters who struggled to learn the significance of their existence. The Elizabethan audiences saw people on stage who were much like themselves. This fascinating approach to drama did not lack for writers. Many playwrights were eager to create lifelike characters. One of the finest, and definitely the best known, of these playwrights is William Shakespeare.
The Elizabethan theater was not long-lived. In the middle of the seventeenth century, all playhouses were closed by demand of the Puritans. When they reopened at the Restoration in 1660, they were drastically changed. The building itself was rectangular and roofed. The audiences saw for the first time artificial lighting, movable scenery, and women on stage. The stage no longer jutted into the audience area but instead receded into the wall. This new theater more closely resembled the theater of today than it did that of the Elizabethan Age.
In a famous essay on drama, John Dryden, a seventeenth-century English poet, dramatist, and critic, said this of Shakespeare:
He was the man who of all modern, and perhaps ancient poets, had the largest and most comprehensive soul. All the images of nature were still [always] present to him, and he drew them not laboriously, but luckily; when he describes anything, you more than see it, you feel it too.
To make the reader feel is the highest goal of the creative writer. Only a person who knows and understands the human spirit can achieve it.
Shakespeare is probably the most widely read author in all English literature. Very little is known about him. The most reliable facts about Shakespeare are the recorded dates of important events in his life.
Shakespeare was born in 1564 at Stratford-on-Avon. His birth date is not known but is assumed to be April 23. He was the son of John Shakespeare, a glove-maker and tradesman, and Mary Arden, a woman of good background.
Presumably, Shakespeare attended the well-esteemed local grammar school, where he learned Latin and some Greek and read the works of Latin playwrights.
At eighteen, Shakespeare married Anne Hathaway, who was seven or eight years older than he. Their first child, Susanna, was born in 1583. Two years later, twins Hamnet and Judith were born. Hamnet died in childhood.
No record exists of Shakespeare's activities between 1585 and 1592. Some legends say that he taught school. Very probably, he left Stratford in 1585 and went to London, perhaps to begin his apprenticeship as an actor. By 1592, he was established enough in London to be the target of a public, written attack by a seemingly jealous playwright, Robert Greene.
By 1594, Shakespeare was both an actor and playwright with the company known as the Lord Chamberlain's Men. Popular at Elizabeth's court, the Lord Chamberlain's Men outlived the Queen and became known as the King's Men under the patronage of her successor, James I. Shakespeare was a shareholder in the Lord Chamberlain's Company and, therefore, in the profits of its successful theater, the Globe. In addition, he made money—though not much—as an actor and from the sale of his plays to the company. By 1597, Shakespeare was wealthy enough to purchase New Place, the second largest house in Stratford. While Shakespeare's wife and children lived in New Place, Shakespeare continued to work (write, memorize, rehearse, act, manage the theater, and train actors) in London. Shakespeare's father died in 1601, the same year in which Hamlet was written. His mother died in 1607. Susanna was married in 1607 and Judith in 1616. In 1611, Shakespeare permanently retired to Stratford, though he continued to visit London. On March 25, 1616, he wrote his will. William Shakespeare died on April 23, 1616. His wife, Anne, died on August 6, 1623.
Scholars generally agree that Shakespeare wrote 38 plays. The three broad categories for his plays are comedies, histories, and tragedies. Comedies appear throughout his career. The earlier comedies, such as A Midsummer Night's Dream and Love's Labour's Lost, are light-hearted and filled with elaborate word play. The later comedies, such as All's Well That Ends Well and Measure for Measure, are known as his "dark comedies." Shakespeare's history, or chronicle, plays were about the past kings of England—Henry IV, V, and VI; Richard II and III; and King John—and were written in the last decade of the sixteenth century, when patriotic interest in the past was high. The period of Shakespeare's tragedies began with Julius Caesar, written at the turn of the century. The first decade of the seventeenth century saw the full flourish of his great tragedies: Hamlet, Othello, King Lear, Macbeth, and Antony and Cleopatra. Romeo and Juliet, the major tragedy with which you may be most familiar, was written earlier, in 1596.
The last few of Shakespeare's plays, probably written at Stratford, are sometimes described as comedies but possess qualities of both tragedy and comedy. They are lighthearted and yet serious, imaginatively fanciful and yet thoughtfully symbolic. These plays include Pericles, Cymbeline, The Winter's Tale, and The Tempest. Shakespeare's last play was a history play, Henry VIII, first performed in 1613. Scholars generally consider it an inferior work and have some doubt that it was actually his.
Enthusiastically interested in people, sharply aware of their thoughts and motivations, and expert at writing words that made them truly alive, Shakespeare was a playwright for people of all times. Although this universal appeal has won Shakespeare fame as a dramatist, Shakespeare was also a master poet.
Shakespeare's first two published poems were long narratives: Venus and Adonis and Lucrece. He wrote these poems between June 1592 and April 1594, when London theaters were closed because of the plague. Between 1593 and 1601, Shakespeare composed his series of 154 sonnets.
Other publications attributed to Shakespeare are The Passionate Pilgrim, a volume of twenty poems; The Phoenix and the Turtle, a 67-line poem; and A Lover's Complaint, a 329-line poem. Some scholars maintain that Shakespeare wrote only five of the twenty Pilgrim poems, and not all agree that A Lover's Complaint is Shakespeare's work.
Students are sometimes surprised to learn that Shakespeare's language is classified as Modern English. They often feel that his language is almost foreign. Indeed, reading Shakespeare is not necessarily easy, nor should it be done rapidly. A close examination, however, will show that although his vocabulary is sometimes puzzling, his grammar and spelling differ very little from their twentieth-century counterparts. You do not need a translation to read Shakespeare. You would need a translation to read Old English (Anglo-Saxon). Although you may be able to grasp Middle English (Medieval English), you would probably find a translation a welcome relief.
The emergence of an increasingly standardized English language marks the beginning of the Modern English Period (about 1500). Before the process of standardization, the transmission of language depended on oral exchange of various dialects and the time-consuming copying of manuscripts. As a result, language suffered from unavoidable inconsistencies. One of the major contributions toward a uniform English language was the introduction of the printing press by William Caxton in 1476. By Shakespeare's time, movable type made possible the relatively rapid circulation of thousands of identical copies of a book or play. Equally significant, opportunity for education had spread, which allowed more people to read the books that were available.
The fact that large numbers of people could see a consistency in written language and the fact that Caxton and other London printers employed only one of the many dialects in their publications helped to standardize the previous hodgepodge of spelling, grammar, and vocabulary. At the same time, improved transportation brought more people to the major cities. This migration brought about more standardized pronunciations. Many people, too, were aware that dialect could be a social class barrier and began to cultivate the refined speech of the upper class.
While the language of the Elizabethan Age was becoming less flexible and confusing, it was also expanding. During the Renaissance, thousands of words were introduced through the works of scholars and other writers. Many of these words were borrowed from other languages; some were simply created. Many of the borrowed words were from Greek and Latin, a logical result of the Renaissance interest in the ancient classics. Likewise, words borrowed from a variety of places around the world reflect the Renaissance desire for intellectual and geographical expansion and its resulting exchange of words and ideas.
Although you will not need a translation to read Shakespeare, you will need to pay careful attention to all footnotes and notations given in the edition you choose. Since Shakespeare's day, new words have entered the language; others have become obsolete. (If you looked up these obsolete words in today's dictionary, you would find the word archaic or the abbreviation Obs. in brackets to indicate that these words are no longer used.) Finally, some words remain, but have changed their meaning.
Hamlet: Act I, i-ii
Hamlet is one of William Shakespeare's most famous plays. In fact, you may be more familiar with it than you realize. If you have ever said, "Something's rotten in Denmark" or "He has a method in his madness" you were actually paraphrasing lines from Hamlet. Numerous quotations from the play provide the source for popular clichés today.
The complete title of the play is The Tragedy of Hamlet, Prince of Denmark. Different ages have had different interpretations of the term tragedy, but most agree on the basic definition that a tragedy is a serious play with an unhappy ending. This definition may be extended to include three characteristics. First, a tragedy ends in an unhappy catastrophe, usually with the death or some other form of destruction of the hero or heroine. Second, this final disaster is neither contrived nor an accident; instead it is the inevitable result of previous events and conflicts. Finally, this entire account of conflicts culminating in catastrophe must be regarded by both the playwright and the audience as significant and serious.
The purpose of tragedy, however, is not to depress the audience. On the contrary, tragedy intends to arouse the emotions of pity and fear and therefore to produce in the audience a catharsis, or purgation, of these emotions. The problems examined in a tragedy are universal ones—they are problems of all people, everywhere. As members of the audience feel with the suffering hero and share in his fear, they become emptied of pity and fear; at the play's end, they should be ready to begin anew their own lives with a sense of calm, a sense of emotional understanding.
During the Elizabethan Age, the theory of tragedy emphasized the idea that the downfall of the hero was caused by a personal error or character flaw. This error could be the result of bad judgment, inherited weakness, or many other causes. Ultimately, therefore, the catastrophe resulted not from the hero's intentional wrongdoing or even from forces outside himself, but from a flaw in his character. This flaw is called the tragic flaw. As you read Hamlet, you will be able to develop more fully a definition of Shakespeare's presentation of tragedy.
Hamlet is the son and namesake of a medieval king of Denmark. When the play opens, Hamlet's father, King Hamlet, is dead. The new king is not young Hamlet but rather the dead king's brother, Claudius. The setting of the first scene is the guard post in front of the king's castle at Elsinore, a Danish seaport.
In order to help you understand the text better, look up the following words and keep them in mind while reading Act I, scene i: apparition, assail, portentous, and invulnerable.
The medieval characters in this play believe in ghosts, as did many people in Shakespeare's audience. A ghost, however, always presented a problem. The persons who saw it had to determine its nature and purpose. Some accepted theories were that a ghost may be simply a trick of one's mind; that it may be a spirit returning to complete a task left unfinished at death; that it may be a blessed spirit who returns with divine permission; or that it may be an evil spirit—a devil—returning in the form of a person already dead.
The setting of the next scene (Act I, scene ii) is the following day in one of the staterooms of Elsinore Castle. The scene opens with Claudius's speech to members of his court in which he touches on issues related to Gertrude, Fortinbras, Laertes, and Hamlet.
Medieval church law forbade a man to marry his dead brother's wife. In Act I, scene ii, therefore, Claudius must explain his actions. Indeed, he delivers a fine speech to convince his court that his hasty marriage to Gertrude was motivated by a sense of public duty. Hamlet, however, remains unconvinced and unconsoled. He is disgusted and disillusioned by the marriage.
Shakespeare used several dramatic techniques in this scene to give the audience or reader an insight into Hamlet's general state of mind. One of them is the pun. A pun is a play on words that sound alike but have different meanings. Frequently when people pun today, they are trying to be funny; and perhaps just as frequently, "the audience" regards the pun as only slightly humorous, or less. For example, if a sign saying, "alms for the pour" were attached to a small saucer and strategically placed next to the coffee pot in the faculty lounge, probably more than one teacher would groan, and not just because of the coins he or she should drop in the saucer to pay for a cup of coffee.
The Elizabethans, however, used puns for serious as well as comic purposes. When Claudius calls Hamlet "my cousin" and "my son," Hamlet responds with "A little more than kin, and less than kind." He plays on the word kin, which means relatives or family; in Shakespeare's day "kind" could mean kinship. Hamlet bitterly implies that he and Claudius are more than kin for they are doubly related—nephew/uncle and stepson/stepfather—but they do not share the love and kindness that normally exists in kinship. Claudius did not hear this pun because Hamlet said it in an aside; that is, Hamlet turned his head to the audience when he spoke so that his words were heard only by the audience and not by the characters on stage.
Another method that Shakespeare used to probe Hamlet's thoughts is a soliloquy, a technique in which a character talks aloud to himself while he is alone, or believes he is alone, on stage. Hamlet's soliloquy in scene ii begins with the line "O that this too too solid flesh would melt," and continues to reveal his innermost feelings. Because Hamlet's flesh will not melt, he wishes God had not "fixed his canon [law] 'gainst self-slaughter [suicide]!"
Hamlet: Act I, iii-v
The next scene—Act I, scene iii—takes place on the same day in Polonius's quarters of Elsinore Castle. This scene is full of "advice." Laertes gives advice to Ophelia. Polonius gives advice to both of his children.
In order to help you understand the text better, look up the following words and keep them in mind while reading Act I, scene iii: libertine, precept, and beguile.
This third scene of Act I gives a clearer insight into the character of Polonius. Although his pompous performance is humorous, the effect of his actions is far from funny. Polonius is a conceited, foolish, old man.
In the next scene, Act I, scene iv, Hamlet, Horatio, and Marcellus meet at midnight at the guard's post to wait for the Ghost. In scene v, the Ghost leads Hamlet away from the others and speaks to him.
In scene iv, Hamlet sees the Ghost for the first time and expresses his uncertainty about whether it is a good or evil spirit. It may be a "spirit of health or goblin damned," it may bring "airs from heaven or blasts from hell," its intents may be "wicked or charitable." Hamlet, however, does follow the Ghost. When they are alone, in scene v, the Ghost reveals that it is the true spirit of Hamlet's father. Hamlet is eager to believe the Ghost but is troubled by a suspicion that it may have come from hell.
The first four scenes of Act I of Hamlet provide the exposition, or introductory material, of the tragedy. These scenes give the time and place of the action, introduce characters and situations, set the tone, and give other information necessary to an understanding of the play.
Scene v contains the exciting force, the incident that starts the conflict, or struggle, between two opposing interests. In scene v, the exciting force is the Ghost's appearance to Hamlet. Hereafter, Hamlet and Claudius will be in direct opposition. The major conflict of this tragedy, therefore, is set into action.
Hamlet: Act II
The second act develops the major conflict between Hamlet and Claudius as well as other minor, but important, conflicts. The first scene of this act provides an even better insight into Polonius's personality.
In order to help you understand the text better, look up the following word and keep it in mind while reading Act II, scene i: doublet.
In this first scene of Act II, Polonius tells Reynaldo to speak with other Danes at the University in Paris and to imply that Laertes is leading a wild and immoral life. When Reynaldo says that this implication will discredit Laertes, Polonius explains that this method is an excellent way to find out the truth. The Danes' agreement or disagreement with Reynaldo's suggestions about Laertes will be an indication of Laertes' actual behavior. Polonius sums up this method when he says, "And thus do we of wisdom and of reach... By indirections find directions out."
At the opening of the next scene (Act II, scene ii) both Claudius and Gertrude, for differing reasons, express concern over the "too-much-changed" Hamlet.
In order to help you understand the text better, look up the following words and keep them in mind while reading Act II, scene ii: glean, expostulate, arras, and visage.
Polonius's speeches to the King and Queen in the scene you just read reveal more of his pompousness and conceit. He says that "brevity is the soul of wit [wisdom]," but he goes on to be anything but brief. He seemingly likes to hear himself talk. The Queen, who is truly disturbed by Hamlet's peculiar behavior, interrupts Polonius and asks for "more matter [information] with less art [fancy talk]."
This scene offers evidence that Hamlet is not actually mad. He is, instead, putting on the "antic" disposition he spoke of in Act I.
When Polonius is alone with Hamlet, he tries to speak with him, but only becomes more convinced of Hamlet's mental imbalance and his own (Polonius's) explanation of it. Actually, everything Hamlet says to Polonius is an intentional and satirical ridicule of him. Even Polonius admits, "Though this be madness, yet there is method [sense; logic] in't." When Polonius exits, Hamlet exclaims, "These tedious old fools!"
Hamlet's conversation with Rosencrantz and Guildenstern, likewise, indicates that he is in control of what he says and does. In fact, Hamlet, although he does not trust them, tells them, "I am but mad north-north-west. When the wind is southerly I know a hawk from a handsaw." These lines are usually interpreted to mean that Hamlet is only mad when he wants to be.
Act II provides most of the rising action of the tragedy. In the rising action, the hero usually gains ascendancy, or domination, over his opponent. In Act II, scene ii, Hamlet is in ascendancy because he knows the King is a murderer and could expose him. On the other hand, Claudius, though cautious, is not certain of what or how much Hamlet knows.
Conflict, as stated earlier, is the struggle or tension between two forces. One of those forces is usually human or, if it is an animal, thing, or abstraction, is treated as a person. The opposing force may be another person, nature, society, or the person himself. If the opposing force is the character himself, the conflict is internal. If the opposing force is outside the character, it is external.
Seldom does a tragedy build on only one conflict. Certainly, Shakespeare relied on more than one. His insight into human nature could not allow him to do otherwise. Life, as most people live it, is a complex of interacting conflicts. Thus, Shakespearean tragedies interweave several conflicts throughout, each in some way related to the major one.
Hamlet: Act III
In this act, the tension between Hamlet and Claudius will reach a turning point. As scene i opens, Claudius reveals his uneasiness about Hamlet and his actions.
In this first scene of Act III, Claudius admits his guilt to the audience. When Polonius prepares Ophelia for her meeting with Hamlet, he tells her to pretend to be reading a book of devotions. Polonius then adds, "We are oft to blame in this... that with devotion's visage [appearance] / And pious action we do sugar o'er [hide] / the devil himself." Claudius, in an aside, exclaims, "O, 'tis too true! / How smart a lash that speech doth give my conscience!" The King then briefly speaks of the ugliness of his "deed," and concludes, "O heavy burden!"
Hamlet delivers his third soliloquy in this scene. In the first soliloquy, he was depressed and lamented his religious opposition to "self-slaughter." In the second one, he determinedly decided to take action that would lead to the revenge of his father's death. In this third soliloquy, Hamlet is once again melancholy. He begins, "To be, or not to be, that is the question... " and goes on to contemplate seriously the possibility of suicide. He cannot, however, shake his conscience. He knows suicide is against his religion, so he cannot end his life. At the end of this soliloquy, Hamlet says, " ...the native hue of resolution / Is sicklied o'er with the pale cast of thought." Perhaps Hamlet is trying to find one more excuse to explain why he has not yet obeyed the Ghost's command to take revenge. If so, his excuse this time would be that he thinks too much and never gets around to taking action.
In his conversation with Ophelia, Hamlet seemingly plays the part of the madman again. He insults the innocent Ophelia and verbally attacks women in general. Obviously, he cannot forget nor, seemingly, forgive his mother for her unfaithfulness to his father. In Act I, he had declared, "Frailty, thy name is woman!" He still believes that. Shakespeare does not indicate whether Hamlet knows that Claudius and Polonius are eavesdropping on his conversation with Ophelia.
In the next scene (Act III, scene ii), the play is presented and proves to be "the thing" wherein Hamlet catches the "conscience of the King."
This scene reveals more of Hamlet's attitudes toward his friends. Early in the scene, he praises Horatio for his even temperament. Unlike Hamlet, Horatio can accept "Fortune's buffets and rewards" with emotional balance and self-control. Hamlet admires him for this even temperament. Hamlet shows complete trust in Horatio by telling him of his father's murder and by asking him to watch the King during the play. Hamlet expresses contempt for Rosencrantz and Guildenstern, his so-called friends. He knows that they are loyal to Claudius and not to him. After they ask him, once again, the cause of his "distemper," Hamlet tries to make Guildenstern play the recorder, a flute-like instrument. When Guildenstern insists that he cannot play, Hamlet asks him why he tries to play upon Hamlet and sound him out (find out his secrets) when he (Guildenstern) cannot even play the simple recorder.
In the next scene (Act III, scene iii), Hamlet finally gets the chance he has been waiting for, but he does not use it.
Although not all agree, a significant number of drama critics believe that the climax of Hamlet occurs in the scene you just read. The climax is the turning point of the play. Prior to that point, the rising action has developed the conflicts (and subsequent suspense) between the hero and his opposing force or forces. During the rising action, the hero usually has the upper hand over the major opponent; he is in ascendancy. At the climax, however, the roles reverse. At the climax, the character who has been in ascendancy (again, usually the hero), gives or loses his control and domination to the opposing force. From then on, the action no longer rises but falls to the inevitable final disaster.
The climax of Hamlet, according to many critics, occurs when Hamlet, the hero, does not kill Claudius. Hamlet fails to take advantage of the opportunity to accomplish that which has been his goal since he saw the Ghost at the end of Act I. Now, because of the play that Hamlet had arranged, Claudius knows that Hamlet knows too much. Claudius will take definite action against Hamlet. Hamlet has lost control to his opponent, Claudius.
In the next scene (Act III, scene iv), Polonius, the court busybody, performs his last act of spying.
This scene begins the falling action of the tragedy. The falling action follows the climax, portrays the various stages in the hero's downfall, and emphasizes the activities of his opposing forces. Although the falling action still creates some suspense, its main purpose is to lead logically to the inevitable final disaster. The falling action is usually shorter than the rising action. This downward movement is started either by the climax itself or by a closely following and related event called the tragic force. In Hamlet, the stabbing of Polonius is often considered to be the tragic force. It occurs almost immediately after the climax (Hamlet's failure to use the opportunity to kill Claudius). Hamlet believes that he has stabbed Claudius, but that thought is a mistake and the first of several misfortunes that culminate in the final disaster. By killing Polonius, Hamlet provides Claudius with an excuse to get rid of Hamlet.
Hamlet: Act IV
This act begins with a series of four short scenes that continue the falling action of the play. The first three take place within the castle and concern the aftermath of Polonius's death; the fourth is on an open field and concerns Fortinbras's passage through Denmark.
Polonius's death frightens Claudius. The King, who speaks in the plural we and us, says, "It had been so with us, had we been there." He knows that Hamlet had thought the person he was stabbing was the King and fears that Hamlet will make no mistake in a future attempt at murder. Claudius also knows that he will have to account for Hamlet's actions and Polonius's death. Claudius must put the blame on Hamlet and must free himself permanently of Hamlet. His plan, therefore, is to tell Hamlet that, for his own safety, he must leave immediately for England with Rosencrantz and Guildenstern as escorts. The letters, however, that the escorts will carry ask the king of England to have Hamlet killed (presumably on the grounds that he is a threat to the safety of England). Hamlet agrees to go but, of course, does not believe Claudius's expressions of concern and goodwill.
The next scene (Act IV, scene v) continues the falling action and focuses attention on Polonius's children.
Claudius finds himself with a lot of explaining to do. Because Claudius buried Polonius secretly and unceremoniously, the Danish people are suspicious about the death and want Laertes to be king. Laertes blames Claudius for Polonius's death and is determined to take revenge.
The next two scenes (Act IV, scenes vi and vii) continue the falling action with the death of one more person and plans for the murder of another.
Ophelia seems to have been an innocent victim of the poor judgment and mistakes of the three men she loved. Her brother insisted that Hamlet did not love her and that, even if he did, he would never marry her. Her father forbade her, without any previous consideration of the matter, to see Hamlet, the man she loved. Hamlet speaks viciously to her on occasion, rejects her, and unintentionally kills her father. All three men have contributed to the circumstances that led to her madness and death.
Hamlet: Act V
The final act opens with two "clowns," a word which in Shakespeare's day could mean a peasant or a rustic. These two men are gravediggers who are preparing Ophelia's grave in the churchyard. Their conversations provide a few moments of relief in the tension of the falling action.
Comic relief often is inserted into the falling action of a tragedy. Comic relief is a humorous scene, incident, or speech intended to provide emotional relaxation for the audience and, at the same time, to heighten the seriousness of the story. Those two purposes may seem contradictory, but they are not. The gravediggers' joking, punning, riddle-telling, and responding to Hamlet provide a few light moments between the previous scene (in which Ophelia dies and Hamlet's death is planned) and the scene to follow (in which the play ends with many deaths). The subject of the verbal fun, however, is usually death, a serious subject and one that is appropriate for the last act of this play.
The next scene (Act V, scene ii) is the final scene of Hamlet. As it opens, Hamlet is telling Horatio that, while traveling to England with Rosencrantz and Guildenstern, he opened the "grand commission" (the sealed letter to the King of England) and discovered that it contained orders for Hamlet to be beheaded.
This final scene is the catastrophe, the conclusion, of the play. The catastrophe of a tragedy presents the final disaster that usually is the death of the hero and frequently that of his opponents as well. The final failure, or disaster, at least in retrospect, is neither a shock nor a surprise ending. It is, rather, a logical result of the previous activity in the tragedy.
Sometimes the last part of the falling action offers an event that postpones the catastrophe and seemingly provides an escape for the hero. This delay is called the "moment of final suspense" and helps maintain audience interest until the end. The delay, however, is only a "moment"; the tragedy must continue the downward motion to the inevitable, tragic conclusion.
Hamlet: An Overview
Plot may be simply defined as the "planned pattern of events in a narrative." This definition implies three things:
Only a narrative, or story-telling literature, has a plot.
The events, or happenings, do not "just happen," but are planned and predetermined by the author.
The events are related to each other and fit together to form a logical sequence of action.
As you have already seen, the structure of the plot of Hamlet, as well as that of most genuine tragedies, may be divided into five parts: exposition, rising action, climax, falling action, and catastrophe. Each part develops or otherwise relates to some phase of conflict.
The specific conflicts between Hamlet and Claudius and between Hamlet and himself are often interpreted as part of a larger conflict between man and the forces beyond his control. These forces may be called fate, fortune, chance, accident, or even providence. Hamlet's description of his journey to England offers evidence of the role such forces play in determining his future. Unable to sleep one night aboard the ship, Hamlet groped about "in the dark" and found the letter requesting his death. By chance, he was carrying his father's signet, so he could put the royal seal on the new letter he wrote. Fortunately, Hamlet only became a prisoner of the pirates, men who normally were not noted for their kindness. Hamlet's pirates, however, dealt with him "like thieves of mercy" in exchange for a favor he would do for them.
Hamlet first suggests his awareness of the significance of forces beyond his control in Act I, scene iv, when he declares that, by chance, some people are born with a "vicious mole of nature" (natural fault). These people, he comments, are "not guilty," because a person has no control over his origin.
The gravediggers also demonstrate an acknowledgment of fate. Their job is neither a pleasant nor an enviable one; however, they are aware that within the social structure of their day people like themselves may not be able to find another job. The gravediggers, therefore, accept their job and, of course, death itself, and joke about it. In essence, they accept their fate.
By the last scene of the play, Hamlet also accepts the fact that he alone does not determine or control his destiny. He indicates this in a now famous quote: "There's a divinity that shapes our ends..."
One of the characteristics of a Shakespearean tragedy is that often a major character in the tragedy is motivated by revenge. Although the idea of revenge is unacceptable in the twentieth century, it was considered a necessary way of life in early medieval times. Without the benefits of trial by jury, the earlier medieval societies established their own codes to ensure a type of justice. Avenging the death of a family member was not only permitted but was, in fact, a duty.
What difficulties of the seventeenth and eighteenth centuries might seem familiar to us today? Political corruption and struggles for power were even more common than now. Wars were being waged, often for economic purposes, just as happens in modern times. Cities were becoming industrialized, and the displaced poor were flocking to those cities to find work—and to live in slums, just as many inner-city people do today. Then as now, trade was flourishing, but so were the corrupting attitudes that often accompany wealth. Much of the newly educated reading public lacked a knowledge and appreciation of Greek and Roman literature, which encouraged the publication of rapidly written periodicals. Many people today, likewise, prefer reading the latest celebrity magazines and papers to the "good" literature of our day. Newly built smokestacks of industry were beginning to produce black clouds of pollution. Our world still struggles with the problems of pollution. Changes were happening so rapidly that many people felt the same fear of the future that many people feel today. In short, more people were gaining more power and often were not certain what to do with their newly acquired political and economic strength.
The writers of the best literature of those two centuries were involved in their times. They did not withdraw from their responsibilities. They wrote poetry, essays, and longer works specifically to inform the public of the changes taking place and to persuade it to do something about those changes. John Milton wrote essays to support the actions of the Puritan government. He wrote fewer political works after the king's restoration. Milton also used his work to explore the duality of good and evil in the world. Both Paradise Lost and Areopagitica are concerned with this theme. Writing somewhat later, Jonathan Swift chose satire to belittle individuals and practices that represented to him political, moral, and cultural decay. He had been actively involved in his political party's government, but was removed from that position by the opposition. Finally, Oliver Goldsmith satirized the greed and foolish political and personal practices of his day, but he also described sympathetically the unfortunate results of the agricultural and industrial revolutions taking place. Since these writers had studied classical literature and all had admired its organization and clarity, they desired to write literature that was logically organized and convincingly presented with carefully chosen words. They sought to create beautiful works of art, to please as well as to inform.
The Commonwealth and Earlier
[Chart 1: historical overview]
Commonwealth is the term used to describe the Puritans' control of English government from 1649 until 1660. To understand how the Puritans became powerful enough to gain control of England, you must first understand who the Puritans were. The term Puritan was probably first applied during Elizabethan times to those men, mostly craftsmen and citizens of the flourishing bourgeois group, who believed that the Church of England should be "purified" of unnecessary ritual that was no longer meaningful and of organization that was no longer able to reach individual members. These dissenters resented their government's imposing on them what they considered a corrupt faith. Parish priests of the Church of England were awarded their positions by the owner of the most land in the area. The clergyman's payment came out of parish tax funds and, once established, was automatic. Once a vicar was given a parish, he almost always kept that parish. The overseeing bishops were appointed by the monarch. Thus, by the time of Elizabeth's successor, James I (see Chart 2), seemingly no division existed between church and state. Tax money supported the church, and the king governed it.
[Chart 2: line of succession]
Anglicans, members of the Church of England, feared these Puritans and other dissenters, or Nonconformists, because they rebelled not only against the church but also against the state since church and state were so closely related. Fearful Anglicans made laws to enforce conformity to the Church of England. One such law was responsible for John Bunyan's stay in Bedford jail, influencing his work Pilgrim's Progress. These laws forced Puritans further away from the party of the king.
James I himself widened that division by insisting on his absolute power as king over the powers of Parliament, which contained several Puritan members. James wished to ally England with Roman Catholic Spain, a wish that further angered the Puritans, who felt that the Roman Catholic Church was idolatrous and went against their wishes to purify the church of unnecessary rituals. His son, Charles I, was so eager to control England without Parliament that no Parliament was convened from 1629 to 1640 (see Chart 1). Moreover, Charles clearly preferred Roman Catholic ritual and began to restore it to the English Church. This period was so difficult for the Puritans that nearly twenty thousand emigrated to America. In 1640, when the newly convened Parliament refused to give Charles money to quiet unrest in Scotland, the stage was set for the Civil War, which began in 1642, between the king's forces (sometimes called Cavaliers or Royalists) and the Puritans (also called Roundheads).
Puritans felt justified in defying the king because they disapproved of his desire to insert politics into religion.
In 1645, the Puritans won the Civil War. In 1649, after some Puritan maneuvering in Parliament, Charles I was executed. Thus, in 1649, the Commonwealth began its eleven-year existence. During this period, Parliament was the ruling body until 1653, when the Puritan leader of the Parliamentary forces, Oliver Cromwell, was declared Lord Protector. Oliver Cromwell died in 1658. His son could not prevent an invitation to Charles II to return to England as king. By this time, most English citizens had become tired of the Puritan government's suppressive actions, which included closing theaters by Parliamentary act from 1642 to 1660, beheading the Archbishop of Canterbury, and evicting Anglican clergymen from their parishes. The English were eager to celebrate Charles II's return. Thus in 1660, Charles II was made king, and the English monarchy was restored.
The Restoration of Charles II
The Restoration did not altogether quiet the discontent that had led to civil war. Anglicans still feared Puritan influence; and Puritans, as well as many Anglicans, feared renewed Roman Catholic pressure from the monarchy. Less important uprisings occurred in 1678, 1685, and finally, in 1688. Even though Charles II's Act of Grace had pardoned those Puritans not directly responsible for Charles I's death, nearly two thousand clergy with Puritan leanings left the Church of England in 1661. In 1673, the Test Act forced all officers of the state, both civil and military, to prove their sympathies by taking communion according to the form of the Church of England. Charles I's Roman Catholic preferences had so frightened the English that they readily believed Titus Oates (1649-1705), who invented a "Popish Plot." According to Oates, Roman Catholics planned to assassinate Charles II and other political leaders so that they could place his brother James (later James II), a strong Roman Catholic, on the throne. Memories and resentments of previous Roman Catholic injustices were still fresh: Queen "Bloody" Mary I, daughter of Henry VIII, had burned Protestants at the stake only a century earlier; and the Roman Catholic-inspired Gunpowder Plot (when Guy Fawkes was prepared to blow up the king and Parliament) had happened in 1605. Once again this fear, based on the imaginary "Popish Plot," renewed violence; some thirty-five people were executed for supposed treason.
When James II took the throne in 1685 at his brother's death, he confirmed some of those fears. In 1688, he imprisoned seven bishops of the Church of England in the Tower of London. When his second wife bore a son, many feared that the throne now had an obvious Roman Catholic heir.
Fortunately, English Protestants found a solution without the execution of another king. James II's daughter Mary, who was heiress to the throne, had married William of Orange of Protestant Holland. In 1688, William was quickly invited to England to ensure a Protestant succession. This turn of events caused James and many of his followers, known as Jacobites, to flee to France. William and Mary's acceptance of the throne was known as "The Glorious Revolution." At that time, Parliament was given the power to determine the succession to the throne. That "revolution" provided for political and religious toleration and thus brought government reform agreeable to the English majority.
Glorious Revolution to the 1750s
When William and Mary were invited to England, Parliament became more powerful. Two political parties, the Tories and Whigs, emerged to struggle for control of Parliament during William's reign. The Tories' ancestors were, supposedly, the Royalists of the earlier seventeenth century. The Whigs' ancestors had been anti-Royalist. The Tories supported the present order of the church and state and were mainly landowners and lower-level clergymen. Whigs usually supported commerce, religious tolerance, and Parliamentary reform. These parties, however, were hardly like today's parties; they were more like groups of politicians allied to promote common interests.
William III reigned jointly with Mary II until 1694 (when Mary died of smallpox) and as sole ruler until 1702. His reign was marked by military matters, a characteristic the Tories were quick to criticize. He quieted Jacobite uprisings in Scotland, subdued Ireland, and conducted a continental war against France to stop her influence and control. William was not popular with the Tories because of his connections with Holland. The Dutch were seen by the English as money-grabbing merchants. The Tory Jonathan Swift satirized the Dutch in "Book Three" of Gulliver's Travels by portraying their merchants stomping on a crucifix to persuade the Japanese to trade with them.
William died after a fall from his horse in 1702. Anne, William's sister-in-law, then became Queen, reigning until 1714. The Whigs remained in power and continued military activities to boost the economy. The Tories continued to complain until 1710, when they came into power and finally brought the war with France to an end. Jonathan Swift became their chief propagandist. These years, however, were not calm.
Roman Catholics were still feared in spite of the Toleration Act of 1689, which permitted Protestant dissenters to hold their own services instead of attending those of the Church of England. After Anne's death in 1714, the crown went to George I of Hanover, a small kingdom that later became part of Germany (see Chart 2). The Hanover kings, who ruled until 1820, were criticized for their preference for the German language over English, their preferences in music and in unimportant scholarly matters, and their controversial personal lives. Yet they did bring stability to the throne while tremendous social and economic changes swept the country.
1750s and Following
The 1750s began a period of rapid changes brought on by industrialization, shifting social classes, and continuing expansion of the British Empire. One such series of changes has sometimes been called the "agricultural revolution," although that title is probably an overstatement. It was driven by landowners who were still suffering financially from the civil war. They decided to reorganize their land and buy more land to make their farming more efficient. They then enclosed the land for their own use, a move carried out under the Enclosure Acts, and consequently prevented small farmers and squatters from using the land that had once supported them. These landowners began to develop better farming methods, such as the rotation of crops and the draining of marshes, and invented improved farm machinery, but in so doing displaced many of the rural poor.
Along with farming improvements came improved spinning and mining methods. Finally, by the 1750s, spinning and weaving machinery powered by steam began what is known as the Industrial Revolution. Inventions developed rapidly to produce goods more quickly and in greater volume.
Some of those rural poor who had been driven from their land began to cluster in newly industrialized areas to find employment. Their living conditions eventually became so intolerable that Parliament later enacted reform bills to feed and educate these groups. Support for the Anglican Church further eroded as some members realized how the church's complicated structure prevented it from reaching the masses of poor people. The Anglican clergyman John Wesley and his followers broke away from the Anglicans to form the Methodist Church.
Growing industries at home and trade to other parts of the expanding British Empire produced higher-level jobs and a growing middle class. Old, established families were losing money and power, while families with no established reputations began to acquire the wealth necessary to have political power. As money became more important, a classical university education became less important. Education was thinly spread at lower levels to produce a wider, but less educated, reading public, and periodicals, which could be read quickly and easily, were becoming more popular.
Meanwhile, England became more committed to commercial and political expansion. With the Peace of Paris at the end of the Seven Years' War in 1763, England gained vast territories in Canada and India. It had spent much money to protect the Americans from the French and to promote western expansion in America. The British were truly unable to understand why the Americans seemed unwilling to share these costs with British taxpayers. Jonathan Swift, Alexander Pope, Samuel Johnson, and Oliver Goldsmith warned of the consequences of the greed, corruption, and violence that plagued this period in British history.
The literature of these centuries was politically conscious; major writers were deeply committed to making their readers understand the significance of current events. The two Puritans John Milton and John Bunyan had been active in the Commonwealth. Milton's epic poem Paradise Lost and Bunyan's allegory Pilgrim's Progress do not deal directly with political themes, but they emphasize the battle between good and evil in all human beings. They contrast with the literature written to entertain Charles II's court, literature that shows a renewed influence from France: witty and sparkling satire, carefully structured drama, and themes sometimes lacking moral values.
Writers who lived in political, economic, and social disorder were concerned with imposing order and organization on their writing. The period from 1660 to 1700 is sometimes called the Neoclassical period because writers, especially poets, used their knowledge of Greek and Latin literature to perfect literary forms. One such perfected form is the heroic couplet, which you will examine later. Most important, writers were concerned with placing man in an orderly world in which he knew his position and observed the rest of the world with educated but restrained criticism.
Writers, especially from 1688 to 1745 (sometimes called the period of common sense), felt a public responsibility to evaluate the quality of life, just as their classical models had. Along with this critical responsibility, they stressed the importance of a reasonable, logical approach. Realism was important in describing man's actions and his social position. Finally, a controlled approach to religion was important. They distrusted emotional shows of faith and revelations that did not stem from intellectual examination. They believed that the religious experience was rational and must be governed by the intellect. These four characteristics all appear in the works of Alexander Pope and Jonathan Swift, both of whom used satire as a weapon against, and as instruction for, the newly educated masses.
Writers from 1745 to the end of the century became more sentimental and even more moral. Their literature is sometimes called the literature of sensibility. These writers wrote lyrical emotional works with emphasis on the common man or on times in the distant past. They were interested in supernatural elements (usually to instruct and prepare the soul for death), and in the beauties of a higher power in nature. They often probed the effects of melancholy.
Finally, writers found new ways to reach the public. They wrote moral or satiric essays in periodicals, such as The Tatler (1709), The Spectator (1711), and The Gentleman's Magazine (1731). They also developed a new literary form, the novel, describing middle class people dealing with middle class problems. At that time, a novel was mainly a fictitious narrative, a story having no factual basis, with a closely knit plot of epic scope and a unity of theme. John Bunyan and Daniel Defoe, the author of Robinson Crusoe, pioneered realistic detail and lengthened narratives. Samuel Richardson (1689-1761), Henry Fielding (1707-1754), Tobias Smollett (1721-1771), and Laurence Sterne (1713-1768) are the important novelists of the period. Their novels are still delightful to read and have influenced countless novelists since then, including Charles Dickens.
17th-Century Puritan Literature: Milton
Many critics of literature rank John Milton's greatness second only to Shakespeare's. Milton probed religious and political issues with a wise and serious honesty that had developed from studying literature of the past, contemporary issues, and personal trials.
Milton was born December 9, 1608, the son of a prosperous Puritan lawyer and amateur composer. From 1615 to 1625, he was educated privately and at St. Paul's School. One biographer tells us that Milton was writing poetry at the age of ten. Like many men of greatness, Milton showed a lofty conception of his destiny from his youth, when he would brag to friends that he was to be the next great English poet. In 1629, he received his Bachelor of Arts degree from Cambridge. In that same year, just after his twenty-first birthday, he wrote his first major poem: "On the Morning of Christ's Nativity," which he claimed to have actually written on a Christmas morning. Two years later, in 1631, he wrote two more masterpieces, the companion poems L'Allegro and Il Penseroso, which present contrasting ways of living: the active, social life ("mirth") and the contemplative life ("melancholy"). In 1632, Milton received his Master of Arts degree. Then, he spent the next six years of his life in intense self-study, as he felt the rigors of Cambridge to be inadequate. In 1634, Milton's masque Comus was presented. Virtue is the theme of this drama. Milton emphasized that virtue is sincere only when it has been tested and found pure. The idea that virtue has no meaning unless it is chosen over evil or vice is a common theme of Milton's.
The year 1637 began a difficult period in which Milton's own faith was tested. Both his mother and his close friend since college, Edward King, died. Milton wrote the pastoral elegy Lycidas to probe the pain, injustices, and uncertainties of life. In that poem he handled a number of poetic styles, condemned insincere clerics, questioned his own future as well as that of the soul of his friend, and worked with many classical allusions.
Milton's sonnet "On His Blindness."
Most scholars agree that this poem, one of his 23 sonnets, was written in 1652 when Milton became entirely blind. At that time, he was only 43 years old and was still active in the Commonwealth. Milton's eyes had been weak since childhood. About 1644, his sight began to fail noticeably. In early 1650, he nearly lost the sight of one eye. Although warned about the effect of continued work on his eyesight, he did not stop working. Milton became completely blind in the winter of 1651-1652.
This sonnet in substance and tone expresses Milton's first reaction to total blindness.
Milton: Paradise Lost: Part I
Paradise Lost is not only Milton's greatest work, but probably one of the greatest achievements in English literature. Written while Milton was blind, the poem presents Milton's conception of the Biblical story of man's happiness in the Garden of Eden and of his first disobedience. Within the story, the angel Raphael tells the hero of the story, Adam, of the history of Satan and his rebellious angels. The poem ends with Adam and Eve's expulsion from Paradise and a vision of the future hope of a redeemer who will save man. Milton borrowed the story from the book of Genesis in the Bible and shaped it into an epic poem.
You have already learned that an epic is a long narrative poem written in an elevated style, having a central heroic figure, with episodes important to the history of a nation or race. The central figure is Adam, who represents mankind. The setting is vast indeed, covering heaven, earth, and hell. Satan is a superhuman version of the heroes of classical epics. He is in total contrast with Milton's heroes, Jesus and Adam.
The poem is written in blank verse, verse written in iambic pentameter without rhyme. It contains epic characteristics or conventions:
It opens by stating an epic theme.
It has several invocations to Muses.
It opens in the middle of the action.
It has catalogs of warriors.
It contains many epic similes, or stated comparisons.
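The poem's opening lines (quoted here in modernized spelling) illustrate both the blank verse and the first two of these conventions; they state the epic theme of man's disobedience and then invoke a Muse:

Of man's first disobedience, and the fruit
Of that forbidden tree whose mortal taste
Brought death into the world, and all our woe . . .
Sing, heavenly Muse . . .

Each line contains five iambic feet, and the lines do not rhyme.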
The epic is quite long, forming three major parts, each consisting of four books. In Books I through IV, the characters, settings, and major conflicts are introduced. Books I and II take place in Hell, where Satan and his forces determine the next course of action now that they have been driven there by God. Satan decides to corrupt mankind. In Book III, which takes place in Heaven, we learn that Adam, who has a free will, will choose to disobey God and that Jesus volunteers to sacrifice himself. Book IV introduces the human characters, Adam and Eve, who live an idyllic life in the Garden of Eden. Their only prohibition is not to eat of the tree of knowledge.
The middle books (Books V-VIII) deal with the angel Raphael warning Adam about Satan. These books are sometimes referred to as "the education of Adam" because Raphael gives Adam a history lesson of Satan's fall and man's creation.
The final section (Books IX-XII) covers Adam and Eve's disobedience and restoration. The angel Michael tells Adam of the miseries mankind will have in the future. As Michael shows the future to Adam, he emphasizes that "one just man" prevents God from destroying mankind. Being a Puritan, Milton would have studied the idea of redemption in the Bible quite frequently. These studies show up in this portion of the epic.
17th-Century Puritan Literature: Bunyan
John Bunyan was born in 1628, the son of a tinker (a mender of household utensils). He later became a tinker himself. At sixteen, he was drafted into the Parliamentary army and served from November 1644 to June 1647. Afterwards, like most writers of this time, he began to study the Bible, which influenced his writing style.
In 1653, he became a preacher, and in 1656 he published a tract against jeering Quakers entitled Some Gospel Truths Opened. At the Restoration of Charles II, Bunyan was found guilty of disobeying the Conventicle Act, which forbade nonconformists to preach or publish, and was imprisoned for twelve years, until 1672. Thus, most of his writing was done in prison. He was given opportunities to be released from jail, but lost them because he refused to promise to discontinue preaching.
In 1666, he wrote Grace Abounding to the Chief of Sinners, writing about his own intense religious struggles. The Holy City (written in 1665) and Pilgrim's Progress were also Biblically influenced. In 1672, he wrote another account of his own faith, A Confession of My Faith and a Reason of My Practice.
Three years after he was released from prison, he was imprisoned for six more months, during which time he wrote the first part of Pilgrim's Progress, which was published later, in 1678. He published The Life and Death of Mr. Badman in 1680, an allegory about a fashionable man who becomes a hypocritical Christian. Several critics find that this work anticipates Defoe in its use of realistic details. The Holy War, published in 1682, draws on Bunyan's military experiences. It is a social allegory about Mansoul, a town that needs to be "saved." Much detail is given to the politics of leaders. Bunyan described the perceived evils of his own time, especially the treatment of Nonconformists, which had caused his prison sentence. In 1684, the second part of Pilgrim's Progress appeared, describing the fate of the main character's wife and family. Bunyan died in 1688.
Pilgrim's Progress. Bunyan's greatest allegory has become a popular classic in English literature; it was widely read in Puritan New England as well as in England. It is in the form of an allegorical journey from the City of Destruction to the Celestial City, and it explains the doctrine of Christian salvation in detail, which was a popular ideal of the time. It has interested readers as well as taught them because the symbolic landscapes and characters are also realistically described. The main character, Christian, is both a universal pilgrim and a poor, nervous man from Bedfordshire. Because the reader sympathizes with Christian, he is eager to go with him, to solve each segment of the allegory. The allegory is also skillfully told; suspense continues from episode to episode. Bunyan's style is simple and direct, lacking the difficult classical references found in other works. In addition, Bunyan included social satire in his section "Vanity Fair." The poor pilgrims chained by the merchants closely parallel the Puritans imprisoned after the Restoration.
You should understand what an allegory is and why it is used. An allegory is a form of comparison lengthened into a story. Objects, persons, and actions in this story represent general concepts that lie outside the story and are parts of the doctrine or theme being presented. The characters in an allegory are usually personifications of abstract qualities such as Hope and Faith. The action and setting of an allegory are representative of relationships among these abstractions. Thus, the reader is interested in the literal story and characters presented, but he is also aware of the ideas behind the story. In Pilgrim's Progress, the character, Christian, makes an actual journey. He flees from the City of Destruction; struggles through such places as the Slough of Despond, the Valley of the Shadow of Death, and Vanity Fair; and finally arrives at the Celestial City. Christian meets actual characters named Evangelist, Faithful, Hopeful, and Giant Despair. This story, however, represents the efforts of one man to save his soul by triumphing over inner obstacles to his faith. Allegory was a popular technique in Bunyan's time because many writers, schooled in the Bible, used their writing to explore their faith.
Bunyan's symbols, things or actions that represent something else, are logical and are realistically described. Bunyan used character and place names that quickly explain what they represent. Thus, Christian meets a devil who tries to discourage him in a very logical place, the Valley of Humiliation. Giant Despair, who tries to convince Christian to commit suicide, lives at Doubting Castle.
Bunyan's symbolic use of objects is usually obvious; for example, Christian, who has become mentally burdened by worries about his destruction, is symbolically weighted down by a burden on his back. Yet these symbolic characters, places, and objects are not entirely one-dimensional; they are described with realistic details that are interesting in themselves. Christian is so weak and nervous that he allows his family to belittle him and honestly tells Evangelist that he does not see the wicket-gate in the distance and is not even sure he sees the light. Giant Despair loses his temper easily and is nagged by a meddling wife named Diffidence.
Finally, Bunyan knew how to tell a good story. He dropped clues and repeated events with interesting variety to build suspense. As Christian asks himself once again, "What shall I do?" or waits for Giant Despair to come back once more, the reader is so interested that he truly wants to see what happens.
Literature of Common Sense: Pope
Born in 1688, Alexander Pope had to overcome two serious personal problems. He was born a Roman Catholic during a time when Catholics were discriminated against; and at twelve, he was afflicted with tuberculosis of the spine, which left him physically unattractive and weak. He nevertheless became perhaps the greatest poet between Milton and Wordsworth and a close friend of Jonathan Swift, an author you will study in more detail. His famous poems, including Essay on Criticism, The Rape of the Lock, Windsor Forest, Eloisa to Abelard, The Dunciad, and Imitations of Horace, made him one of the first professional poets, that is, a poet who makes his living by writing poetry. His classical learning brought additional income; he published translations of both the Iliad and the Odyssey. He died in 1744.
You should be familiar with Pope's important contributions to literature and to the society of his time. Both he and Jonathan Swift were members of a small writers' organization called the Scriblerus Club, which was formed specifically to satirize what they felt to be the foolish and vain studies and hobbies of their age. The members of this club wrote poetry, essays, long fictional works, and plays to convey their message. Their works entertain because they make their subjects ridiculous, but they also teach indirectly because they intimate that other practices are better than the ones being discouraged through ridicule.
This ridiculing effect is achieved by several methods. An author may magnify and exaggerate the corruption of the object being satirized. He may also position the satirized object next to something very undignified so that there is "guilt by association"; a person compared to a dog is made as foolish as a dog. Another widely used method is irony, the appearance of saying or being one thing while making it clear to the audience that something different is meant. Thus, although the simple-minded Gulliver in Swift's Gulliver's Travels admires what he sees, the reader understands that he is not supposed to admire the same thing. Oftentimes a praise-blame inversion is used: The author seems to praise something, but the reader realizes that he is actually condemning it.
Alexander Pope used all these methods skillfully in his two most famous satires: The Rape of the Lock and his longer mock epic The Dunciad. In The Rape of the Lock, Pope satirized polite society by representing a disagreement at a card game among young ladies and their beaus as an epic battle. The disagreement (over one young man's cutting a lock of his lady's hair) was made to look insignificant and silly when given such inflated comparisons. Pope made excellent use of irony. The Dunciad records all the epic games and activities of writers and publishers as they make their epic journey through London. Its satire is bitter and angry; it concludes with Dullness sitting on her throne in darkness. Pope, like all the writers of the Scriblerus Club, was revolted by a society of those he believed to be half-wits and dullards.
Pope used an already perfected poetic form, the heroic couplet, for most of his satires; but he made that form so flexible that it is generally accepted that no other poet, with the possible exception of John Dryden, has matched Pope's skill in application of the form. The heroic couplet consists of two rhyming lines of verse with five iambic feet per line; an iambic foot, or unit of rhythm, consists of an unstressed syllable (marked with a breve) followed by a stressed syllable (marked with an accent mark).
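For example, one of the most often quoted couplets from Pope's Essay on Criticism shows the form at work: two rhyming lines, each containing five iambic feet:

True wit is nature to advantage dressed,
What oft was thought, but ne'er so well expressed.

Read aloud, each line alternates unstressed and stressed syllables ("True WIT | is NA | ture TO | adVAN | tage DRESSED"), and the rhyme closes the pair into a complete, balanced unit of thought.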
Literature of Common Sense: Swift
Jonathan Swift was born in 1667 in Dublin, Ireland. Many of his religious and political activities and the resulting writing originated from his early Irish background. He began his satiric writing early, while a secretary to Sir William Temple in England. He wrote A Tale of a Tub (published in 1704) to satirize corruption in religion and learning. The tale describes the adventures of three brothers who care for their coats in different ways. Peter, representing the Roman Catholics, adorns his coat until it cannot be recognized; Jack, representing dissenters, tears his coat by taking off Peter's decorations too quickly and carelessly; and Martin, representing the Anglicans, saves his coat by making changes slowly and preserves it according to the instructions given in his father's will. The tale also contains many digressions satirizing critics, both ancient and modern learning, and even madness. In 1697, he wrote The Battle of the Books (published in 1704), a satire in which the Ancients (books written by Homer, Pindar, Euclid, Aristotle, and Plato) win a battle begun by the Moderns (Swift's contemporaries who exalted themselves above the Ancients). Swift emphasized the importance of classical learning when he compared the Moderns to a spider that spins webs from its own filth and the Ancients to a bee that gets its honey by tasting from several flowers already blossoming.
Then Swift became more involved in Irish issues. From 1707 to 1709, he sought to do away with a tax on Irish clerical incomes. Later, in 1724, he wrote the first four Drapier's Letters, which protest the use of low-value, overabundant copper coins produced outside of Ireland without Irish permission. The letters, written to encourage the boycott of the new coins, attracted the attention of both the English and the Irish. The English considered the writer of the letters dangerous and offered a reward for the arrest of the Drapier (Swift did not use his own name); Swift's printer was actually arrested. The Irish, on the other hand, considered Swift a great patriot. His writing was so persuasive that the order for the coins was canceled. In 1729, Swift again worked for Irish causes by publishing A Modest Proposal, a satire emphasizing the brutal indifference the English demonstrated toward the starving Irish. He accused the British Parliament of cruelty and satirized social mathematicians or economists of the period who saw people as commodities rather than suffering individuals.
While he was involved with Irish problems, he was also active in English political circles. From 1710 to 1714, he was in London in the midst of an exclusive group of Tories. He was, perhaps, the greatest propagandist for the Tory government. He believed that enemies of the Tories were also enemies of culture and morality, and thus turned his satire against them. He hated the unreasonable and cruel qualities of men when they join into groups to gain their own ends, but he believed that individual men can be responsible for their own behavior. He, therefore, sought to persuade men to be responsible and reasonable.
Swift suffered disappointments because of his intense involvement in political and religious issues. Even though he had wanted a bishop's office in England, he was made Dean of St. Patrick's Cathedral in Dublin in 1713; Queen Anne disapproved of some of the lower forms of satire in A Tale of a Tub and would never consent to granting him a higher office. The following year, in 1714, Queen Anne died, and Swift's former political activities with the Tories alienated him from the new Whig government.
In 1726, he stayed in England with Alexander Pope and published Gulliver's Travels. He published several Miscellanies with Pope in 1727 and 1728. By 1738, he was suffering intense pain from Meniere's syndrome, causing physical imbalance, nausea, deafness, and eventually, madness. He died in 1745 and was buried in St. Patrick's Cathedral in Dublin. His sense of critical responsibility, his wit, his use of realistic detail, and his easily read style have entertained countless readers.
His satire, Gulliver's Travels
Swift's best-known work is often introduced to children because of its realistic and exciting adventures. It is, however, a satire operating at several levels. It teaches older readers as well as entertains them; it is a political satire, a burlesque of voyage books, and a satire on the abuses of reason and vanity. Its story is told by a fictitious sailor named Gulliver.
In the first book, Captain Gulliver is shipwrecked on the coast of Lilliput, where people are only six inches tall. In this book, Gulliver is shocked by these short, but politically and morally corrupt people who often represent immoral English politicians. Gulliver feels superior morally as well as physically.
In the second book, Gulliver is in Brobdingnag, where he is dwarfed by giants 60 feet tall. Here, he learns that he, too, is corrupt as he compares his former way of life to that of these practical, benevolent giants. Swift used the satiric method of associating moral corruption with physical corruption by having Gulliver examine the giants' enlarged human features; Gulliver sees coarseness magnified to remind him that the human condition is also coarse and repulsive.
In the third book, Gulliver himself is less important than the travels he describes. He visits several countries where the inhabitants (scholars, scientists, philosophers, inventors, professors) appear ridiculous in their excessive reliance on reason as opposed to common sense.
In the fourth book, the naive Gulliver becomes increasingly disenchanted with his own kind. He begins to worship rational, unemotional horse-beings, the Houyhnhnms. He compares himself to the human-like but beastly Yahoos, who are held in subjection by the Houyhnhnms, and learns to hate men for not being horses. Here, Swift's satire is double-edged; the cold Houyhnhnms are imperfect, just as the Yahoos are. Gulliver is disabled by his self-hate; he becomes too critical and isolated to love even his own family.
The short passages you will read should be fitted into this larger framework. Gulliver learns to be more critical, but eventually he becomes so aware of imperfections that he is unable to feel natural human warmth toward his own kind. Swift's use of irony allows the reader to see that Gulliver's final hate has driven him mad. Swift's style is clear and easy to read, and his use of concrete details is convincing. He frequently used irony and comparisons with undignified subjects to satirize both the practices and the political and cultural figures of the period.
Gulliver's Travels—From Part One
We therefore trusted ourselves to the mercy of the waves, and in about half an hour the boat was overset by a sudden flurry from the north. What became of my companions in the boat, as well as of those who escaped on the rock, or were left in the vessel, I cannot tell; but conclude they were all lost. For my own part, I swam as fortune directed me, and was pushed forward by the wind and tide. I often let my legs drop, and could feel no bottom: but when I was almost gone and able to struggle no longer, I found myself within my depth; and by this time the storm was much abated. The declivity1 was so small, that I walked near a mile before I got to the shore, which I conjectured was about eight o'clock in the evening. I then advanced forward near half a mile, but could not discover any sign of houses or inhabitants; at least I was in so weak a condition, that I did not observe them. I was extremely tired, and with that, and the heat of the weather, and about half a pint of brandy that I drank as I left the ship, I found myself much inclined to sleep. I lay down on the grass, which was very short and soft, where I slept sounder than ever I remember to have done in my life, and, as I reckoned, above nine hours; for when I awaked, it was just daylight. I attempted to rise, but was not able to stir: for as I happened to lie on my back, I found my arms and legs were strongly fastened on each side to the ground; and my hair, which was long and thick, tied down in the same manner. I likewise felt several slender ligatures2 across my body, from my armpits to my thighs. I could only look upwards; the sun began to grow hot, and the light offended my eyes. I heard a confused noise about me, but, in the posture I lay, could see nothing except the sky. In a little time I felt something alive moving on my left leg, which advancing gently forward over my breast, came almost up to my chin; when, bending my eyes downwards as much as I could, I perceived it to be a human creature not six inches high, with a bow and arrow in his hands, and a quiver at his back. In the mean time, I felt at least forty more of the same kind (as I conjectured) following the first. I was in the utmost astonishment, and roared so loud, that they all ran back in a fright; and some of them, as I was afterwards told, were hurt with the falls they got by leaping from my sides upon the ground . . .
My gentleness and good behaviour had gained so far on the Emperor and his court, and indeed upon the army and people in general, that I began to conceive hopes of getting my liberty in a short time. I took all possible methods to cultivate this favourable disposition. The natives came by degrees to be less apprehensive of any danger from me. I would sometimes lie down, and let five or six of them dance on my hand. And at last the boys and girls would venture to come and play at hide and seek in my hair. I had now made a good progress in understanding and speaking their language. The Emperor had a mind one day to entertain me with several of the country shows, wherein they exceed all nations I have known, both for dexterity and magnificence. I was diverted with none so much as that of the ropedancers, performed upon a slender white thread, extended about two foot, and twelve inches from the ground. Upon which I shall desire liberty, with the reader's patience, to enlarge a little.
This diversion is only practiced by those persons who are candidates for great employments, and high favour, at court. They are trained in this art from their youth, and are not always of noble birth, or liberal education. When a great office is vacant either by death or disgrace (which often happens) five or six of those candidates petition the Emperor to entertain his Majesty and the court with a dance on the rope, and whoever jumps the highest without falling, succeeds in the office. Very often the chief ministers themselves are commanded to show their skill, and to convince the Emperor that they have not lost their faculty. Flimnap, the Treasurer, is allowed to cut a caper on the strait rope, at least an inch higher than any other lord in the whole empire. I have seen him do the summerset several times together upon a trencher3 fixed on the rope, which is no thicker than a common packthread in England.
Gulliver's Travels—From Part Two
It is the custom that every Wednesday (which, as I have before observed, was their Sabbath) the King and Queen, with the royal issue of both sexes, dine together in the apartment of his Majesty, to whom I was now become a favourite; and at these times my little chair and table were placed at his left hand before one of the salt-cellars. This prince took a pleasure in conversing with me, enquiring into the manners, religion, laws, government, and learning of Europe, wherein I gave him the best account I was able. His apprehension was so clear, and his judgment so exact, that he made very wise reflections and observations upon all I said. But I confess, that after I had been a little too copious4 in talking of my own beloved country, of our trade, and wars by sea and land, of our schisms5 in religion, and parties in the state, the prejudices of his education prevailed so far, that he could not forbear taking me up in his right hand, and stroking me gently with the other, after an hearty fit of laughing, asked me whether I were a Whig or a Tory. Then turning to his first minister, who waited behind him with a white staff, near as tall as the main-mast of the Royal Sovereign, he observed how contemptible6 a thing was human grandeur,7 which could be mimicked by such diminutive8 insects as I: And yet, said he, I dare engage, those creatures have their titles and distinctions of honour, they contrive little nests and burrows, that they call houses and cities; they make a figure in dress and equipage;9 they love, they fight, they dispute, they cheat, they betray . . . . You have clearly proved that ignorance, idleness and vice are the proper ingredients for qualifying a legislator. That laws are best explained, interpreted, and applied by those whose interest and abilities lie in perverting, confounding, and eluding them. I observe among you some lines of an institution, which in its original might have been tolerable, but these half erased, and the rest wholly blurred and blotted by corruptions. It doth not appear from all you have said, how any one perfection is required towards the procurement10 of any one station among you, much less that men are ennobled on account of their virtue, that priests are advanced for their piety or learning, soldiers for their conduct or valour, judges for their integrity, senators for the love of their country, or counsellors for their wisdom. As for yourself, continued the King, who have spent the greatest part of your life in travelling, I am well disposed to hope you may hitherto have escaped many vices of your country. But, by what I have gathered from your own relation, and the answers I have with much pains wringed and extorted from you, I cannot but conclude the bulk of your natives to be the most pernicious11 race of little odious vermin that nature ever suffered to crawl upon the surface of the earth.
Gulliver's Travels—From Part Three
I walked a while among the rocks; the sky was perfectly clear, and the sun so hot, that I was forced to turn my face from it: when all on a sudden it became obscured, as I thought, in a manner very different from what happens by the interposition of a cloud. I turned back, and perceived a vast opaque body between me and the sun, moving forwards towards the island: it seemed to be about two miles high, and hid the sun six or seven minutes, but I did not observe the air to be much colder, or the sky more darkened, than if I had stood under the shade of a mountain . . . . I took out my pocket-perspective,12 and could plainly discover numbers of people moving up and down the sides of it, which appeared to be sloping, but what those people were doing I was not able to distinguish . . . . But at the same time the reader can hardly conceive my astonishment, to behold an island in the air, inhabited by men, who were able (as it should seem) to raise, or sink, or put it into a progressive motion, as they pleased . . . . They made signs for me to come down from the rock, and go towards the shore, which I accordingly did; and the flying island being raised to a convenient height, the verge directly over me, a chain was let down from the lowest gallery, with a seat fastened to the bottom, to which I fixed my self, and was drawn up by pulleys. At my alighting I was surrounded by a crowd of people, but those who stood nearest seemed to be of better quality . . . . Their heads were all reclined either to the right, or the left; one of their eyes turned inward, and the other directly up to the zenith. Their outward garments were adorned with the figures of suns, moons, and stars, interwoven with those of fiddles, flutes, harps, trumpets, guitars, harpsichords, and many more instruments of music, unknown to us in Europe.13 I observed here and there many in the habits of servants, with a blown bladder14 fastened like a flail15 to the end of a short stick, which they carried in their hands. In each bladder was a small quantity of dried pease16 or little pebbles (as I was afterwards informed). With these bladders they now and then flapped the mouths and ears of those who stood near them, of which practice I could not then conceive the meaning; it seems, the minds of these people are so taken up with intense speculations, that they neither can speak, nor attend to the discourses of others, without being roused by some external taction17 upon the organs of speech and hearing; for which reason those persons who are able to afford it always keep a flapper (the original is climenole) in their family, as one of their domestics, nor ever walk abroad or make visits without him. And the business of this officer is, when two or more persons are in company, gently to strike with his bladder the mouth of him who is to speak, and the right ear of him or them to whom the speaker addresseth himself. This flapper is likewise employed diligently to attend his master in his walks, and upon occasion to give him a soft flap on his eyes, because he is always so wrapped up in cogitation,18 that he is in manifest danger of falling down every precipice, and bouncing his head against every post, and in the streets, of jostling others or being jostled himself into the kennel.19 It was necessary to give the reader this information, without which he would be at the same loss with me, to understand the proceedings of these people, as they conducted me up the stairs, to the top of the island, and from thence to the royal palace.
While we were ascending, they forgot several times what they were about, and left me to my self, till their memories were again roused by their flappers; for they appeared altogether unmoved by the sight of my foreign habit and countenance, and by the shouts of the vulgar,20 whose thoughts and minds were more disengaged.
Gulliver's Travels—From Part Four
When I thought of my family, my friends, my countrymen, or human race in general, I considered them as they really were, Yahoos21 in shape and disposition, only a little more civilized, and qualified with the gift of speech, but making no other use of reason than to improve and multiply those vices whereof their brethren in this country had only the share that nature allotted them. When I happened to behold the reflection of my own form in a lake or fountain, I turned away my face in horror and detestation of myself, and could better endure the sight of a common yahoo, than of my own person. By conversing with the Houyhnhnms,22 and looking upon them with delight, I fell to imitate their gait and gesture, which is now grown into a habit, and my friends often tell me in a blunt way that I 'trot like a horse'; which, however, I take for a great compliment: neither shall I disown, that in speaking I am apt to fall into the voice and manner of the Houyhnhnms, and hear myself ridiculed on that account without the least mortification....
His name was Pedro de Mendez;23 he was a very courteous and generous person; he entreated me to give some account of my self, and desired to know what I would eat or drink; said, I should be used as well as himself, and spoke so many obliging things, that I wondered to find such civilities from a yahoo. However, I remained silent and sullen; I was ready to faint at the very smell of him and his men. At last I desired something to eat out of my own canoe; but he ordered me a chicken and some excellent wine, and then directed that I should be put to bed in a very clean cabin. I would not undress myself, but lay on the bedclothes, and in half an hour stole out, when I thought the crew was at dinner, and getting to the side of the ship was going to leap into the sea, and swim for my life, rather than continue among yahoos. But one of the seamen prevented me, and having informed the captain, I was chained to my cabin . . . .
As soon as I entered the house, my wife took me in her arms, and kissed me, at which, having not been used to the touch of that odious animal for so many years, I fell in a swoon for almost an hour. At the time I am writing it is five years since my last return to England: during the first year I could not endure my wife or children in my presence, the very smell of them was intolerable, much less could I suffer them to eat in the same room. To this hour they dare not presume to touch my bread, or drink out of the same cup, neither was I ever able to let one of them take me by the hand. The first money I laid out was to buy two young stone-horses,24 which I keep in a good stable, and next to them the groom25 is my greatest favourite; for I feel my spirits revived by the smell he contracts in the stable. My horses understand me tolerably well; I converse with them at least four hours every day. They are strangers to bridle or saddle; they live in great amity26 with me, and friendship to each other.
Literature of Sensibility: Johnson
Johnson was born in 1709 and grew up in poverty. He could afford only fourteen months at Oxford University. In later years, he was also bothered by financial problems (he had a large household of relatives to support) and was sometimes troubled with depression.
Johnson's wife, a woman twenty years older than he, died in 1752; after her death, he spent his later years in the company of other intellectuals, with whom he conversed. At the age of 64, he consented to take a walking tour of the Hebrides Islands west of Scotland with James Boswell. Johnson died in 1784. Although his life was not an easy one, Johnson's periodical essays (both moral and critical), his biographies of other writers, his Dictionary, and his conversational skill all illustrate a man whose common sense and feeling of public duty overcame personal problems.
Johnson began writing for The Gentleman's Magazine in 1737, and contributed to that publication until 1746. From 1750 to 1752, he wrote The Rambler essays, which had both moral themes and literary subjects. From 1758 to 1760, he wrote The Idler essays, nearly 100 entertaining essays published in the newspaper The Universal Chronicle. During that time, literary periodicals lived short lives, but Johnson edited or wrote book reviews and articles for many of them.
Johnson's own point of view usually affected the treatment of his subjects and themes in those essays. He was a practical critic who understood and respected the taste of the common, less scholarly reader. He disliked stiff poetic diction and thought that much of Milton's work was too lofty to enjoy. He insisted that literature present truth, that it be believable and realistic, and that it be refreshing enough to be interesting. He stressed that readers should feel pleasure as they read. His own taste had been carefully cultivated; he had read nearly everything of literary value published in English. He was a Tory who preferred to be governed by a king rather than by a Parliament, whose members would undoubtedly struggle for personal, as opposed to national, advantage.
Finally, in spite of Johnson's condemnation of his own laziness, he was an energetic man. In 1755, he published a two-volume dictionary, A Dictionary of the English Language, which fixed the spelling of eighteenth-century English, used quotations as illustrations of usage, and defined words precisely. Immediately, the Dictionary was popular because it helped the rising middle class to establish correctness in word usage and spelling. He also published, in 1779 and 1781, his biographies of fifty-two writers, The Lives of the English Poets, in which he gave details of the writers' lives and evaluated their work. He was a conversationalist who talked with some of the most noteworthy English thinkers of that period, and he was a member of the Literary Club, a group founded in 1764 that met weekly and discussed issues. He was a poet whose poem The Vanity of Human Wishes, published in 1749, has the same theme as his novel-like work Rasselas, published in 1759. Rasselas, hastily written to make money to pay for his mother's funeral expenses, emphasizes the impossibility of complete happiness. If Johnson himself was not happy, he was at least contributing toward the pleasure and education of others.
Literature of Sensibility: Goldsmith
Goldsmith's early life is a story of poverty and missed opportunities. He was probably born in 1730, the son of a rector of a small church in Ireland. He would always remain embarrassed by his Irish dialect and his poverty and would be self-conscious and self-critical. He failed to continue his studies of both law and medicine, and he was turned down for ordination in the Church of England. From 1756 on, he spent several years trying different ways to earn a living: acting, assisting an apothecary, doctoring in the slums, proofreading, teaching, reviewing, and translating. Although these attempts did not enable him to support himself comfortably, they did teach him much about human nature and led him to write clearly and entertainingly, without a stiff, scholarly style.
In the 1750s, a need for periodical writers arose, and Goldsmith's flexibility helped to fill that need. From 1757 to 1762, Goldsmith contributed to at least ten periodicals. During 1759, when he published An Enquiry into the Present State of Polite Learning in Europe, he was also contributing to the periodical The Bee. From 1760 to 1761, the Citizen of the World essays, in the form of letters home from a Chinese traveler in London, appeared in The Public Ledger. In this series, he satirized contemporary society by using the detached Chinese observer. He emphasized his beliefs that customs are relative, that one should be tolerant of the customs of other nations, and that simplicity is always best. Characters he introduced in his series are not flat mouthpieces, however. He used realistic details to present characters he liked. Throughout these essays, his style remains entertaining and clear. He has been called one of the most readable writers of his century because his writing is so vivid and fluid. His personality in the essays is thoroughly likable; he refused to be the pedant that Johnson sometimes was. He was not a complete sentimentalist, losing himself in emotions, but he was too sympathetic to be a consistent satirist.
Goldsmith was also a popular poet, dramatist, and novelist. In 1764, he published his poem The Traveler, which states several of his themes, especially those concerning excess and happiness. He wrote that everything—wealth, commerce, honor, liberty, contentment—brings a happiness that, if carried to excess, can produce unhappiness. As a poet, he believed that poetry should instruct as well as please and should be addressed to the public rather than to those few readers seeking scholarly pleasure. He thought that poetry should convey strong emotions and that its form and language should be appropriate to the message. Like his predecessors, he used couplets and broad moral themes and addressed his poems to educated men everywhere. He used conventions such as poetic diction and personification, but his poetry had a new, emotional quality in a more straightforward language. In 1770, he published his most famous poem, The Deserted Village, protesting the injustices and materialism of the age.
In 1766, to pay his debt to his landlady, he published the novel The Vicar of Wakefield. The plot is very involved. It begins with the members of a happy parson's family enjoying the beauties of the rural countryside. Through their greed to make wealthy matches for their daughters, however, they end up in prison and in disgrace. The Primrose family must learn the danger of false appearances, the evils of ambition, and the importance of moral strength before they can be released from their earthly prison. Goldsmith probed the horrors of prison life and satirized ambitious females even as he heaped sentiment into his happy conclusion.
His play, She Stoops to Conquer, was written in 1771 and produced in 1773. The less successful play, The Good Natured Man, had been produced in 1768. In 1774, he died of a fever from a bladder infection and from worry over a two-thousand-pound debt, according to his friend Joshua Reynolds, the artist. Samuel Johnson wrote his epitaph.
His poem The Deserted Village.
The Deserted Village was one of the most popular poems in the eighteenth century. Its idealization of simple rural life and its sense of a lost past appealed to the taste for the sensitively described, emotional subjects that were popular at the time. The poem was sincerely written with a definite purpose in mind, as revealed by Goldsmith's own words in his dedication to the poem:
All my views and enquiries have led me to believe those miseries real, which I attempt to display ... In regretting the depopulation of the country, I inveigh against the increase of our luxuries; and here also I expect the shout of modern politicians against me.
The "depopulation of the country" of which Goldsmith spoke was the forced moving of the rural poor from the Commons, land that had been available to everyone for grazing without ownership, to recently industrialized cities. The Enclosure Acts were responsible for large groups of landless, "tool-less" poor going either to industrialized cities or to America for a new start. The poem also notes the beginning of industrialized slums.
Although the poem contains a new, sentimental element, it also contains many devices that Milton and Pope used. It laments the passing of an age much as Milton lamented his dead friend in Lycidas; thus, this poem has been called a pastoral elegy. It is written in heroic couplets, a disciplined form Pope used, and makes use of poetic diction and personification. Goldsmith has added sentiment and idealization of rural scenes to the traditional elements. His heroic couplets move slowly and thoughtfully, not at Pope's crisp pace. Finally, his structure seems less disciplined as he repeats his laments to build up emotional impact.
As you read the poem, you should keep in mind Goldsmith's central idea: he stresses how rural self-reliance and innocence have been destroyed by greedy landowners and industrialization. At the end of the poem, the rural virtues—Toil, Care, Tenderness, Piety, Loyalty, and Love—march to the sea with the homeless poor. Even Poetry herself is driven to leave England by the callousness of the new age of greed. Note also Goldsmith's poetic diction in such phrases as "labouring swain" and "sheltered cot." Notice the Eden-like description of the village, especially the description of pleasing sounds (auditory imagery). Finally, notice how Goldsmith contrasts Nature with the artifice of the wealthy (Lines 251-264). This theme will become even more important in literature written later.
Historically, the Romantic revolution in England occurred between 1798 and 1837. The term Romanticism, however, refers to a comprehensive movement, or trend, in European thought and arts that began at the end of the eighteenth century. In essence, Romanticism was a reaction—a revolution—against the eighteenth century's neoclassical emphasis on reason, rules, and restraint. Like the philosophical movements of other historical periods, Romanticism is difficult, if not impossible, to define exactly. Specific characteristics, however, can be identified; among these characteristics are an emphasis on individualism, emotion, imagination, nature, simplicity, mystery, and melancholy.
The major causes of the Romantic revolution are best understood by examining the political, social, and economic revolutions that either preceded or coincided with it. You will study these causes and their effects in this section.
The Victorian Age of England—named after the queen who ruled from 1837 to 1901—was a period of continuing change. Although generalizations about the variety of events and ideas that span two-thirds of a century are difficult to make, one can note these specific characteristics: material progress; commercial prosperity; political, religious, and social reforms; scientific and mechanical developments; and conflicting views concerning scientific progress.
Perhaps the greatest political impetus for the Romantic revolution was the French Revolution that began in 1789 with the storming of the Bastille prison by mobs of French people—common people and peasants—who would no longer endure the economic and social hardships imposed on them by an aristocratic society.
By this time in history, England had already lost control of her American colonies. The results of the American Revolution had seemingly justified the colonies' rebellion for the cause of democracy: the independent nation formulated its new government on the principle that each individual has a right to participate in establishing the laws that govern him. Thus, in 1789, a significant number of perceptive Englishmen, including many of the Romantic poets, enthusiastically supported the oppressed French who rebelled for the purposes of "Liberté, Egalité, Fraternité" (liberty, equality, fraternity). The English initially viewed the French Revolution as a cause for a new and better life for the common man. English enthusiasm waned, however, when the revolution became violent and chaotic. Disgusted by the revolution's immense bloodshed, England and other European countries declared war on France. This European alliance against France continued until Napoleon Bonaparte, who at the end of the revolution started his intended conquest of Europe, was defeated at Waterloo in 1815. Although disillusioned by the French Revolution, the English Romantic poets still cherished its spirit: the desire for equality and a new beginning.
England itself at this time was undergoing radical changes, changes that could have led to as bloody a revolution as that of France. The Industrial Revolution, begun around 1750, was a major cause of the myriad changes.
The most obvious change was England's transition from a rural, agricultural society to an urban, industrial society. Indeed, the Industrial Revolution changed the working habits and lifestyles of many people and offered them new opportunities. Because manpower was replaced by machine power, some people had more leisure time to pursue various activities. Consequently, the new middle class took advantage of the opportunity for education. The displaced rural poor found jobs in the cities' factories. Because material goods were machine produced, they became more readily available to the populace. Common people now shared many of the opportunities previously enjoyed only by the aristocrats: leisure time, education, cultural pursuits, and material possessions.
The results of the Industrial Revolution, however, were not all positive; in fact, it created new social and economic ills. The displaced rural poor who found jobs in the factories or the mines soon realized the desperation of their situation. Men, women, and children worked long, difficult hours in unsanitary and unhealthy environments for meager wages. Their living conditions were no better than their working conditions; they lived in filthy, rat-infested slums because they could afford nothing else.
The new middle class, too, soon realized that their position was not as humanly significant as they knew it should be. The right to vote was reserved only for landowners; the working class, merchants, and tradesmen were not allowed to elect members of Parliament and thus had no representation in government.
The pressing issues of the time resulted in a genuine concern for the rights of the individual, the right to work and live in human decency and equality. Fortunately, England chose a course of effective reform rather than bloody revolt to ensure these rights. Under the pressure of reform groups and movements, England gradually recognized and responded to its obligation to help and protect all its citizens, the poor as well as the rich. Slowly prison conditions were improved, labor laws were passed (protecting especially the rights and lives of children), and hospitals were built. Under pressure, the English Parliament itself finally passed the first Reform Bill in 1832; this bill extended the franchise to all the middle class. Although the working class was not granted the right to vote until the end of the nineteenth century, this first Reform Bill did provide, to some degree, representation for many of the English people. English reform, though slow in actualization, was more realistically effective than violent revolution.
Romanticism—simply defined as a reaction against neoclassical emphasis on reason, rules, and restraint—was more a state of mind than a literary movement. The themes and ideas of the times caused poetry to take certain forms, but the ideas themselves are what determined Romanticism. The Romantic poets, in fact, did not seek to create a new kind of poetry; they simply sought a way to express new ideas, feelings, and beliefs characteristic of the nineteenth-century philosophy.
The effect of the American, French, and Industrial revolutions generated a genuine concern for the rights and dignity of the individual that became characteristic of the Romantic Movement in England. The eighteenth century, with its neoclassical emphasis on rules and restraints, had generally regarded the individual as a limited being who existed within certain boundaries beyond which he could not go. The Romantics, on the other hand, saw the individual as capable of seemingly limitless achievements. In a break with the eighteenth-century belief that a person's value was determined by his social status and financial wealth, the Romantics insisted that each individual is important in and of himself.
In the aftermath of political revolution and in the process of social reform, the sensitivities of perceptive people were sharply awakened. The neoclassical emphasis on reason had left little room for feeling or fanciful flights of the imagination. In the last half of the eighteenth century, responsible thinkers began to realize the aridity of life lived without much regard for feeling. The Romanticism of the early nineteenth century definitely encouraged necessary and meaningful expressions of emotion.
Likewise, Romanticism exalted the imagination of the individual. The Romantic concept of imagination is somewhat complex, yet an understanding of it is important for the study of certain poets. As it is generally used by the Romantics, the term imagination refers to the total working of the mind, to the mind's synthetic action. It allows one to perceive the similarity of things, to perceive that everything that exists is part of an entire whole. Imagination is a process of insight and understanding that eventually brings a person to an ultimate truth. Imagination is the opposite of reason, which analyzes objects and ideas and breaks them down into parts so that they can be studied. Imagination, in contrast, is the sudden intuition, or awareness, of all that one can know about something. To perceive through the imagination is to know at one time an entire body of knowledge on some subject, a knowledge that brings one to a new truth. Imagination is comprehension gained not by study but by meditation—by opening one's entire mind to existing realities. The Romantics believed that imagination allowed an individual to intuit certain knowledge and thus to gain insights otherwise not easily obtained.
The technological progress of the time contributed to the gradual destruction of the natural beauty of the English countryside. Romanticism, with its appreciation of beauty wherever it may be found, reacted with an undaunted expression of sincere love for nature. Though the views toward nature varied among the Romantics, most saw in it some significance beyond its physical existence—although they took great delight in that alone—and perceived it to belong to the realm of the spirit. On one level, some Romantics viewed nature as having human characteristics, as acting with intent. For example, when a phenomenon of nature occurred, the event was seen as a deliberate action that was done for some reason. This attitude was largely a rejection of the eighteenth-century scientific view of nature as nothing more than a well-functioning machine. To many Romantics, nature was a source of meaningful comfort; through communing with nature, an individual could temporarily escape harsh realities and could experience both a physical and spiritual harmony with a higher power and the world. Finally, to some Romantics, nature itself was revered as a god or higher power.
Coinciding with the Romantic return to nature was an emphasis on simplicity. To the urbane neoclassicist, nature had meant the precisely planned garden with its neat rows of hedges and symmetrically designed flowerbeds. The Romantics, in turn, were more interested in the natural beauty of untamed nature: the turbulent ocean, the lonely forest, the jagged mountain. In their art, architecture, clothing, and manners, the Romantics disregarded the eighteenth century's artificiality and strove for simplicity and naturalness.
Although the many social and technological changes occurring in England at the turn of the century brought new opportunities and improved ways of life for some, the changes also created problems, such as the overcrowding of cities, poor working conditions, industrial pollution, and the destruction of villages. These negative effects often made the Romantics wish they could return to a better time. The neoclassicists had looked to ancient Greece and Rome for inspiration and ideal models, but the Romanticists concentrated their attention on more mysterious, distant cultures. Medieval times became a popular subject of the Romantic Movement that sought out the strange, wondrous, and fanciful aspects of the world. The Romanticists delighted in the medieval atmosphere with its sense of mystery and the supernatural. Some Romantic poets, rejecting neoclassical rules and restrictions, established a new interest in the medieval ballads and their truly human themes of heroism, adventure, death, and love.
This interest in the medieval expressed itself in a change in architectural models. Eighteenth-century aristocratic homes had been designed to imitate the dignity and symmetry of the columns and domes of ancient Greece and Rome. Romantic architects, however, found inspiration in the Gothic spires, the high, vaulted ceilings, and the arched columns and windows that characterized the mysterious atmosphere of the late Middle Ages.
Although the Romantic revolt expressed an attitude that may be described as subjective, unconventional, and idealistic, it also conveyed a persistent tone of melancholy. Awed by the beauty of nature, the potential of the individual, the power of imagination, and the need for human improvements, the Romantics were distressed by the belief that life was relatively brief and that they would not have time, therefore, to realize their ideals and fulfill their dreams. This insight created the prevailing tone of melancholy.
A precise statement of the theory of Romantic poets is not possible, for they were a diverse group who did not view themselves as "Romantic poets." They were given that title several years later. The Preface William Wordsworth wrote for the second edition of Lyrical Ballads (published in 1800), however, provides an excellent summary of Romantic poetic theory. In the Preface, Wordsworth explained the theories that he and Coleridge had followed in writing their poems.
Wordsworth and Coleridge had attempted to write poetry that was free of what they considered the artificial restrictions of earlier poetry. For some time, opposition to the Neoclassicism of the eighteenth century had been growing; no longer were witty poetry, verse essays, and satires appreciated. In writing the Preface, however, Wordsworth did not produce a simple list of grievances; instead, he wrote a coherent explanation of the new theory of poetry. Although not all Romantic poets either accepted or used all the elements of Wordsworth's theory, the Preface does provide a general guide to Romantic poetry.
The Concept of Poet and Poetry
Neoclassic poetry was basically an imitation of human life that intended to instruct the reader and to please him. The characters created in such poetry were to represent, or to mirror, all men and were to serve as examples of acceptable or unacceptable behavior. Wordsworth believed, on the other hand, that good poetry is "the spontaneous overflow of powerful feelings." Poetry comes from within the poet's own feelings and thoughts. This concept is shared by all Romantic poets and is basic to the Romantic Movement. All the Romantic poets were concerned that their poetry should emerge from their own minds, expressing individual ideas and feelings. The subject matter of Romantic poetry, then, was often the personal experience of the writer.
In keeping with the theory of individualism, the subject of the poem is frequently the poet himself, presented in the first person. Because it best suited this subject, the lyric poem was frequently used, although until this time it had often been considered a minor form. The speaker in such a poem is actually the poet, not a persona or a narrator of events. He is, as Wordsworth says in the Preface, "a man speaking to men." Many critics object to the frequent use of the first person singular in Romantic poetry and accuse the poet of being too subjective, too self-centered. Subjectivity, however, was what the Romantics sought. They wanted others to experience their thoughts, not out of conceit, but because they believed their position as individuals made their ideas significant.
The Spontaneity of Poetic Creation
Another of the tenets presented in Wordsworth's Preface is that poetry be spontaneous. Neoclassic poets viewed poetry as an art to be studied and perfected; they followed specific rules in writing. The Romantics were different. Many of them expressed the belief that unless it flowed naturally and spontaneously from the poet, true poetry was impossible.
One should not assume, however, that the poetry of the Romantic period now appears as it first came to the poets. Indeed, the worksheets of the poets indicate that they reworked their poems many times until the results were finally satisfactory. The concept of spontaneity as stated was more a reaction against Neoclassic inflexibility than a literal statement of Romantic principles.
The theory of spontaneity is further, and more realistically, explained later in the Preface:
I have said that poetry is the spontaneous overflow of powerful feelings; it takes its origin from emotion recollected in tranquility: the emotion is contemplated till by a species of reaction the tranquility gradually disappears, and an emotion, kindred to that which was before the subject of contemplation, is gradually produced, and does itself actually exist in the mind. In this mood successful composition generally begins . . . but the emotion . . . is qualified by various pleasures . . .
In other words, the poet has a thought or experience that causes in him some great emotion, but he does not immediately and spontaneously create a poem. Instead, he recollects in tranquility the experience and the emotion and creates a similar emotion in his own mind. Yet he does not simply record objectively the experience and emotion; they are "qualified by various pleasures." The poet presents not what is (the objective view) but what exists as he sees it, as it is colored by his own thoughts and feelings (the subjective view). Hence, a poem is not purely spontaneous in that it does not burst from the poet the moment he experiences emotion. Still, a poem is somewhat spontaneous in that, though it is premeditated, it emerges from recollected and recreated emotion and from the artist's perceptions. It is neither planned nor forced to fit external rules, but it is pondered and reshaped by the author's values before it emerges as a poem. Such poetry is significantly more spontaneous than the structured and objective work of the Neoclassics.
The Significance of Nature
Among the best-known features of Romantic poetry is its use of nature. Romantic poetry is, in fact, often called "nature poetry." Elements of this feature are the detailed images and descriptions inherent to Romantic poetry. Writers of this period were quite attentive to detail; it fills their poetry with exact objects. Many readers appreciate "nature poetry" for this quality alone; however, the use of nature often has much more significance. According to Wordsworth, nature could be enjoyed for its own sake; yet, it frequently provides a means to an end rather than an end in itself. The ultimate end is some new perception. Nature serves as a starting point in nearly all Romantic poetry. Generally, some scene or event in nature triggers the thoughts of the poet; in nature, he sees some emotional conflict relevant to his own life. Ultimately, the thoughts of man as well as nature itself constitute the subject of the poem. Nature is often a vehicle for the true subject.
The Use of the Commonplace
The Romantic interest in simplicity and naturalness manifested itself in a belief that the commonplace and humble life were suitable subjects for poetry. In his Preface, Wordsworth explained that the principal purpose in writing the poems in Lyrical Ballads was "to choose incidents and situations from common life, and to relate or describe them . . . in a selection of language really used by men . . . " A popular Romantic theory was that those who lived close to nature were more innocent than the people who were corrupted by society. This concept of the noble savage goes back to the Renaissance and appears in nearly all Romantic-type writing.
The concept of the innocence of life untainted by society has been greatly criticized for its naiveté; even some of the Romantics refused to use rustic people and settings as subjects. Wordsworth's intention, however, was not simply to present commonplace subjects exactly as they are found. The preceding statement from his Preface continues to say, ". . . at the same time, to throw over them a certain coloring of imagination, whereby ordinary things should be presented to the mind in an unusual aspect . . ." The purpose of using the commonplace in poetry was to add freshness, to show that even in the most trivial elements of life, the human mind and heart can experience wonder.
The Use of the Supernatural
At the other end of the spectrum, the Romantics also made use of the supernatural. The poet would use unnatural, unfamiliar events—often from the medieval past—to create a sense of wonder. At the same time, some poets explored those unusual experiences—such as mesmerism—that others considered foolish or trivial. Those who used such subject matter responded to the social situation in which new possibilities were present. They viewed nothing as unworthy of exploration. Sometimes this proved unfortunate, yet at other times, it may have been beneficial. Whatever the result, the goal was to achieve a sense of newness and wonder in poetry.
The variety of Victorian thought and lifestyles rests primarily on the dramatic contrast between the period's welcomed prosperity and undesirable poverty. In 1851, England hosted the "Great Exhibition of the Works of Industry of all Nations," a magnificent tribute to the progress and prosperity resulting from the Industrial Revolution. Approximately six million people from England and the rest of the world attended the splendid exhibition in London's Crystal Palace, an architectural giant of iron and glass built for the occasion. A contrasting attitude toward industrialism was presented by Charles Dickens in his 1854 publication of Hard Times, a novel that condemns the undeniable evils wrought by industrial progress. Dickens described a town that symbolized the effects of industry:
It was a town of machinery and chimneys, out of which interminable serpents of smoke trailed themselves for ever and ever, and never got uncoiled. It had a black canal in it, and a river that ran purple with ill-smelling dye, and vast piles of buildings full of windows where there was a rattling and trembling all day long . . . [The town was] inhabited by people . . . who all went in and out at the same hours, with the same sound upon the same pavements, to do the same work, and to whom every day was the same as yesterday and tomorrow.
Revolutionized by the use of new kinds of machinery and by advanced means of operating it, industry provided a previously unparalleled source of material prosperity for England's rising middle class in the first half of the Victorian Age. The leaders of industry—owners of the factories and mines—luxuriated in the fine lifestyle that their money could buy. At the same time, this powerful middle class established strict standards of what was "proper and right." Victorian conduct was based on a confident and seemingly inflexible adherence to the virtues of self-reliance, industriousness, temperance, propriety, and moral sincerity. In brief, the major Victorian principle was a practical one: hard, honest work was an honorable means of success.
Not only did individuals experience prosperity during the Victorian Age; the country itself was in a comfortable position. During the nineteenth century, England enjoyed long periods of international peace and established itself as one of the greatest commercial empires in the modern world. In that century of expansion, England gained control of the Suez Canal and either acquired or developed territories in Egypt, India, Australia, Canada, New Zealand, and South Africa.
One of the major contributions to the Victorian "golden age" came from the field of science. Technological inventions and intellectual developments revolutionized not only industry but also medicine, communication, transportation, and traditional philosophies and beliefs. Some scientific theories led to confusion for the common people. Many sincere people believed that they had only three alternatives: to live in doubt, to reject science completely, or to abandon their faith completely. This dilemma initiated the sense of doubt, not only of religion but of many traditional beliefs and values, that characterized the second half of the Victorian Age.
The first half of the Victorian Age, however, knew no such doubt. It was an era of progress and prosperity and of confidence and optimism.
Not all of Victorian England's citizens enjoyed the benefits of progress; indeed, some keenly felt its cruel sting. The poor barely survived. The working conditions and the living conditions of the laborers were deplorably inhumane and presented a sharp contrast with the material well-being of the middle class. Reforms came very slowly.
The social and economic ills of Victorian England were rooted in a kind of political poverty, or deprivation. Denied the right to vote, the lower classes had no effective means of helping themselves and had to rely on the good intentions and actions of groups and writers that advocated reform. The Victorian era is characterized by harsh political competition between the liberal Whig Party and the conservative Tory Party, each seeking credit for improving the social, economic, and political conditions of all English citizens. Finally, Parliament extended the franchise to urban laborers in 1867 and to agricultural workers in 1885. Because of these acts and subsequent political reforms, England was well established as a modern democracy by the beginning of the second decade of the twentieth century.
Perhaps the most destructive poverty that England experienced in the nineteenth century was not material, but spiritual. Thomas Carlyle, a respected essayist and critic of the early Victorian era, had warned of the devastating effects of the Industrial Revolution: "Not the external and physical alone is now managed by machinery, but the internal and spiritual also . . . Men are grown mechanical in head and in heart, as well as in hand."
The Victorian Age of variety and contrasts demanded prudence—the ability to judge soundly and to act sensibly—of the individual who desired to succeed personally and publicly. Certainly, England had a model of such prudence in the person of her queen, Victoria. Succeeding to the throne when she was only eighteen, Victoria acknowledged her youth and inexperience but expressed confidence that "few have more real good will and more real desire to do what is fit and right than I have." Indeed, in her sixty-four-year reign, she illustrated that her confidence had been well founded.
As a child, Victoria had been imbued with the virtues of proper social conduct. As a woman, she practiced these virtues in her court and in her home. A capable sovereign and a dedicated wife and mother of nine children, Victoria deserved and received the respect, love, and loyalty of her subjects.
The Victorian writers were contemporary; they wrote about the current social problems and philosophies that marked the complex variety of the age. In general, the Victorian writers looked to the present, rather than the past, and sought ways to comment on it or to improve it. Thus, the typical Victorian writer was realistic, objective, didactic, moralistic, and purposeful.
The Romantic Age was primarily an age of poetry; the Victorian Age was primarily an age of prose. Although you will not study Victorian prose in this unit, you should become familiar with its types and significant authors.
Victorian essays offer serious critical evaluations of the period. Although the major essayists—Thomas Carlyle, Thomas Babington Macaulay, Matthew Arnold, John Henry Newman, and John Ruskin—differed in personalities, backgrounds, subjects, and styles, they were similar in their goal: to analyze and help solve the problems of their day.
The dominant literary form of the Victorian Age is usually considered the novel. Public libraries, improved and cheaper-to-run printing presses, and novels published in inexpensive monthly paperbacks or by installments in magazines provided the eager public with easy access to the talents of the Victorian novelists.
Although Victorian novelists offered a wide variety of subjects and attitudes, many of them concentrated on the issues of the nineteenth century. Chief among those who exposed and protested present problems was Charles Dickens. To suppose, however, that Dickens was simply a contemporary protester would be a mistake. An imaginative genius, Dickens created versatile characters and stories that attracted universal concern and interest. Your education should include the reading of such Dickens classics as Oliver Twist, David Copperfield, Great Expectations, and A Tale of Two Cities to become acquainted with some of the most colorful characters in English literature.
William Makepeace Thackeray was a novelist who shared Dickens's purpose but not his approach. Thackeray's subjects, unlike Dickens's, were drawn from the more advantaged middle class. His intent, however, was to expose the inequalities and hypocrisies of his society. One of his best-known works is Vanity Fair, a Novel without a Hero (not to be confused with the "Vanity Fair" of Bunyan's Pilgrim's Progress).
Another novelist who discussed contemporary social and moral issues was Mary Ann Evans. Writing and publishing under the pen name of George Eliot, she is best known for the novels Adam Bede, The Mill on the Floss, and Silas Marner.
Eliot was not the only woman novelist of the period. All three Brontë sisters—Charlotte, Anne, and Emily—wrote novels that were widely read. Charlotte (author of Jane Eyre) and Emily (author of Wuthering Heights) are best known for their depiction of English rural life. Anne, the youngest, wrote poetry and two novels, Agnes Grey and The Tenant of Wildfell Hall. Relying heavily on personal experience and emotion, the Brontës created novels that are exciting, touching, and imaginative.
Certainly, the Victorian novel is the genre that most dramatically illustrates the variety—of characters, settings, narratives, and authors themselves—characteristic of the age.
That the Victorian Age is often considered the age of the novel does not minimize the era's poetic contribution. The Victorian poets more than adequately compensated for their relatively small quantity with superb quality.
To cope with the contrast between progress and prosperity on the one hand and social and spiritual questions on the other, poets of the Victorian Age frequently experienced what today is often called an "identity problem." Indeed, the position and role of the poet in society was one of the dominant themes in Victorian poetry. In an age that tended to evaluate according to contributions made toward progress or solutions, the poet needed to have a purpose or a function. Sometimes he was viewed as a prophet who was to project the present into a balanced perspective with the future. At other times, the poet was viewed as a teacher who was to instruct his readers in proper behavior and social values. The roles of the poet were demanding and sometimes frustrating. Frequently the poet, concerned about his roles but not completely confident that he could fulfill them, felt isolated from the society he was supposed to help improve.
Victorian poetry exhibits a variety of techniques and themes. The age is characterized by the development and use of symbols, by the growth of the dramatic monologue, by an interest in tragedy, and by a continuation of the Romantic interest in nature and the medieval. Of course, Victorian poetry depends primarily on themes realistically related to contemporary subjects: science, technology, religion, and reform.
Romantic Poets: Wordsworth (1770-1850)
William Wordsworth was the most influential of the Romantic poets. His Preface to Lyrical Ballads, as you have seen, provides the basic statement of the Romantic Movement.
William Wordsworth was born April 7, 1770, in West Cumberland, the second of five children. After the death of his mother, Wordsworth, at the age of eight, was sent to Hawkshead School in England's picturesque Lake District, an area known for its splendid landscape. Here Wordsworth acquired the deep appreciation for nature that was to remain with him and sustain him for the remainder of his life.
In 1787, Wordsworth went to St. John's College, Cambridge, but because he was not interested in the studies there, he received a degree without honor in 1791. He then traveled to France where he became committed to the revolutionary cause, but he returned to England when his funds were depleted.
Wordsworth lived with his sister Dorothy for several years, during which time he met Coleridge. Wordsworth married in 1802 and had five children. In 1813, he was appointed a stamp-distributor, a position that helped solve his financial problems. In 1843, he was made poet laureate of England. In his later years, Wordsworth grew disillusioned with his Romantic ideals and became a part of the very society he and other Romantics had criticized. Wordsworth died in 1850 at the age of eighty.
Wordsworth's poetic career is divided into two periods. The first period, the years prior to 1807, was the more productive and successful and resulted in the poetry for which Wordsworth is best known.
Though technically of this first period, Wordsworth's earliest work, Descriptive Sketches, published in 1793, is conventional and exhibits little of the talent Wordsworth was to develop. Not until 1798 and the publication of Lyrical Ballads was the poetry typical of Wordsworth evident. This volume of poetry, published in three editions, and his 1807 publication of Poems in Two Volumes comprise Wordsworth's best writing. During this first period of his career, Wordsworth experienced a deep commitment to Romantic thought, to the new poetry, and to the new potential of mankind. The poetry was original and enthusiastic.
By the later part of his career, the poet had grown disillusioned with the values and hopes of Romanticism. He was beset by doubts of the validity of his early beliefs and poems. Many things contributed to this disillusionment: the failure of the French Revolution; his inability to experience the passion of youth; the death of his brother; his alienation from Coleridge; and the inevitable acceptance of life's realities that comes with age. Some poetry of this period exhibits the intense feelings typical of the earlier poetry; however, much of it either questions his changing attitudes or supports the kind of conventional behavior and thought that characterized his personal life in his later years. Although the excitement of youth is gone in this second period, many readers find satisfaction in its writings. As a Romantic poet, however, Wordsworth was more successful in his early poetry.
Lines Composed a Few Miles above Tintern Abbey
This poem, usually referred to as "Tintern Abbey," is probably Wordsworth's best-known work. The abbey referred to in the title, though not mentioned in the poem, is the ruin of the beautiful, medieval Tintern Abbey located in a valley at the edge of the Wye River in southeastern Wales. Published in 1798 in Lyrical Ballads, "Tintern Abbey" tells of the poet's thoughts upon a return visit to the Wye valley, an area he had first visited five years earlier. The poem examines Wordsworth's concepts of nature, of man's progressively changing relationship with nature, and of the function of imagination in perception. Read the poem the first time to understand the literal content, the events and ideas discussed in the poem. At the same time, pay close attention to the beauty of language and descriptions. Read the poem a second time to look for presentations of Wordsworth's Romantic concepts.
William Wordsworth: Other Poems
Although Wordsworth is perhaps most remembered for his long, meditative poems, such as "Tintern Abbey," he also proved to be a master in adapting to the disciplined rhyme and rhythm patterns of the sonnet form. The first sonnet you will read, "London, 1802," is addressed to the poet John Milton, whom Wordsworth greatly admired. In another of his sonnets, "It is a Beauteous Evening, Calm and Free," Wordsworth speaks to his own daughter, Caroline.
Romantic Poets: Coleridge (1772-1834)
Coleridge was born in 1772, the son of a clergyman and the youngest of fourteen children. As a child, Coleridge preferred solitude and spent most of his time reading. When his father died in 1781, Coleridge went to a school that gave free education to orphans. He then attended Jesus College where he performed well until he became interested in politics. When his academic work suffered and he became deeply in debt, Coleridge quit his education and joined the cavalry in 1794. Unable to adjust to military discipline, Coleridge, with help from his brothers, soon received a discharge and returned to the university. Coleridge, however, never graduated.
Coleridge met William Wordsworth in 1795. The two began an exchange of letters and soon formed the close friendship that generated the tremendous ideas and enthusiasm and led to the publication of Lyrical Ballads. Each poet gave something to the other. Coleridge's intense powers of imagination and his genuine admiration of Wordsworth encouraged and enhanced Wordsworth's creativity. Wordsworth's theories of poetry, his view of nature, and his self-discipline encouraged and inspired Coleridge to write his most appreciated poetry.
Unfortunately, this period of productivity and well-being did not continue for long. In 1810, the two poets quarreled and, despite the reconciliation they later achieved, the quality of their earlier friendship was never regained. In addition, Coleridge fell into ill health and began taking a drug to ease his pain. The dangers of the drug were not understood by the doctors of the period, who often prescribed it as medicine, and Coleridge became addicted to it. As a result, Coleridge could no longer create the beautiful poetry of which he had been capable. He continued to suffer pain and the despair of drug addiction, growing disgusted with himself for his inability to break the habit. Although he gave lectures on literary topics, he was never to regain his health or exercise fully his poetic abilities, but he retained sparks of his intellectual genius until his death in 1834.
Though Wordsworth and Coleridge were both Romantic poets and shared many beliefs and ideas, their poetry is significantly different. Coleridge's own poetry is varied. His best-known works are those of mystery and magic, but he also wrote many blank verse poems and more traditional odes. Wordsworth sought to take the ordinary aspects of life and present them as extraordinary, but Coleridge sought to take the extraordinary and illustrate what an ordinary reaction would be. Despite the difference in their approaches, however, both men intended to depict the magic or wonder of life beyond the common, physical world. Each sought to break away from the extremes of logic and reason that dominated eighteenth-century thought and to explore the world of the imagination.
The two men also differed in the way they worked. Wordsworth was a more disciplined man who generally followed his own prescription for writing poetry. He experienced a thought, meditated upon it, and then carefully and thoroughly recreated it on paper in the form of a poem. Coleridge lacked such discipline and persistence. Although he had great plans and ideas, he often failed to carry them to completion. Much of his poetry is unfinished. He worked in spurts of energy and creativity; many of the poems he completed were written in single sittings. His works reflect little planning and frequently contain brilliant passages in conjunction with more ordinary poetry.
Coleridge often wrote "conversational poems." These poems were a combination of descriptive and meditative verse in which a speaker reflects upon some past event, sometimes describing the event and sometimes meditating upon its significance. The descriptions are quite specific and precise, unlike Wordsworth's general descriptions. These poems display the Romantic theory in action: they are spontaneous statements of emotions and reflections.
Coleridge presented his own theories of poetry in Biographia Literaria, his well-known work of literary criticism. He wrote that a poem should require the total activity of all functions of the mind (sensation, intellect, and emotion). Like Wordsworth, Coleridge expected the poem to assimilate or to synthesize, rather than to analyze. They both believed that the imagination requires one to know and understand an idea as a whole, not to analyze its parts.
Coleridge was not a prolific poet. Three of his more famous poems, which are frequently read today, are "Kubla Khan," "Christabel," and "The Rime of the Ancient Mariner." Both "Christabel" and "The Rime of the Ancient Mariner" are written in ballad form and deal with supernatural elements. "Christabel" is a medieval tale of witchcraft in which the conflict between good and evil is personified by the innocent and pure Christabel and the wicked enchantress, Geraldine. "The Rime of the Ancient Mariner" tells of the supernatural punishment and penance of an ancient mariner—an old sailor—who senselessly killed an albatross—a sea bird of good omen. You will soon read "Kubla Khan" in this section.
Although the quantity of Coleridge's poetry is small, the quality of his imagery, word choices, and rhythms has been equaled by few other English poets.
One summer day in 1797, Coleridge read Samuel Purchas's Pilgrimage, a seventeenth-century book about Englishmen traveling in foreign lands. He had just read about the beautiful palace and surrounding landscape of Kubla Khan, the founder of the Mongol dynasty of China, when he fell into a deep sleep. As he slept, vivid and splendid images of this oriental scene filled his mind. When he awoke, he began immediately to write down the poem he had composed in his dream. He had completed only 54 lines, however, when he was interrupted by a visitor. He was never able to remember the remainder of his magnificent vision. To enjoy the poem most, you should read it at least twice: once silently, once aloud.
Romantic Poets: Byron (1788-1824)
Byron was born in 1788. His life began inauspiciously. His father was known for his questionable behavior; his mother was emotionally unstable, given to moodiness and hysterical fits. Byron himself was born with a deformed leg. When Byron was three years old, his father died and his mother was left to raise him in poverty. Life began to improve for Byron, however, at the age of ten when he inherited his great uncle's title—Lord Byron—and his estate. Byron subsequently enrolled in a good school and began to make friends among the aristocracy. Byron later attended Trinity College where he began to write poetry for publication. From 1809 to 1811, Byron went on the tour of Europe and Asia that he wrote about in Childe Harold's Pilgrimage.
Lord Byron lived a reckless life. He almost seemed to work to acquire a bad reputation; he was known for various antics and incidents of questionable behavior. Byron got married, but the marriage was unhappy and the couple separated. The public reaction to this separation was such vicious gossip that Byron, bitter about what he felt was the hypocrisy of society, left England and never returned. The poet lived in self-imposed exile in Switzerland for a time, then in Italy. In 1823, Byron joined a committee to aid the revolution that had broken out in Greece. While in Greece, Byron contracted a fever and died.
In many ways, Byron was an atypical Romantic. In him was a sense of satire and a cynicism not found in Romanticism. Also evident in Byron's work was a sense of humor. Other Romantics took their work very seriously, but Byron thought it foolish to take work so seriously. Even when he himself thought seriously, he could laugh at his possible excessive concern. Many critics believe that Byron's satirical works, which are not Romantic, are his greatest contribution to literature.
Byron, however, was a Romantic. He insisted on the freedom of the individual, he used himself as a subject of poetry, he loved nature and adventure, and he relied on emotional appeal.
Byron's most notable contribution to literature, however, remains the Byronic hero, a larger-than-life individual who is probably a reflection of Byron himself. All of his qualities are more intense than those of the average person: he suffers more, he feels more strongly, and he is capable of more heroic deeds. He is so powerful that only in nature can his abilities be equaled; only there can the same intensity of feeling be found. This greatness isolates the Byronic hero from society and causes him to rebel against it, for he sees its weaknesses and injustice.
He is proud and willful and cynical; but beneath that pride and cynicism is melancholy, sadness, because the Byronic hero does not consider himself innocent. He broods over some unidentified and seemingly unforgivable infraction in his past. Considering the world a "place of agony and strife" where he must "suffer" for this sin, the defiant and tormented hero turns in upon himself and gives way to moodiness and melancholy as he wanders from place to place in an attempt to escape himself and the world. This hero, however, takes a masochistic delight in his suffering. Thus, the Byronic hero is pleasantly, rather than desperately, miserable.
Childe Harold's Pilgrimage
Byron wrote this panoramic poem over a nine-year period, from 1809 to 1817. When the first two cantos were published in 1812, they were an immediate success and established Byron's fame. The poem, which tells of Childe Harold's journey through Europe and Asia Minor, can be enjoyed simply as a picturesque travelogue. The character of Childe Harold, however, makes the poem more exciting. Childe Harold is bored with the meaningless pleasures of this world and makes his journey to escape that boredom and meaningless existence.
The poem is long; you will study only part of the third canto. In the first two cantos, Byron wrote of Childe Harold without asserting his own identity. Canto III was published in 1816. By this time, Byron's marriage had ended, and the self-imposed exile spoken of in the poem applied to the poet as well. In Canto III, Childe Harold and Byron were separate characters, but shared many traits. Byron's readers automatically identified the hero with the poet.
Romantic Poets: Shelley (1792-1822)
Shelley was born on August 4, 1792, the oldest of seven children. His family came from a long line of country gentlemen, and Shelley experienced no lack of material goods as a child. He was not, however, the conservative, conforming youth one might expect from such a family. Indeed, he was so self-willed and rebellious that he earned the nickname "mad Shelley." Resentful of authority and indifferent to tradition, he was a problem to fellow students and to teachers at Eton, an exclusive boys' school. Later, he was expelled from Oxford University for publishing a pamphlet on atheism. Shelley married Harriet Westbrook. After her death, he married Mary Godwin, daughter of William Godwin, a political and social reformer. Shelley left England in 1818 and, like Byron, never returned. He and Mary lived in Italy until 1822, when Shelley and a friend drowned after their boat sank.
Shelley wrote his best poetry during his four years in Italy. The themes of his poetry mirror the themes of his life: love of freedom, idealism, spirit of protest, and love of beauty and goodness. Shelley, the idealist, unlike Byron, the realist, did more than protest and lament life's injustices. Shelley offered a hopeful solution: the power of love. To Shelley, love was the source of goodness and truth, the cure for mankind's ills. His greatest work, Prometheus Unbound, depicts a society in which the forces of evil are defeated and love establishes a golden age of beauty, peace, and goodness.
Shelley's themes are immortalized by the forms of his poetry. His lyrics are characterized by rich, imaginative power, spontaneous melody that reflects the intended mood, and beautiful language. Most of his poetry is intensely subjective.
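Ode to the West Wind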
The inspiration for this poem was an autumn storm that Shelley had witnessed within a wood that skirts the Arno River near Florence, Italy. According to Shelley, the wind was "tempestuous" and the rains began at sunset "with a violent tempest of hail and rain, attended by that magnificent thunder and lightning peculiar to the . . . regions."
This poem is divided into sections, each section consisting of four three-line stanzas followed by a couplet.
Romantic Poets: Keats (1795-1821)
John Keats, the oldest child of a livery-stable keeper, was born on October 31, 1795, in London. When Keats was eight years old, his father died in a riding accident. When he was fourteen, his mother died of tuberculosis, leaving Keats and his brothers and sister in the care of their grandmother. In school, Keats was not particularly interested in books until his last year when an assistant at the school began to encourage him. Shortly after Keats's interest in learning developed, however, he was removed from school by the guardian his grandmother had appointed for the children. The guardian apprenticed Keats to a surgeon. Keats continued to study at night with the school assistant and in that way learned of the great writers of the past.
Even after his apprenticeship and while he was a medical student, Keats wrote poetry. He published his first volume at the age of twenty-two. Eventually he gave up medicine for a career in literature. When Keats's brother George moved to America, Keats replaced him in caring for another brother, Tom, who had tuberculosis.
The year 1818 marks the beginning of the most personally traumatic but poetically creative period of Keats's life. A severe sore throat forced Keats to cut short a walking tour of northern England and Scotland. Returning to England, Keats nursed his brother Tom until Tom's death in early December, 1818. By that time, doctors had confirmed that Keats himself also had tuberculosis. Keats had met and fallen in love with Fanny Brawne in September, and the two became engaged at Christmas time. Keats's increasing ill health, however, was a significant factor in preventing the intended marriage.
On his doctor's advice, Keats moved to the warmer climate of Italy in the early fall of 1820. He died in Rome on February 23, 1821.
In many respects, Keats's poetry is unlike that of other Romantic poets. Although aware of the social, economic, and political problems of his time, Keats did not use his poetry as a platform for bemoaning society's ills or advocating reform or revolution. Likewise, Keats did not allow himself to give free rein to his emotions. Indeed, he strove to achieve an emotional restraint and imaginative discipline. Few of Keats's poems focus on the imposing "I" frequently found in Shelley's poetry. Unlike Wordsworth, Keats made no attempt to communicate a philosophy concerning nature and its significance.
In truth, Keats's poetry reflected his life pattern of avoiding dogmatic opinions. Keats's mind was not one that decided on an ultimate truth and rejected other possibilities. Rather he perceived that truth and answers may not be final in some instances and therefore must always remain open to examination.
The one thing of which Keats was certain, however, was beauty—the existence and innate value of beauty. His poetry is primarily a magnificent celebration of beauty wherever it may be found—the physical universe, the charm of medieval days, the culture of ancient Greece. Through his exquisite and ingenious use of imagery, Keats recreated with freshness and excitement the sights, sounds, odors, and textures of the beauty he observed. His poetry verifies his own belief: "A thing of beauty is a joy forever" (the first line of his poem Endymion).
Complementing Keats's poetic ability both to perceive and to provide beauty is his capacity for a kind of imaginative sympathy. Through his imagination, Keats could share in the experiences of others. He usually used this capacity with regard to other people; but, when appropriate, he applied it to animals and objects. One may, probably justifiably, theorize that Keats's own experience with pain and suffering enhanced his sense of empathy.
On First Looking into Chapman's Homer
Sonnets are fourteen-line lyric poems that typically deal with the subject of love. In English, there are two types of sonnets. The first is the Petrarchan, or Italian, sonnet, which consists of an octave (eight lines) and a sestet (six lines). The second, popularized by William Shakespeare, is called the Shakespearean sonnet. It is divided into four stanzas: three stanzas of four lines each and a final stanza of two lines, called a couplet. Keats is famous for his sonnet "Chapman's Homer."
Keats was first introduced to the Greek classics when his friend and former teacher, Charles Cowden Clarke, gave him a copy of George Chapman's animated translation of Homer's Iliad, the famous poetic presentation of the legendary story about the decade-long war between the Greeks and the Trojans. The translation so thrilled Keats that he read through the night to finish it. By midmorning of the next day, Clarke received a "thank-you note" in the form of the poem "On First Looking into Chapman's Homer," one of the greatest sonnets in the English language.
A sonnet is typically a love poem. In this case, Keats's love poem is not written to another person; rather, it is written about a piece of literature by which he was deeply moved. Can you identify the problem or situation presented in the octave? What is Keats's resolution to the problem?
When I Have Fears
Keats wrote this sonnet in January 1818. It is one of his most personal poems and one of the few in which he speaks in the first person, "I."
Ode on a Grecian Urn
Inspired by his viewing of classical Greek sculptures in the British Museum, Keats wrote this poem to re-create and praise the enduring beauty of the scenes encircling the exterior of an ancient Greek vase. The ode amplifies the poet's belief that "a thing of beauty is a joy forever."
Victorian Poets: Tennyson (1809-1892)
Tennyson was born in 1809 in northern England. His father was a rector who was subject to fits of depression and violence. All twelve of his children, though talented, shared this tendency toward despondency, including Alfred, who, though better able to control it, never escaped the feeling. Having access to his father's large library, Tennyson read widely and began writing poetry at an early age. In 1827, Tennyson went to Trinity College, Cambridge. There he became acquainted with Romantic poetry. Though he admired it, he could not fully accept its emphasis on the individual; his primary concern was with the problems of his society. In 1831, Tennyson left school for family and financial reasons but continued his studies at home.
The next twenty years were difficult ones for Tennyson. His works were not particularly successful, and he had little money. In fact, though he became engaged to Emily Sellwood in 1836, financial circumstances and personal responsibilities would not permit him to marry her until 1850. Most significantly, Tennyson's beloved friend, Arthur Hallam, died in 1833. The unexpected death of this young man plunged Tennyson into years of grief and the questioning that often accompanies the bewilderment created by death, especially the sudden death of a young person. Finally, in 1850, Tennyson published "In Memoriam," a lengthy elegy for Hallam in which Tennyson indicates the typical Victorian struggle between faith and doubt. An impressive success, the poem was perhaps a significant cause for Tennyson's succeeding Wordsworth in the position of poet laureate. From 1850 on, Tennyson enjoyed both personal and professional success until his death in 1892.
Among the most characteristic traits of Tennyson's poetry is his ambivalence. The Victorian conflicts between religious faith and scientific theories, between progress and the benefits of the past, between social morality and artistic creation, and between one's desires and one's duties all appear in Tennyson's work. Yet unlike Keats, who enjoyed the process of considering alternate answers, Tennyson agonized over his inability to resolve conflicts.
The function of art is also important in Tennyson's works. Frequently Tennyson wrote didactic poetry in which he saw the poet as a teacher whose duty was to instruct society according to his enlightened view. Tennyson's understanding of the role of the poet resembles Shelley's, who also thought the poet had a vision of the ideal that he should communicate to others.
At times, however, Tennyson wished to avoid social duties and to pursue the aesthetic, spiritual aspect of poetry—such a wish was often a source of frustration for a poet of the practical and purposeful Victorian era.
Tennyson was traditional in his choice of subjects and poetic forms and techniques. His poetry always speaks to his times even though his subjects may be the medieval King Arthur, the ancient Greek hero Ulysses, or a reflection on nature. Serious and morally earnest as well as creative and sensitive, Tennyson wrote poetry resplendent with noble thought, beautiful imagery, and exquisite lyrics.
Victorian Poets: R. and E.B. Browning
Robert Browning was born in 1812 of well-to-do parents. His father was well educated and had a large library. The household was one of great love, a fact that apparently affected Browning positively throughout his life. Browning remained in his parents' home until he was thirty-four; then he married Elizabeth Barrett, an invalid who was a popular poet of the time.
Despite a fragmented education, Browning started to write poetry when he was quite young. His poetic ability developed slowly; consequently, it was not until 1842, when he was thirty, that Browning discovered a type of poetry that suited him. The combination of successful poetry and his marriage to Elizabeth made him a contented man. Primarily because of Elizabeth's ill health, the couple moved to Italy after their marriage where they remained until Elizabeth died in 1861. Browning continued to write until his death in 1889.
Early in his poetic career, Browning was an admirer of Shelley. In fact, his first work, Pauline, was consciously modeled after Shelley. When John Stuart Mill, one of the foremost critics and essayists in all of English literature, accused Browning of self-worship in the poem, Browning was so humiliated that he vowed to avoid the subjectivity typical of Shelley from then on.
Ultimately, Browning's rejection of subjectivity led to his development of the dramatic monologue. By definition, a monologue depends on one, and only one, speaker. In the dramatic monologue, the reader overhears one character speaking to one (or more) other characters; the listener never speaks, though his personality may become quite clear. The poet himself does not interject explanations or comments, but relies entirely on the speaker's words. Browning's masterful use of the dramatic monologue permitted his characters to reveal their own innermost thoughts and feelings as well as their often less than admirable character traits.
In addition to his perfecting the dramatic monologue, Browning is noted for his reliance on psychology, a developing science in the nineteenth century, in his poetry. Intrigued with the human mind and personality, he created poetry that often provides brilliant insights into human character and motivation.
Although Browning rejected the subjectivity of Shelley, he retained an idealism and a sense of striving for goals. Somewhat ironically, these romantic traits helped create in Browning's poetry his most typically Victorian characteristics: energy and vitality. Like the Victorians of the first half of the period, Browning optimistically believed that his goals could be reached, and he worked toward them with joy and enthusiasm. Nor did his idealism blind Browning to the harsh realities so evident in Victorian society. In his portraits of people, he vividly exhibited the evil of which men are capable; he never glossed over the unpleasant elements of life. His joy, optimism, and faith always sustained him.
My Last Duchess
"My Last Duchess" first appeared in 1842 in Dramatic Lyrics. The speaker, one of the main characters in the poem, is an Italian Renaissance Duke who addresses a second character, a listener who is present, but who does not talk. As in all dramatic monologues, the entire story is revealed through the speaker. In interpreting the poem, you must analyze the speaker's words to determine what they reveal about his own character as well as the character of those to whomo and about whom he speaks.
Reading Fiction and Poetry
It is often said that art imitates life. The world of fiction and poetry is a reflection of the world in which we live. Although the events in a short story or a novel are not true in the sense that they actually happened, they are patterned after the things that the author knows to be true about life and human nature. Literary characters behave as real people might behave in their situation. Poetry, too, reflects the world as the poet sees it; it stirs in the reader emotions similar to those the poet feels, and it paints with words the vision that the poet sees.
By reading fiction and poetry, a student increases his understanding of people both in his own time and culture and in societies and times other than his own. Stories and poems written by other students can often provide as many insights as the works of professional writers. Each person has something to say. Each person has a view of man, a sense of right and wrong, and a conception of the meaning of life.
Viewed from different perspectives, the experience accumulated by every person can form the basis for many stories and many poems. Creative writing is more than a constructive outlet for your emotions and a source of entertainment for your reader. Equally important, creative writing exercises your skill with words and provides you with an insight into the fiction and poetry of professional writers that you could gain in no other way.
In this unit, you will study the elements of the short story and of poetry as they relate respectively to a particular story and to specific poems. In subsequent lessons, you will study the techniques for writing both short stories and poetry. You will learn to write descriptively, to develop an ear for dialogue, and to connect the separate elements of your narrative into a story that is more than the sum of its parts. You will also gain the skills needed to link the sound of words with the images that they project into the reader's mind. You will write both a short story and a poem.
Although a distinction can be made between prose and poetry somewhat like the difference between speech and song, fiction is more closely related to poetry than to prose forms such as the essay and the report. Both fiction and poetry are emotional experiences; both are products of the imagination. Although the concern of the fiction writer or the poet is not with fact, his work reflects his experience. Each life is unique, yet each repeats the timeless pattern of birth, love, and death. Each person experiences the same needs for food, for warmth, and for companionship. Such universal experiences—experiences common to all people—form the basis of both fiction and poetry.
Fiction is a prose account of significant events in the lives of imaginary characters, human or nonhuman. The two most common forms of fiction are the novel and the short story. In a novel, the narrative involves a series of incidents that may affect the lives of many characters. In a short story, the focus is on one central person and one major situation. Plot, characterization, setting, and theme are concentrated into a few pages.
Poetry, like fiction, may tell a story; poetry also may simply describe or react to a single experience. Poetry has been written about human relationships, about love for other people, about death, about the smell of a rose, about the song of a nightingale, and so forth. Poetry can be written about almost any subject, but it is always an intense emotional reaction demanding an equally intense emotional response from the reader. Poetry can be humorous, but the humor is a response to the situation.
This section is concerned with the elements that distinguish the short story and the poem from other forms of literature. Your study will help you to appreciate works that are your literary heritage, and it will provide a foundation for the technical writing skills that you will learn in the later sections of this unit.
Short Story Fundamentals
Fiction involves relationships. What happens to the people in a story is less important than their reaction to the events and the effect of this reaction on their relationships with one another. Whether the event is a natural disaster or a move from the farm to a city, its significance is in its effect on the people and their effect on one another.
The most important element of a literary short story is its characters. Something must happen to the characters; they cannot be exactly the same people at the beginning of the story as at the end of it. A character sketch is not a story. Since events do not occur in a vacuum and since the society in which people live governs their lives to a remarkable extent, the story must have a background, or setting. This setting helps the reader to visualize the characters and their actions.
A good story contains a protagonist and an antagonist. The protagonist is the main character in the story. Typically, he is the "hero" because the audience is able to identify with him and wants to read more to find out what happens to him. The antagonist is in direct opposition to the protagonist. Typically, the antagonist is the "villain." A round character (also called a dynamic character) is fully developed and complex. He is three-dimensional, like a real person with hopes, dreams, motives, and flaws. A flat character (also called a static character) is not as developed. He is usually one-dimensional and defined by one or two traits.
Even though most literary fiction is character-driven, plot still has a major effect on whether or not the story is successful. Plot is the sequence of events that arises when characters are introduced to and forced to overcome conflict. You may remember that the plot can be charted by four elements: exposition, rising action, climax, and denouement. The exposition introduces the characters and setting as well as any background information needed to understand the story. The rising action builds as the conflict—or crisis—is introduced. The climax occurs when the story builds to its most exciting or dramatic moment. The denouement refers to resolution of the conflict or crisis.
Poetry Fundamentals
The appearance of poetry on a page and its sound as it is read aloud distinguish it from prose. Poetry reveals universal experience not through the actions of a character but through selected images projected by the poet into the reader's mind. Fiction requires a series of incidents to arrive at a climax, but poetry can be based on a single experience, physical or mental, real or imagined. Neither action nor dialogue is required.
Although poetry, like fiction, must have a theme, the other elements of short story writing are not requisites of poetry. Plot, character, and setting are elements of many poems, particularly ballads and epics, but none is essential to poetry.
The appeal of poetry is primarily emotional, not intellectual. A poet is not making a point so much as sharing his view of the world or of one particular aspect of life. Robert Burns might have had considerable difficulty in writing a short story about a louse crawling on a lady's bonnet, but his poem on the subject has delighted many readers. The poem succeeds because Burns knew how to make an image work for him. He created pictures in the reader's mind without diverting attention from the sound of the words. The sound, in turn, reinforces both the meaning and the tone of the poem.
Any good poet is aware as he writes of the literal meaning of his words, their connotations, the images they project, and their sound as they are read aloud. He increases the power of his words through such figures of speech as simile, metaphor, irony, hyperbole, and metonymy. A poem is typically much shorter than a work of prose, which means that a poet must compress his meaning. Each of these literary devices helps the poet to do so. Because poetry is meant to be heard by an audience, poems often use sound effects such as rhyme, meter, alliteration, assonance, consonance, and onomatopoeia to construct the desired effect.
As you know, poetry is emotional, rather than intellectual. In order to discuss a poem, several terms can be used to describe how the poem works on an emotional level. You can discuss mood or atmosphere (the feelings evoked by the work, such as sadness, loneliness, or mystery) and tone (the speaker's attitude toward the subject) as well as setting (the descriptions of the place and time in which the poem occurs). One of Robert Frost's early poems makes a good exercise: read through the poem and think about the mood that the poet creates. How do you feel as you are reading the poem? Are you sad, scared, laughing, melancholy? Think about how the poet uses various images to create these feelings. Keep in mind that imagery includes sights, sounds, smells, tastes, and tactile sensations (e.g., softness, coolness, warmth).
Imagery appeals to all the senses, particularly to sight, as word pictures are transferred from the mind of the poet to the mind of the reader through the medium of words. The reader, however, does not have to rely entirely on his imagination to appreciate poetry. Poetry is meant to be read aloud, and its sound patterns enhance the imagery and reinforce both the tone and the atmosphere of the poem. Often a word or phrase is in itself a form of aural imagery. In addition, sound patterns can tie related ideas or images to one another or can emphasize significant ideas through repetition. Repeated sound patterns distinguish poetry from prose. These patterns and their appropriateness to theme, imagery, atmosphere, and tone make good poetry memorable.
The sound patterns most commonly associated with poetry are meter and rhyme. Neither meter nor rhyme is essential, however, and free verse is not prose despite the lack of these peculiarly poetic elements.
The rhythm of metered poetry is predictable and consistent, measurable in units called feet. A foot in English poetry usually consists of one stressed, or accented, syllable and one or more unaccented syllables. A poet is not confined to the use of his dominant foot; he can, and does, alter his meter occasionally for emphasis or simply to avoid monotony.
The most common feet in English poetry are the iamb, the trochee, the anapest, and the dactyl. The iamb is an unaccented syllable followed by an accented one, and the trochee is its reverse; the anapest is two unaccented syllables followed by an accented one, and the dactyl is an accented syllable followed by two unaccented ones.
Metered poetry is frequently, but not always, rhymed. Rhymes are of two types: true rhyme and slant rhyme. The correspondence of final sounds is exact in true rhyme but not in slant rhyme. Balloon and lagoon are true rhymes. Balloon and gallon, despite the shared l sound, are not. The vowel sound of the final syllable and the position of the stressed syllable prevent the rhyme from being exact.
Rhyme occurs most frequently at the ends of lines, although words within a line may rhyme. Other techniques used to emphasize similar concepts through similar sounds are alliteration, assonance, and consonance. All three techniques are based on a correspondence of sound. Alliteration is the repetition of initial consonant sounds or consonant sounds in accented syllables. The m's in MacLeish's line, "memory by memory the mind," illustrate alliteration. (Although modern alliteration is primarily of consonant sounds, alliteration is also possible with initial vowel sounds. This technique was used in Old English verse.) Assonance is a repetition of vowel sounds within words, where the vowels are followed by different consonants. The words cat, map, and castle illustrate assonance. Consonance is the repetition of consonant sounds, especially the final sounds of accented syllables; the vowels that precede the consonants differ, such as in strong and bring. The use of assonance and/or consonance at the end of lines is usually what distinguishes slant rhyme from true rhyme.
Another sound effect is onomatopoeia, imagery for the ear. A word that imitates a sound, such as buzz or clang or whisper, is onomatopoeic.
Poets use repeated rhythm patterns to unify a free verse poem. Repetition of key words and phrases also produces a unified effect and at the same time emphasizes the words. Similarly, a poet may repeat a pattern with different words, as MacLeish did with "memory by memory" and "twig by twig."
Writing the Short Story
For many writers, the hardest part of writing a short story is thinking of an idea for one in the first place. Even professional writers have to search for ideas. Their source is their own experience; your source must also be your experience.
Your experience is not as limited as it may seem, for it involves more than events that have actually happened to you. You have watched incidents in which you did not participate; you have read; you have heard others tell about their lives; you have dreamed. All of these experiences are stored in your memory.
Memory is not infallible, however. Many writers keep a journal in which they record not only the events of the day but descriptions of the sky, notes on a gesture observed, or a conversation overheard. The first rule of creative writing is to observe, write down, and remember.
Your experience is the guidebook that tells you what is plausible and what is not. Unless you are writing fantasy, the pigs in your story will not fly and the trees will not talk. Even in fantasy, the behavior of the characters and the resolution of the conflict must be consistent with what might happen in that imaginary world.
A beginning writer will do best to write straight fiction and to avoid fantasy. He should not attempt science fiction unless he has a thorough knowledge of chemistry and physics, nor should he set his stories in exotic countries or far-off times unless he has studied his history and geography. If he has lived in France, he knows the country well enough to write about it; if he has visited it only through the National Geographic, he does not.
By the same token, the characters in a story should speak a language and dialect with which the writer is thoroughly familiar and which his readers will understand. The characters should dress and behave in ways that seem natural to the writer and that are easy for him to describe. They should not, however, be copies or caricatures of real people. The best characters are composite characters, with physical features and personality traits borrowed from many real people. The writer need not consciously search his memory to remember the appearance of a red-haired woman or the behavior of a peevish child. He has seen enough of each that his mind automatically forms a composite of them all. He can consciously assign other characteristics to make his particular woman or child an individual.
Plot, too, can be based on experience. Everyone experiences moral or ethical dilemmas, for example. Most people at some time have to cope with a crisis. Living without electricity for a night is a crisis of sorts. So is taking a test for which one has not studied. Doing anything for the first time, whether it is driving a car or going to kindergarten, involves an element of risk and the conquering of fear.
Writing from experience should not be interpreted as writing autobiography in short-story form. Fact and history are not fiction. What a writer learns from his experience is that certain patterns repeat themselves, that particular types of people are likely to hold particular attitudes, that certain situations or conversations are plausible and others are not.
Directly or indirectly, every person knows about death. Every life involves births and weddings, graduations and promotions, successes and failures. Friendship and enmity, triumph and tragedy, joy and sorrow are only part of the spectrum of experience shared by all and yet unique to each person. Each short story also must be unique and yet universal.
Fiction is not real, but it is based upon reality. Experience can be transformed into fiction by changing its aspects. Something that happened to your sister can be made to happen to a friend. Instead of being the age he is now, that friend can be made into a child. Since your friend is not childish, you can borrow some personality traits from another friend's younger brother. The setting also can be changed. With such alterations, the fictional account may become totally different from the real event.
A story can be developed from any of the four elements—plot, character, setting, theme. Some writers start with a conflict, build it into a plot, and people the plot with characters. Others choose a plot and characters to fit their theme. The method depends partly on the writer's preference and partly on the type of story that he intends to write. Action stories depend upon plot, as do mystery stories. Fiction about relationships and the effect of events upon attitudes often starts with a character. Begin with the element that is most important to your story.
To help his reader visualize the setting and characters, a short story writer must be able to write vivid descriptions. At the same time, he must move the plot forward and reveal his characters' motives and personalities, not by telling the reader but by showing him. In other words, he must demonstrate that a character is honest or untrustworthy or whatever by means of the character's own words and actions. To accomplish these tasks he must master the writing of dialogue and of narration.
In a well-written short story, the three types of writing flow into one another, and the reader does not distinguish among them. Description, dialogue, and narration must be smoothly integrated. The best preparation for writing short stories is to read fiction by masters of the craft, noting how all three types of writing are used to develop the characters, set the scene, advance the plot, or emphasize the theme. Your teacher or a librarian can recommend stories. You may also find excellent examples in literary magazines. Some authors to look for include John Updike, John Steinbeck, Stephen Vincent Benet, and Eudora Welty.
Description. A description is not a group of adjectives strung together. A well-chosen adjective is fine, but a figure of speech is often more effective. A descriptive paragraph, however, should not be overburdened with figurative language or made ridiculous by a mixed metaphor.
One of the most memorable characters in literature is Charles Dickens's Uriah Heep. Almost anyone who has read David Copperfield remembers the "cadaverous face" and "long, lank, skeleton hands," a drawn-out description that, like poetry, matches sound to sense. The phrase is an implied metaphor: Dickens could have written skeleton-like. The adjectives unshaded and unsheltered, referring to Uriah's eyes, also are implied metaphors; his eyes are empty windows without shades or canopies. The order of details is important. The reader first sees the cadaverous face, the most striking detail, then the other facial features, next the shoulders and neck, and last, the bony hand. Dickens follows his character's eye until it reaches what to him is most significant, the hand.
If a paragraph-length description is given, a writer should follow Dickens's formula: most striking detail first, most significant detail last. In a short story, however, little time or space is available, and every word must work toward advancing the plot or revealing character. A better practice is to sprinkle description among passages of narration or dialogue; this technique allows the reader to absorb a little information at a time. Recall, for example, Dickens's sentence: "I found Uriah reading a great fat book, with such demonstrative attention, that his lank forefinger followed up every line as he read, and made clammy tracks along the page (or so I fully believed) like a snail." The first part of the sentence is narrative, and the last part is a description of behavior, but the effect of the whole sentence is to reinforce the reader's mental picture of a bony, clammy hand.
A good short story writer does not describe each character as he is introduced. A brief, general description may be provided, but specific bits of information are more likely to be slipped into the narration along with the dialogue.
In addition to its obvious value in the finished story, descriptive writing has another use. A writer can use it for his own benefit before he begins his story. To be certain that he pictures the setting clearly in his mind, he can write a description of it, clarifying any fuzzy or contradictory details.
A similar method works well for characters. A writer can begin with a physical description and expand it to include details of personality and background. Such a description can include incidents of the character's past life that have significantly shaped or altered his personality. A character sketch of this type is an invaluable aid in getting to know characters that are of particular importance in a story or those who appear as mere shadows in the writer's mind. A character sketch, although no one but the writer ever sees it, is a means of making an imaginary person real. It is perhaps the best way of coming to know a character.
Before attempting a sketch of an imaginary character, practice with a real person, preferably a stranger or someone you see frequently but do not know well. Study his appearance—not only his age and physical features, but his clothing, his grooming habits, and his posture. These things, combined with mannerisms and speech habits, tell much about a person. Hands, in particular, are revealing. You should be able to guess fairly accurately the type of work a person does, his educational level, his marital status, his health, and his attitude toward appearance.
In fiction as in life, people tell one another what is happening in their lives, how they feel about particular situations, and what they hope for the future. Conversation is the primary way in which people relate to one another. Dialogue can be used to explore relationships or to advance the plot. Words speak at least as loudly as actions in revealing personality and character.
The words that a character speaks must suit not only the character's personality but his age, his interests, and his occupation. At the same time, they must resemble actual conversation. People interrupt, contradict, and correct one another. They repeat themselves or start one thought and leave it to begin another. Dialogue should resemble actual speech and at the same time be easy for a reader to follow. Crisp, short sentences are best. Questions are fine. Keep the characters' speeches short and make sure the reader knows who is talking.
Narration and Style
Style alone does not distinguish a good story from a bad one, but a bad style will ruin a good story. Unless a writer can use dialect as skillfully as Mark Twain, his first stylistic prerequisite should be the use of Standard English. The subject matter of his story will determine whether his diction is formal or informal. Lofty themes require formal language, but stories of ordinary people in usual or unusual situations can be more colloquial. Dialogue is usually less formal than the surrounding narration and description.
Use correct grammar. Exceptions can be made in dialogue spoken in casual circumstances, especially by children. If you break a rule of grammar, know why you are breaking it. Be sure, too, that the error is clearly your character's, not your own.
Use words correctly. Again, a character can be made to look ridiculous through the misuse of words, but he should be corrected by another character frequently enough to illustrate that the mistakes are obviously his and not yours. Know the meaning of every word you use. If you find yourself resorting to a thesaurus to avoid using an ordinary word, make sure that the word you substitute will be understood by the reader. Proclaim is better than promulgate in most short stories.
Avoid overuse of adjectives. You need not be as stringent as Hemingway, but remember that similes and metaphors can be used as adjective substitutes. Avoid clichés, however.
Do not be afraid to use said. Asked, explained, and shouted are acceptable substitutes in specific situations, but none has the versatility of said. Expostulated, pontificated, or even asserted draws attention away from what is being said. Smiled is not a synonym for said; neither is laughed. Smiling while talking is possible but not probable. "I can't," Jack smiled might better be written, "I can't," Jack said through his teeth.
Be conscious of every word that you use; consider not only its literal meaning but its connotations and its sound. Avoid obtrusive alliteration and awkward phrasing. Eliminate unnecessary words and details. Be concise as well as precise. Be clear. Be dramatic.
Writing a short story without planning it first is like baking a cake without following a recipe. If you forget an ingredient or put it in at the wrong time, the story, like the cake, will collapse.
Because knowing your characters and understanding their motivations are essential, writing a character sketch, as suggested in the previous lesson, for each major character is time well spent. A plot outline is also of value, particularly if the plot is intricate. A plot outline also will reveal weaknesses in the action framework. A plot must involve a problem, its complications, a climax, and a resolution. Someone or something must provide the conflict.
Once you have written your character sketches and plot outline, you can begin the actual writing. When the story is written, it should be read, revised, and reread.
Before you begin writing, visualize the story mentally from beginning to end. Be sure that you have not resorted to a trick ending and that you have left no dilemma unresolved. Know the time and place in which each scene of the story is to occur.
With your character sketches and plot outline beside you for easy reference, sit down with a blank piece of paper. Visualize the setting for the opening scene. The first paragraph should set the scene and introduce the main character. For example:
Brushing the snow from her coat and gloves, Cathy glanced at her reflection in the shiny darkness of the glass door. She caught her breath sharply and went inside. A little bell jingled a welcome and she smiled, looking around at the shelves and stacks and savoring the smell of new books.
Note that you know without being told directly that it is winter, that Cathy is nervous, and that she has entered a bookstore. The next paragraph should introduce the situation: in this instance, a new job. The introduction of a new character, the employer, should clarify the situation through dialogue. Dialogue also is a good means of introducing or intensifying a conflict. Narration can be used to summarize the less significant events of Cathy's day and to provide transitions between the steps of the outline.
Writing the Poem
Like fiction, poetry is a comment on experience. A poet expresses his attitude toward some aspect of life through concrete images. He may reach beyond the expression of emotion to criticize some wrong that he perceives. Poetry is a means of sharing values as well as emotions.
A poet writes about what is important to him. Often a poem is the direct result of some memorable experience. It may be a cry of outrage or an outpouring of joy. It may poke gentle fun at the ridiculous, parodying a sentimental poem with an ode to a road sign. A poem also may be written as a conscious attempt to explore a theme, such as Robinson Jeffers's "Shine, Perishing Republic," or as a response to some inescapable condition, such as John Milton's sonnet, "On His Blindness." Occasional poetry is written to commemorate a historic event (an occasion). An example is Ralph Waldo Emerson's "Concord Hymn," written for the dedication of a battle monument.
Great poetry is memorable not because of the theme that inspired it but because the poet was a master of technique. Shakespeare's sonnets are, for the most part, variations on the same theme; but each is an individual creation and many are masterpieces. Shakespeare knew how to make figures of speech paint his word pictures for him. His best sonnets also make use of sound patterns, such as the repetition of r in "Nor Mars his sword nor war's quick fire shall burn."
The only way to learn the craft of poetry is to practice it, but reading the work of skilled poets is also essential. A student owes himself the pleasure of reading John Keats's "Ode to a Nightingale" and Matthew Arnold's "Dover Beach." He should be familiar with the works of John Donne, Percy Bysshe Shelley, Robert Browning, Walt Whitman, Emily Dickinson, T.S. Eliot, William Butler Yeats, Vachel Lindsay, and Robert Frost, to name only a few of the important English and American poets.
As you search for good models of technique, keep one caution in mind: imitation is not creation. One aspiring poetess did not heed this warning; she used the ideas and much of the syntax of a well-established poet. As a result, she was doing little more than paraphrasing and was certainly not crafting her own poetry.
Figures of speech and recurring sound patterns distinguish poetry from prose. Many poetic devices are found in prose, but two belong specifically to poetry. The best writers of free verse are also masters of rhyme and meter, for these two elements of poetry discipline a poet and train his ear.
Rhymed verse follows a pattern called a rhyme scheme, an element of poetry that you have already studied. One of the simplest of all verse forms, the clerihew has a rhyme scheme of aabb, with no particular meter. The only requirement is that the first line must contain only a famous name. The remainder of the "poem" is a comic verse relating to the life or career of the person in question.
Adding meter to rhyme makes verse writing considerably more difficult. The combination of meter and rhyme scheme must be suited to the poem's content, theme, and tone. If you fail to produce the desired effect with a particular poem, you may find that changing either the meter or the rhyme scheme improves the poem. Most poets are most comfortable writing iambic pentameter, but you may find a dactylic line more fun to work with in writing comic verse.
Style and Form
Although form and diction vary with the type of poem, each poet has favorite rhythms and devices that distinguish his style from that of other poets. Your poetry should be as individual as you are, but it should not be so different from anyone else's that another reader finds it incomprehensible. Until you can write good poetry in Standard English using standard punctuation, do not attempt to write in dialect or to use experimental punctuation. Emily Dickinson and E.E. Cummings are excellent poets, but their styles are their own and should not be imitated. (Emily Dickinson is known for using dashes, and E.E. Cummings eliminated capitals, even from his own name.)
The form a poet chooses depends upon his reason for writing. If he wants to tell a story, he will probably use a ballad stanza. If he has developed an extended metaphor with a surprise twist at the end, his best choice would be a Shakespearean sonnet. Other forms serve other purposes. If no form suits the poem, or if the poet chooses to disregard them all, he can develop his own from various combinations of meter and rhyme scheme. Many modern poets prefer to write free verse, with no restrictions on form at all.
Metered verse. A well-known form of metered verse is the sonnet. All sonnets, as you should recall, have fourteen lines of iambic pentameter.
The Petrarchan, or Italian, sonnet differs from the Shakespearean sonnet in choice of rhyme scheme. Elizabeth Barrett Browning's Sonnets from the Portuguese used the Italian form; many of them introduced a problem in the octave and resolved it in the sestet. Like other English-speaking poets, however, Browning followed the rhyme scheme more faithfully than she followed the separation of thought between the octave and the sestet, and some of her sonnets resemble Shakespeare's in spirit if not in form.
Another fixed form is the heroic couplet, two lines of rhymed iambic pentameter. This form is often used for epigrams.
The limerick, five anapestic lines rhymed aabba, is a popular humorous verse form. The first, second, and fifth lines are trimeter; the third and fourth lines, which introduce a new thought completed in the fifth line, are dimeter. Iambs are common as substitutes for anapests, especially in the first line, which often begins with there was. The last line is frequently ironic.
A more challenging fixed form is the villanelle, which uses only two rhymes in its nineteen lines. The first five stanzas are tercets, or triplets, rhymed aba. The final stanza is a quatrain that reinforces the a rhyme: abaa. Possibly the best villanelle in English is Dylan Thomas's "Do Not Go Gentle into That Good Night."
Although free verse requires neither meter nor rhyme scheme, it is not formless. Skilled poets realize the close relationship between sound and imagery, and they reinforce their images through internal rhyme (or internal slant rhyme), alliteration, onomatopoeia, and other sound effects, as well as through repeated phrases and rhythm patterns. Figures of speech are part of many free verse poems, and symbols are as likely to be found in free verse as in any other type of poetry.
Assemblers are the simplest of a class of systems programs called translators. A translator is simply a program which translates from one (computer) language to another (computer) language. In the case of an assembler, the translation is from assembly language to object language (which is input to a loader). Notice that an assembler, like all translators, adds nothing to the program which it translates, but merely changes the program from one form to another. The use of a translator allows a program to be written in a form which is convenient for the programmer, and then the program is translated into a form convenient for the computer.
Assembly language is almost the same as machine language. The major difference is that assembly language allows the declaration and use of symbols to stand for the numeric values to be used for opcodes, fields, and addresses. An assembler inputs a program written in assembly language, and translates all of the symbols in the input into numeric values, creating an output object module, suitable for loading. The object module is output to a storage device which allows the assembled program to be read back into the computer by the loader.
This is an external view of an assembler. Now we turn our attention to an internal view, in order to see how the assembler is structured internally to translate assembly programs into object programs.
Appropriate data structures can make a program much easier to understand, and the data structures for an assembler are crucial to its programming. An assembler must translate two different kinds of symbols: assembler-defined symbols and programmer-defined symbols. The assembler-defined symbols are mnemonics for the machine instructions and pseudo-instructions. Programmer-defined symbols are the symbols which the programmer defines in the label field of statements in his program. These two kinds of symbols are translated by two different tables: the opcode table, and the symbol table.
The opcode table contains one entry for each assembly language mnemonic. Each entry needs to contain several fields. One field is the character code of the symbolic opcode. In addition, for machine instructions, each entry would contain the numeric opcode and default field specification. These are the minimal pieces of information needed. With an opcode table like this, it is possible to search for a symbolic opcode and find the correct numeric opcode and field specification.
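To make the table concrete, here is a minimal sketch of such an entry in C rather than MIX. The struct layout and names are illustrative, not from the original text; the sample rows use standard MIX values (ADD is opcode 1, SUB is 2, and LDA is 8, each with a default field specification of 5, that is, (0:5)).

    /* A minimal opcode table entry, sketched in C rather than MIX. */
    struct opcode_entry {
        char mnemonic[5];   /* symbolic opcode, up to four characters */
        int  opcode;        /* numeric machine opcode */
        int  field_spec;    /* default field specification */
    };

    /* A few rows of the statically initialized table. */
    static const struct opcode_entry opcode_table[] = {
        { "ADD", 1, 5 },
        { "SUB", 2, 5 },
        { "LDA", 8, 5 },
    };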
Pseudo-instructions require a little more thought. There is no numeric opcode or field specification associated with a pseudo-instruction; rather, there is a function which must be performed for each pseudo-instruction. This can be encoded in several ways. One method is to include in the opcode table a type field. The type field is used to separate pseudo-instructions from machine instructions and to separate the various pseudo-instructions. Different things need to be done for each of the different pseudo-instructions, and so each has its own type. All machine instructions can be handled the same, however, so only one type is needed for them. One possible type assignment, sketched here as a C enumeration with illustrative names, is:
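    /* One plausible type assignment (names and values illustrative). */
    enum opcode_type {
        TYPE_MACHINE = 0,   /* all machine instructions share one type */
        TYPE_ORIG    = 1,   /* one distinct type per pseudo-instruction */
        TYPE_EQU     = 2,
        TYPE_CON     = 3,
        TYPE_ALF     = 4,
        TYPE_END     = 5
    };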
Other type assignments are also possible. For example, in the above assignment, all machine instructions have one type and are treated equally. This allows instructions with quite different operand requirements to be handled by one common section of code.
Another approach to separating the pseudo-instructions in the opcode table is to consider how the type field of the above discussion would be used. It would be used in a jump table. In this case, rather than using the type to specify the code address through a jump table, we could store the address of the code directly in the opcode table. In each opcode table entry, we can store the address of the code to be executed for proper treatment of this instruction, whether it is a machine instruction or a pseudo-instruction. This is a commonly used technique for assemblers.
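In C, the "address of the code" stored in each entry becomes a function pointer. The following sketch is illustrative only; the handler names are invented and their bodies are left empty.

    /* Storing the address of the handling code directly in each
       opcode table entry, sketched with C function pointers. */
    struct statement { char label[11], operand[64]; };  /* details omitted */
    typedef void handler_fn(struct statement *);

    static void handle_machine(struct statement *s) { (void)s; /* assemble */ }
    static void handle_orig(struct statement *s)    { (void)s; /* set counter */ }

    struct opcode_entry {
        char        mnemonic[5];
        int         opcode, field_spec;  /* meaningful for machine instructions */
        handler_fn *handler;             /* code to execute for this opcode */
    };

    static const struct opcode_entry table[] = {
        { "LDA",  8, 5, handle_machine },
        { "ORIG", 0, 0, handle_orig    },
    };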
Other opcode table structures are possible also. Some assemblers have separate tables for machine instructions and for pseudo-instructions. Others will use a type field of one bit to distinguish between machine instructions and pseudo-instructions, and then store a numeric opcode and field specification for the machine instructions or an address of code to execute for pseudo-instructions.
For simplicity, let us assume that we store, for each entry, the symbolic mnemonic, numeric opcode, default field specification and a type, as defined above. For pseudo-instructions, the numeric opcode and default field specification of the entry will be ignored. How should we organize our opcode table entries? The opcode table should be organized to minimize both search time and table space. These two goals may not be achievable at the same time. The fastest access to table entries would require that each field of an entry be in the same relative position of a memory word, such as in Figure 8.1. But notice that, in this case, three bytes of each entry are unused, so that the table includes a large amount of wasted space. (Actually, three bytes, two signs, and the upper three bits of the type byte are unused.) To save this space would require more code and longer execution times for packing and unpacking operations. Thus, to save time it seems wise to accept this wasted space in the opcode table.
To a great extent this wasted space is due to the design of the assembly language mnemonics. If the mnemonics had been defined as a maximum of three characters (instead of four), it would have been possible to store the mnemonic, field specifications, and opcode in one word. The sign bit would be "+" for machine instructions and "-" for pseudo-instructions. Pseudo-instructions could have a type field or address in the lower bytes while machine instructions would store opcode and field specifications. On the other hand, once the decision is made that four-character mnemonics are needed, then it is necessary to go to a multi-word opcode table entry. In this case, we could allow mnemonics of up to seven characters, by using the currently unused three bytes of the opcode table entry. On the other hand, there may be no desire to have opcode mnemonics of more than four characters; mnemonics should be short.
Another consideration is the order of entries in the table. This relates to the expected search time to find a particular entry in the table. The simplest search is a linear search starting at one end of the table and working towards the other end until a match is found (or the end of the table is reached). In this case, one should organize the table so that the more common mnemonics will be compared first. If, as expected, the LDA mnemonic is the most common instruction, then it should be put at the end of the table which is searched first.
Rather than use a linear search, it can be noted that the opcode table is a static structure; it does not change. The opcode table is like a dictionary explaining the numeric meaning of symbolic opcodes. Like a dictionary, it can be usefully organized in an alphabetical order. By ordering the entries, the table can be searched with a binary search. A binary search is the method commonly used to look for an entry in a dictionary, telephone book, or encyclopedia. First the middle of the book is compared with the entry being searched for (the search key). If the search key is smaller than the middle entry, then the key must be in the first half of the book; if it is larger, it must be in the latter half of the book. After this comparison, the same idea can be repeated on the half of the book which still needs to be searched. Each comparison splits the remaining section of the book in half.
In MIX, this might be coded as a short loop; an equivalent C sketch is given below. Location KEY contains the value for which we are searching, LOW contains the low index, and HIGH contains the high index (in the sketch these become ordinary variables). Initially, LOW points to the first entry of the table and HIGH to the last. The search loop would then be:
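    /* A C sketch of the binary search loop just described.  The
       table must be sorted; the mnemonics and opcodes are MIX values. */
    #include <stdio.h>
    #include <string.h>

    struct opcode_entry { char mnemonic[5]; int opcode; };

    static const struct opcode_entry table[] = {   /* alphabetical order */
        { "ADD", 1 }, { "DIV", 4 }, { "LDA", 8 }, { "MUL", 3 }, { "SUB", 2 },
    };

    /* Returns the index of key in table[], or -1 if it is absent. */
    int find_opcode(const char *key)
    {
        int low  = 0;
        int high = (int)(sizeof table / sizeof table[0]) - 1;
        while (low <= high) {
            int mid = (low + high) / 2;          /* compare the middle entry */
            int cmp = strcmp(key, table[mid].mnemonic);
            if (cmp == 0) return mid;            /* found */
            if (cmp < 0)  high = mid - 1;        /* key is in the first half */
            else          low  = mid + 1;        /* key is in the second half */
        }
        return -1;                               /* not in the table */
    }

    int main(void)
    {
        printf("LDA is opcode %d\n", table[find_opcode("LDA")].opcode);
        return 0;
    }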
A binary search is not always the best search method to use. Each time through, the search loop cuts the size of the table yet to be searched in half. In general, a table of size 2^n will take about n comparisons to find an entry. Thus, a table of size 32 will take only 5 comparisons, while for a linear search it is normally assumed that, on the average, half of the table must be searched, resulting in 16 comparisons. Thus, the binary search almost always requires fewer executions of its search loop than a linear search. However, notice that the binary search requires considerably more computation per comparison than a linear search. The binary search loop requires about 35 time units per comparison while a linear search can require only 5 time units. Thus, for a table of size 32, the binary search takes 5 comparisons at 35 time units each, for 175 time units, while the linear search takes 16 comparisons at 5 time units each for 80 time units. This is not to say that a binary search should never be used. For a table of size 128, a binary search will take about 245 (= 7 × 35) time units, while a linear search will take 320 (= 64 × 5). Thus, for a large table a binary search is better. Also, if a shift (SRB) could be used instead of the divide, the time per loop could be cut by 11 time units, making the binary search better. Since there are around 150 MIX opcodes, we use a binary search for the opcode table.
The opcode table is used to translate the assembler-defined symbols into their numeric equivalents; the symbol table is used to translate programmer-defined symbols into their numeric equivalents. Thus, these two tables can be quite alike. A symbol table, like an opcode table, is a table of many entries, one entry for each programmer-defined symbol. Each entry is composed of several fields.
For the symbol table, only two fields are basically needed. It is necessary to store the symbol and the value of the symbol. A symbol in a MIXAL program can be up to 10 characters in length. This requires two MIX words. In addition, the value of the symbol can be up to five bytes plus sign, requiring another MIX word. Thus, each entry in the symbol table takes at least three words. Additional fields may be added for some assemblers. A bit may be needed to indicate if the symbol has been defined or is undefined (i.e., a forward reference). Another bit may specify whether the symbol is an absolute symbol or a relative symbol (depending on whether the output is to be used with a relocatable or absolute loader). Other fields may also be included in a symbol table entry, but for the moment let us use only the two fields, for the symbol and its value.
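In C, such an entry might be sketched as follows; the optional flag fields are the additions just mentioned, and the names are illustrative.

    /* A minimal symbol table entry: the symbol and its value.
       A MIXAL symbol may be up to ten characters long. */
    struct symbol_entry {
        char name[11];   /* up to 10 characters plus a terminator */
        long value;      /* up to five bytes plus sign in MIX */
        int  defined;    /* optional: 0 while only forward-referenced */
        int  relative;   /* optional: relocatable vs. absolute symbol */
    };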
As with the opcode table, the organization of the symbol table is very important for proper use. But the symbol table differs from the opcode table in one important respect: it is dynamic. The opcode table is static; it is never changed, neither during nor between executions of the assembler. It is the same for each and every assembly program for the entire assembly process. The symbol table is dynamic; each program has its own set of symbols with their own values, and new symbols are added to the symbol table as the assembly process proceeds. Initially the symbol table is empty; no symbols have been defined. By the time assembly is completed, however, all symbols in the program have been entered into the symbol table.
This requires the definition of two subroutines to manipulate the symbol table: a search routine and an enter routine. The search routine searches the symbol table for a symbol (its search key) and returns its value (or an index into the table to its value). The enter subroutine puts a new symbol and its value into the table.
These two subroutines need to be designed together, since both of them affect the symbol table. A binary search might be quite efficient, but it requires that the table be kept ordered. This means that the enter subroutine would have to adjust the table for each new entry, so that the table was always sorted into the correct order. Thus, although a binary search might be quick, the combination of a binary search plus an ordered enter might be very expensive.
Also consider that a linear search is more efficient than a binary search for small tables. Many assembly language programs have fewer than 200 different symbols, and so a linear search may be quite reasonable. A linear search allows the enter routine to simply put any new symbol and its value at the end of the table. Thus, both the search and enter routines are simple.
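Under these assumptions, both routines are a few lines each. The following C sketch uses an illustrative fixed-size table; the names are not from the original text.

    #include <string.h>

    #define MAX_SYMBOLS 200                    /* illustrative capacity */

    struct symbol_entry { char name[11]; long value; };

    static struct symbol_entry symtab[MAX_SYMBOLS];
    static int nsymbols = 0;

    /* Linear search: returns the index of name, or -1 if absent. */
    int symtab_search(const char *name)
    {
        for (int i = 0; i < nsymbols; i++)
            if (strcmp(symtab[i].name, name) == 0)
                return i;
        return -1;
    }

    /* Enter: appends a new symbol at the end of the table. */
    int symtab_enter(const char *name, long value)
    {
        if (nsymbols >= MAX_SYMBOLS)
            return -1;                         /* table full */
        strncpy(symtab[nsymbols].name, name, 10);
        symtab[nsymbols].name[10] = '\0';
        symtab[nsymbols].value = value;
        return nsymbols++;
    }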
Many other table management techniques have been considered and are used. Some of these are quite complex and useful only for special cases. Others have wide applicability. One of the most commonly used techniques is hashing. The objective of hashing is quite simple. Rather than have to search for a symbol at all, we would prefer to be able to compute the location in the table of a symbol from the symbol itself. Then, to enter a symbol, we compute where it should go, and define that entry. For accessing, we compute the address of the entry and use the value stored in the table at that location.
As a simple example, assume that all of our symbols were one-letter symbols (A, B, C, …, Z). Then if we simply allocated a symbol table of 26 locations, we could index directly to a table entry for each symbol by using its character code for an index. No searching would be needed. If our symbols were any two-letter symbols, we could apply the same idea if we had a symbol table of 676 (= 26 × 26) entries, where our hash function would be to multiply the character code of the first letter by 26 and add the character code of the second (and subtract 26 to normalize). However, for three-letter symbols, we would need 17,576 spaces in our symbol table. This is clearly impossible. (Our MIX memory is only 4000 words long.) It also is not necessary, since we assume that at most only a few hundred symbols will be used in each different assembly program. What is needed is a function which produces an address for each different symbol, but maps them all into a table of several hundred words.
Many different hashing functions can be used. For example, we can add together the character codes for the different characters of the symbol, or multiply them, or shift some of them a few bits and exclusive-OR them with the other characters, or whatever we wish. Then, after we have hashed up the input characters to get some weird number, we can divide by the length of the table and use the remainder. The remainder is guaranteed to be between 0 and the length of the table and hence can be used as an index into the table. For a binary machine and a table whose length is a power of two, the division can be done by simply masking the low-order bits.
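As one concrete possibility, the "add the character codes and take the remainder" function described above might be sketched in C as follows (the table size is illustrative):

    /* A simple hash function: sum the character codes, then reduce
       modulo the table length.  With a power-of-two table size the
       division could instead be a mask of the low-order bits. */
    #define TABLE_SIZE 256

    unsigned hash(const char *symbol)
    {
        unsigned h = 0;
        for (; *symbol != '\0'; symbol++)
            h += (unsigned char)*symbol;   /* add the character codes */
        return h % TABLE_SIZE;             /* remainder as table index */
    }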
The objective of all this calculation is to arrive at an address for a symbol which hopefully is unique for each symbol. But since there are millions of 10-character symbols, and only a few hundred table entries, it must be the case that two different symbols may hash into the same table entry. For example, if we use a hash function which adds together the character codes of the letters in the symbol, then both EVIL and LIVE will hash to the same location. This is called a collision. The simplest solution to this problem is to then search through successive entries in the table, until an empty table entry is found. (If the end of the table is found, start over at the beginning).
The search and enter routines are now straightforward. To enter a new symbol, compute its hash function, and find an empty entry in the table. Enter the new symbol in this entry. To search for a symbol, compute its hash function and search the table starting at that entry. If an empty entry is found, the symbol is not in the table. Otherwise, it will be found before the first empty location.
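A C sketch of these two routines, using the additive hash above and marking an empty entry by an empty name; all names and sizes are illustrative.

    #include <string.h>

    #define TABLE_SIZE 256

    struct symbol_entry { char name[11]; long value; };
    static struct symbol_entry table[TABLE_SIZE];    /* zeroed: all empty */

    static unsigned hash(const char *s)
    {
        unsigned h = 0;
        while (*s) h += (unsigned char)*s++;
        return h % TABLE_SIZE;
    }

    /* Search: probe from the hash position until the symbol or an
       empty entry is found; returns the index, or -1 if absent. */
    int hash_search(const char *name)
    {
        unsigned i = hash(name);
        for (unsigned n = 0; n < TABLE_SIZE; n++, i = (i + 1) % TABLE_SIZE) {
            if (table[i].name[0] == '\0') return -1;   /* empty: not present */
            if (strcmp(table[i].name, name) == 0) return (int)i;
        }
        return -1;                                     /* table completely full */
    }

    /* Enter: place the symbol in the first empty entry found. */
    int hash_enter(const char *name, long value)
    {
        unsigned i = hash(name);
        for (unsigned n = 0; n < TABLE_SIZE; n++, i = (i + 1) % TABLE_SIZE) {
            if (table[i].name[0] == '\0') {
                strncpy(table[i].name, name, 10);
                table[i].value = value;
                return (int)i;
            }
        }
        return -1;                                     /* table full */
    }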
The problem with using hashing is defining a good hash function. A good hash function will result in very few collisions. This means that both the search and enter routines will be very fast. In the worst case, all symbols will hash to the same location and a linear search will result. Hashing is sometimes also used for opcode tables, where, since the opcode table is static and known, a hashing function can be constructed which guarantees no collisions.
More about hashing can be found in the paper by Morris (1968), or in Knuth (1973), Volume 3.
The opcode table and the symbol table are the major data structures for an assembler, but not the only ones. The other data structures differ from assembler to assembler, depending upon the design of the assembly language and the assembler.
A buffer is probably needed to hold each input assembly language statement, and another buffer is needed to hold the line image corresponding to that statement for the listing of the program. In addition, buffers may be needed to create the object module output for the loader, and to perform double buffering in order to maximize CPU-I/O overlap. Various variables are needed to count the number of cards read, the number of symbols in the symbol table, and so on.
One variable in particular is important. This is the location counter. The location counter is a variable which stores the address of the location into which the current instruction is to be loaded. The value of the location counter is used whenever the symbol "*" is used, and this value can be set by the ORIG pseudo-instruction. Normally the value of the location counter is increased by one after each instruction is assembled, in order to load the next instruction into the next location in memory. The value of the location counter for each instruction is used to instruct the loader where to load the instruction.
With a familiarity with the basic data structures of an assembler (the opcode table, the symbol table, and the location counter), we can now describe the general flow of an assembler. Each input assembly statement, each card, is handled separately, so our most general flow would be simply, "Process each card until an END pseudo-instruction is found."
More specifically, consider the kind of processing that is needed for each card: (1) read the card and copy it into the listing buffer; (2) break the statement into its label, opcode, and operand fields, and search the opcode table for the opcode; (3) process the card according to its type; (4) output the listing line and pass any generated code to the loader.
This much of the assembly process is common to all of the input cards. The important processing is in step 3, where each card is processed according to its type. This level of the assembler provides an organizational framework for further development.
Continuing then, what processing needs to be done for each type of opcode? Consider a machine language instruction. For a machine language instruction, we need to determine the contents of each of the four fields of a machine language instruction. In addition, we must define any label which is in the label field. Let us define the label first. Defining a label is simply a matter of entering the label (if any) in the label field of the assembly language statement into the symbol table with a value which is the value of the current location counter.
Now we need to determine the fields of the machine language instruction. The opcode field (byte 5:5) and the field specification (byte 4:4) are available in the opcode table entry which has already been found. To find the other fields (address and index), let us assume that we have a subroutine which will evaluate an expression. This subroutine will be written later. Then, to define the address field, we call our expression evaluator. The value it returns is the contents of the address field (bytes 0:2). When it returns, we check the next character. If the next character is a comma ",", then an index part follows and we call the expression evaluator again to evaluate the index part. If the next character is not a comma, then the index part is zero (default). When the index has been processed, we check the next character. If it is a left parenthesis, then the expression which follows is a field specification, and we call the expression evaluator again. Otherwise, we use the default specification from the opcode table.
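The sequence of checks just described translates almost directly into code. In the following C sketch, eval_expression() stands for the evaluator written later; a stub that reads a single decimal number keeps the example self-contained, and all names are illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    /* Stub evaluator for this sketch: reads one decimal number.
       The real routine also handles symbols, "*", and operators. */
    static long eval_expression(const char **p)
    {
        char *end;
        long v = strtol(*p, &end, 10);
        *p = end;
        return v;
    }

    /* Assemble an operand: address, optional ",index", optional "(field)". */
    static void assemble_operand(const char **p, long default_field,
                                 long *address, long *index, long *field)
    {
        *address = eval_expression(p);        /* address part */

        if (**p == ',') {                     /* ",index" present? */
            (*p)++;
            *index = eval_expression(p);
        } else {
            *index = 0;                       /* default index */
        }

        if (**p == '(') {                     /* "(field)" present? */
            (*p)++;
            *field = eval_expression(p);
            if (**p == ')') (*p)++;
        } else {
            *field = default_field;           /* from the opcode table */
        }
    }

    int main(void)
    {
        const char *operand = "2000,4(3)";
        long a, i, f;
        assemble_operand(&operand, 5, &a, &i, &f);
        printf("A=%ld I=%ld F=%ld\n", a, i, f);   /* A=2000 I=4 F=3 */
        return 0;
    }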
Basically, we have defined an assembly language statement to be of the form: an optional label, an opcode, and an operand consisting of an address expression, optionally followed by a comma and an index expression, and then optionally by a field specification in parentheses.
The processing for each pseudo-instruction is generally easy.
For an ORIG instruction, no code is generated. Rather, only two things need be done. First, if a label is present, it should be defined. Its value is the value of the location counter. Second, the expression in the operand field is evaluated and the value of the expression is stored as the new value of the location counter. We can use the same expression evaluator that is used for evaluation of the machine language operands.
The EQU pseudo-instruction is similarly straightforward. For an EQU instruction, we first evaluate the operand expression (using the expression evaluator as before), and then we enter the symbol in the label field into the symbol table with a value of the expression.
The ALF pseudo-instruction is even easier. In this case we first define the label of the instruction. Then we pick up the character code of the five characters of the ALF operand and issue them as a generated word, incrementing the location counter after we do so.
The most complicated pseudo-instruction is probably the CON instruction. The general form of this instruction is

LABEL CON E1(F1),E2(F2),...

where each expression Ei is evaluated and stored into the field Fi of the word being generated; if no field specification is given, the whole word is used.
Once the operand of the CON instruction has been generated, it can be output for loading and the location counter can be incremented.
The last pseudo-instruction the assembler will encounter is the END pseudo-instruction. For this pseudo-instruction, we first define the label. Then we evaluate the operand expression and output it to the loader as the starting address.
As you can see, each pseudo-instruction requires its own section of code for correct processing, and the pseudo-instruction processing for each is different. However, the processing for each individual pseudo-instruction, considered separately, is not overly complicated. The basic functions used in processing all assembly language instructions involve defining labels, evaluating expressions, and generating code. The first of these was discussed in Section 8.1, so now let us consider the evaluation of expressions.
An expression in MIXAL is composed of two basic elements: operands and operators. The operators are addition (+), subtraction (-), multiplication (*), division (/), and "multiplication by eight and addition" (:). The operands are of three types: numbers, symbols, and "*". Numbers have a value which is defined by the number itself, interpreted as a decimal number. Symbols are defined by the value field of their symbol table entry. The value of * is the value of the location counter.
The operators are applied to the operands strictly left to right, without precedence. This allows a very simple expression evaluation routine; its logic is sketched below.
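The routine itself is written in MIX assembly language; as an illustration of its logic only, a Python version might read as follows. The token list format (alternating operands and operators) is an assumption of this sketch.

```python
# Sketch of strict left-to-right MIXAL expression evaluation.
OPS = {
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
    '*': lambda a, b: a * b,
    '/': lambda a, b: a // b,
    ':': lambda a, b: 8 * a + b,   # the field operator: 8a + b
}

def evaluate(tokens):
    value = tokens[0]
    i = 1
    while i < len(tokens):         # no precedence: apply each operator in turn
        op, operand = tokens[i], tokens[i + 1]
        value = OPS[op](value, operand)
        i += 2
    return value

# 5 + 3 * 2 is (5 + 3) * 2 = 16 under left-to-right rules, not 11.
assert evaluate([5, '+', 3, '*', 2]) == 16
```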
And that is all that there is to it. Expressions for MIXAL have been defined in such a way that their evaluation is quite simple. Only one problem has been ignored. That problem is forward references.
The forward referencing problem arises in a very simple way: the expression evaluator attempts to evaluate an operand which is a symbol by searching the symbol table, and finds that the symbol is not in the symbol table. The symbol is not defined, the expression cannot be evaluated, and the assembly language statement cannot be assembled. What can be done?
One solution is to disallow forward references. MIXAL uses this approach in several places: no pseudo-instruction can make a forward reference, and forward references are not allowed in the index or field specification fields of an instruction. However, disallowing forward references entirely would be extremely inconvenient, so two other solutions are commonly used to allow at least some forward references. These two solutions result in two classes of assemblers: one-pass assemblers and two-pass assemblers. Since two-pass assemblers are conceptually simpler, we consider them first.
A two-pass assembler makes two passes over the input program. That is, it reads the program twice. On the first pass, the assembler constructs the symbol table. On the second pass, the complete symbol table is used to allow expressions to be evaluated without problems due to forward references. If any symbol is not in the symbol table during an expression evaluation, it is an error, not a forward reference.
Briefly, for a two-pass assembler, the first pass constructs the symbol table and the second pass generates object code. This causes a major change in the basic flow of an assembler, and results in another important data structure: the intermediate text. Since two passes are made over the input assembly language, it is necessary to save a copy of the program read in pass 1 to be used in pass 2. Notice that since the assembly listing includes the generated machine language code, and that code is not produced until pass 2, the listing cannot be produced until pass 2. This requires storing the entire input program, including comments and other material that is not needed for code generation.
The intermediate text can be stored in several ways. The best storage location would be in main memory. However, since MIX memory is so small, this technique is not possible on the MIX computer. On other machines, with more memory, this technique is sometimes used for very small programs. Another approach, used for very small machines, is to simply require the programmer to read his program into the computer twice, once for pass 1, and again for pass 2. A PDP-8 assembler has used this approach, even going so far as to require the program be read in a third time if a listing is desired.
A more common solution is to store the intermediate text on a secondary storage device, such as tape, drum, or disk. During pass 1, the original program is read in, and copied to secondary storage as the symbol table is constructed. Between passes, the device is repositioned (rewound for tapes, or the head moved back to the first track for moving head disk or drum). During pass 2, the program is read back from secondary storage for object code generation and printing a listing. (Notice that precautions should be taken that the same tape is not used for both storage of the intermediate text input for pass 2 and the storage of the object code produced as output from pass 2 for the loader.)
The general flow of a two-pass assembler differs from that given above, in that now each card must be processed twice, with different processing on each pass. It is also still necessary to process each type of card differently, depending upon its opcode type. This applies to both pass 1 and pass 2.
For machine instructions, pass 1 processing involves simply defining the label (if any) and incrementing the location counter by one. ALF and CON pseudo-instructions are handled in the same way. ORIG statements must be handled exactly as we described above, however, in order for the location counter to have the correct value for defining labels. Similarly, EQU statements must also be treated on pass 1. This means that no matter what approach is used, no forward references can be allowed in ORIG or EQU statements, since both are processed during pass 1. The END statement needs to have its label defined and then should jump to pass 2 for further processing.
During pass 2, EQU statements can be treated as comments. The label field can likewise be ignored. ALF, CON, and machine instructions will be processed as described above, as will the END statement.
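The division of labor between the passes can be summarized in a sketch. The Python fragment below is schematic only; the statement format and the stub evaluator are assumptions, not the assembler's actual code.

```python
# Schematic pass 1 of a two-pass assembler; statements are (label, opcode,
# operand) triples and evaluate() is a stand-in for the real evaluator.
def evaluate(operand, symtab):
    return symtab[operand] if operand in symtab else int(operand)

def pass1(statements, symtab):
    loc = 0
    intermediate = []                      # saved for pass 2
    for label, opcode, operand in statements:
        if label and opcode != 'EQU':
            symtab[label] = loc            # label value = location counter
        intermediate.append((loc, label, opcode, operand))
        if opcode == 'ORIG':
            loc = evaluate(operand, symtab)   # no forward references allowed
        elif opcode == 'EQU':
            symtab[label] = evaluate(operand, symtab)
        elif opcode != 'END':
            loc += 1                       # machine op, ALF, or CON
    return intermediate

symtab = {}
prog = [('START', 'ORIG', '3000'), ('A', 'CON', '5'), ('', 'END', 'START')]
pass1(prog, symtab)   # symtab is now {'START': 0, 'A': 3000}
```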
The need to make two passes over the program can result in considerable duplication in code and computation during pass 1 and pass 2. For example, on both passes, we need to find the type of the opcode; on pass 1, to be able to treat ORIG, EQU, and END statements; on pass 2, for all types. This can result in having to search the opcode table twice for each assembly language statement. To prevent this, we need only save the index into the opcode table of the opcode (once it is found during pass 1) with each instruction. Also, consider that since the operand "*" may be used in the expressions evaluated during pass 2, it is necessary to duplicate during pass 2 the efforts of pass 1 to define the location counter, unless we can simply store with each assembly language statement the value of the location counter for that statement.
These considerations can result in extending the definition of the intermediate text to be more than simply a copy of the input to pass 1. Each input assembly language statement can be a record containing (at least) the following fields: the original card image, the index of its opcode in the opcode table, and the value of the location counter for that statement.
Even more can be computed during pass 1. For example, ALF and CON statements can be completely processed during pass 1. The opcode field, field specification, and index field for a machine instruction can be easily computed during pass 1, and most of the time the address field, too, can be processed on pass 1. It is only the occasional forward reference which causes a second pass to be needed. Most two-pass assemblers store a partially digested form of the input source program as their intermediate text for pass 2.
Assemblers, like loaders, are almost always I/O-bound programs, since the time to read a card generally far exceeds the time to assemble it. Thus, because the input must be read twice, a two-pass assembler takes roughly twice as long to execute as a one-pass assembler.
It is these considerations which have given rise to one-pass assemblers. A one-pass assembler does everything in one pass through the program, much as described earlier. The only problem with a one-pass assembler is caused by forward references, of course. The solutions to this problem are the same as were presented earlier for one-pass relocatable linking loaders: Use-tables or Chaining.
In either case, Use-table or chaining, it is not possible to completely generate the machine language statement which corresponds to an assembly language statement; the address field cannot always be defined. Thus, some later program must fix-up those instructions which contain forward references. In a two-pass assembler, this program is pass 2. In a one-pass assembler, there is no second pass, and so the program which must fix-up the forward references is the loader. The loader resolves forward references for a one-pass assembler.
If Use-tables are used to solve the forward reference problem, then the assembler keeps track of all forward references to each symbol. After the value of the symbol is defined, the assembler generates special loader instructions to tell the loader the addresses of all forward references to the symbol and its correct value. When the loader encounters these special instructions during loading, it fixes up the address field of each forward-referencing instruction to have the correct value.
A variation of this same idea is to use chaining. With chaining, the entries in the Use-table are kept in the address fields of the instructions which forward reference symbols. Only the address of the most recent use must be kept. When a new forward reference is encountered, the address of the previous reference is used, and the address of the most recent reference is kept. When a symbol is defined which has been forward referenced (or at the end of the program), special instructions are again issued to the loader to fix-up the chains which have been produced.
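The chain manipulation can be illustrated with a short sketch. In the Python fragment below the assembler patches memory directly for simplicity; in the real one-pass assembler the patching is done later by the loader, and all the names here are invented.

```python
# Sketch of forward-reference chaining: the address field of each referencing
# word holds the address of the previous reference (0 marks the chain end).
memory = {}                 # assembled words, keyed by address (sketch only)
last_ref = {}               # symbol -> address of most recent forward reference

def forward_reference(symbol, here):
    memory[here] = last_ref.get(symbol, 0)   # previous link goes in this word
    last_ref[symbol] = here                  # this word is now head of chain

def define_symbol(symbol, value):
    link = last_ref.pop(symbol, 0)
    while link:                              # walk the chain, patching each use
        next_link = memory[link]
        memory[link] = value                 # fix-up: real value replaces link
        link = next_link

forward_reference('LOOP', 100)   # two uses of LOOP before it is defined
forward_reference('LOOP', 105)
define_symbol('LOOP', 250)       # both address fields now hold 250
assert memory[100] == memory[105] == 250
```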
Variations on this basic theme are also possible. For example, if the standard loader will not fix-up forward references, the assembler could generate special instructions which are executed after loading, but before the assembled program itself runs, to fix-up the forward references. But the basic idea remains the same: a one-pass assembler generates its object code in such a way that the loader, or the loaded program itself, will fix-up forward references.
Throughout the discussion of the assembly process so far, we have ignored many of the (relatively) minor points concerning the writing of an assembler. One of the most visible of these is the assembly listing. The assembly listing gives, for each input assembly line, the corresponding generated machine language code. Notice, however, that not all assembly language statements result in the generation of machine language code, and not all generated code is meant to be interpreted in the same way (some words are instructions, others numbers or character codes).
For a machine language statement, useful information which can be listed includes the card image, card number, location counter, and generated code, broken down by field. For an ORIG, EQU, or END statement, on the other hand, no code is generated, but the value of the operand expression would be of interest. A CON or ALF statement would need to list the generated word, but as a number, not broken down by fields, as an instruction. Forward references in a one-pass assembler might be specially marked, as might a relocatable address field in a relocatable assembly program.
In addition to the listing of the input assembly, some assemblers also print a copy of their symbol table after it is completed. This symbol table listing would include the defined symbols, their values and perhaps the input card number where they were defined. Some assemblers will also produce a cross reference listing, which is a listing, for each symbol, of the card numbers or memory locations in the program of all references to that symbol. Most symbol table listings and cross reference listings are ordered alphabetically by symbol to aid in finding a particular symbol.
Another concern is the problem of errors. Many programs have assembler errors in them. These are errors not in what the program does, but in its form. They can be caused by mistakes in writing the program, or in the transcription of the program into machine-readable form, as in keypunching. The assembler must check for errors at all times. Typical errors include multiply-defined symbols (using the same symbol in the label field twice), undefined symbols (using a symbol in the operand field of a statement but never defining it in the label field), undefined opcodes (the opcode cannot be found in the opcode table), illegal use of a forward reference, and an illegal value for a field (an index or field specification value not between 0 and 63, or an address expression not between -4095 and +4095), and so on.
For each possible error, two things must be decided: (1) how to notify the programmer of the error, and (2) what to do next. The notification is often simplest. An error symbol or error number is associated with each type of error. The degree to which the exact error is specified varies. One approach is to simply declare "error in assembly" or "error in xxxx part", where xxxx may be replaced by "label", "opcode", or "operand". However, these approaches may not give the programmer enough information to find and correct the error. The opposite approach is also sometimes taken, over-specifying the error to the extent that the user's manual lists thousands of errors, most of which are equivalent from the point of view of the programmer, and each of which is highly unlikely to occur.
More typically, the major errors are classified and identified, while more obscure and unlikely errors are grouped into a category such as "syntax error in operand part." The listing line format may include a field in which errors are flagged, or a statement with an error may be followed by an error message.
In any case, the writer of the assembler must check for all possible errors in the input assembly program. If one is found, a decision must be made as to what to do next. This often may depend upon the type of error which is found, and the error handling code for each error is generally different. For a multiply-defined label, the second definition is often ignored, or the latest definition may always be used. An undefined opcode may be assumed to be a comment card, or treated like a HLT or NOP instruction. Illegal values for any of the fields of an instruction can result in default values being used.
Whatever the specific approach taken to a specific error, a general approach must also be taken. From the way in which an assembly language program is written, each input statement is basically a separate item to be translated. Thus, an error in one statement still allows the assembler to continue its assembly of the remaining program by simply ignoring the statement which is in error. This is in contrast to some systems which cease all operations whenever the first error is found. An assembler can always continue with the next card, after an error is found in the current card.
A more subtle point is whether or not an assembler should attempt to continue assembling the card in which the error occurs. If the opcode is undefined, the assembler may misinterpret the label field or operand field, causing it to appear that there are more (or fewer) errors than exist, once the incorrect opcode is corrected. (For example, a typing error on an ALF card may result in an undefined opcode. If the operand field of the incorrect ALF is treated as if the opcode were a machine instruction, then the character sequence for the ALF may result in an apparent reference to an undefined symbol.) On the other hand, attempting to continue assembling a card after an error may identify additional errors which would save the programmer several extra assemblies to find.
The discussion of loaders and linkers in Chapter 7 mentioned several differences between an assembly language which is used with a relocatable loader and an assembly language for an absolute loader. These differences show up in the assembly language in terms of the pseudo-instructions available and also in restrictions on the types of expressions (absolute or relocatable) which can be used.
It should be relatively obvious that the changes in the assembler for absolute and relocatable programs are not major changes, but consist mainly in the writing of some new code to handle the new pseudo-instructions and some changes in code generation format to match the input expected by a relocatable loader. Extra tables may be required for listing entry points and external symbols, and a type field will need to be added to the symbol table to distinguish between absolute symbols, relocatable symbols, entry points, and external symbols.
Another type of assembler, in addition to relocatable versus absolute and one-pass versus two-pass, is the load-and-go assembler. These assemblers are typically used on large computers for small programs (like programs written by students just learning the assembly language). The idea of a load-and-go assembler is to both assemble and load a program at the same time. The major problem with this, on most machines, including the MIX computer, is the size of memory and the size of the assembler. There is simply not enough room for both a program and the assembler in memory at the same time.
However, for a simple assembly language, giving rise to a simple and small assembler, and with a large memory, it is possible to write an assembler which, rather than generating loader code, loads the program directly into memory as it is assembled. The assembler acts as both an assembler and a loader at the same time. These load-and-go systems often are one-pass, absolute assemblers, but can just as well be two-pass and/or relocatable assemblers.
Where they are possible, load-and-go systems are generally significantly faster than non-load-and-go systems, since they save on I/O.
To demonstrate the techniques which have been discussed in this Chapter, we present here the actual code for a one-pass absolute assembler for MIXAL. The assembler accepts a MIXAL program from the card reader. It produces a listing on the line printer, and object code similar to that accepted by the absolute loader of Section 7.1 (with modifications for fix-ups).
The assembler is presented from the bottom up. That is, the simpler routines, which do not call other routines, are written first. Then we write routines which may call these routines, and so forth until the main program is written. This order of writing the assembler is possible mainly because the discussion in Section 8.2 has shown which routines we need for the assembler and how they fit together.
Each routine is written to be an independent function, as much as possible. Registers should be saved and restored by each routine as needed, with the exception of registers I5 and I6. Register I6 is used as the location counter of the assembly program. Register I5 is a column indicator for the lexical scan portion of the assembler.
Since the loader output is being written to tape, but is produced one word at a time by the assembler, several routines are needed to handle the tape output. The first two routines TAPEOUT and FINISHBUF are standard blocking routines such as discussed in Chapter 5. TAPEOUT accepts as a parameter one word, in the A register. These words are stored into a buffer of 100 words (one physical record) until the buffer is full. When the buffer is full, TAPEOUT calls FINISHBUF, which simply outputs the current buffer and resets the buffer pointers to allow double buffering. FINISHBUF can also be called to empty the last, partially filled buffer before the assembler halts.
Variables for these routines would include the two buffers (BUF1 and BUF2), pointers to these two buffers for double buffering, and a counter of the number of words in the buffer.
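As an illustration only, the blocking and buffer switching might be sketched in Python as follows; the class, the output callable, and all the names are assumptions of this sketch, not the TAPEOUT code itself.

```python
# Sketch of TAPEOUT/FINISHBUF-style blocking with two buffers.
RECORD = 100                       # words per physical record

class BlockedWriter:
    def __init__(self, device):
        self.buffers = [[], []]    # BUF1 and BUF2
        self.current = 0           # which buffer is being filled
        self.device = device       # callable that writes one record

    def tapeout(self, word):
        buf = self.buffers[self.current]
        buf.append(word)
        if len(buf) == RECORD:     # buffer full: write it out
            self.finishbuf()

    def finishbuf(self):
        buf = self.buffers[self.current]
        if buf:
            self.device(list(buf))         # start output on this buffer
            buf.clear()
        self.current ^= 1                  # switch buffers (double buffering)

records = []
writer = BlockedWriter(records.append)
for i in range(250):
    writer.tapeout(i)
writer.finishbuf()                 # flush the last, partially filled buffer
assert [len(r) for r in records] == [100, 100, 50]
```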
The TAPEOUT routine is called mainly by the routines which create the loader output. These routines are mainly concerned with formatting the output from the assembler. The loader output is a series of logical records, as described in Chapter 7. For a one-pass absolute loader, we need three types of loader records: (a) words to be stored in memory, (b) chain addresses for forward reference fix-up, and (c) the start address of the assembled program. Each block is identified by a header word of the format shown in Figure 8.5.
Byte 3 (T) is a type byte which indicates the type of information in N (bytes 4:5) and LA (bytes 0:2). It can have only a value of 0, 1, or 2, as follows: a type of 0 means that the next N words are to be loaded into memory starting at address LA; a type of 1 means that the record is a fix-up, giving the value for a chain of forward references starting at address LA; a type of 2 means that LA is the start address of the assembled program.
Most of the output to the tape will be of type 0. These words are produced by the assembler one at a time and need to be blocked into N-word groups with a header in front and a checksum following. Notice that the header cannot be output until the value of N is known, so words are stored in a buffer (LDRBLOCK) until a discontinuity in load addresses occurs, or the buffer is full. Then the header and checksum are attached and the entire block is written to tape by use of the routine TAPEOUT.
Two routines are used; one (GENERATE) puts the words into the loader record block (LDRBLOCK), while the other (FINISHBLCK) will empty the buffer, output it to tape, and compute the checksum.
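A sketch of this buffering logic, in Python and with invented names, might look like the following; the header layout and checksum shown are only stand-ins for the format of Figure 8.5.

```python
# Sketch of GENERATE/FINISHBLCK: buffer consecutive words, then emit a block
# consisting of a header, the words, and a checksum (formats are assumptions).
BLOCKSIZE = 20
block, block_addr = [], None        # LDRBLOCK and its load address

def generate(word, addr, out):
    global block, block_addr
    # a discontinuity in load addresses, or a full buffer, ends the block
    if block and (addr != block_addr + len(block) or len(block) == BLOCKSIZE):
        finishblck(out)
    if not block:
        block_addr = addr
    block.append(word)

def finishblck(out):
    global block, block_addr
    if block:
        out.append((0, len(block), block_addr))   # type 0: load N words at LA
        out.extend(block)
        out.append(sum(block) % 2**31)            # checksum follows the words
        block, block_addr = [], None

out = []
generate(7, 3000, out)
generate(8, 3001, out)
generate(9, 3500, out)   # discontinuity: the first block is emitted here
finishblck(out)
```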
Subroutine READCARD will read one card and unpack it into a one-character-per-word card image form. Double buffering is used.
GETSYM is the main lexical scan routine. Using index register I5 as a pointer to the card image in CARD, GETSYM packs the next symbol into SYM. SYM is two words, to allow up to ten characters per symbol. A symbol is delimited by any non-alphanumeric character; if the current character is non-alphanumeric, SYM will be blank. The variable LETTER indicates whether the symbol is strictly numeric (LETTER zero) or contains an alphabetic character (LETTER non-zero). Numeric symbols are right-justified (to allow conversion with NUM), while symbols with letters are left-justified.
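The scan can be sketched as follows in Python; the packing into two MIX words is omitted, and the names are illustrative only.

```python
# Sketch of a GETSYM-style scan: collect the next run of letters and digits,
# and note whether it was purely numeric (LETTER = 0 in the original).
def getsym(card, col):
    sym, has_letter = '', False
    while col < len(card) and card[col].isalnum():
        if card[col].isalpha():
            has_letter = True
        sym += card[col]
        col += 1
    return sym, has_letter, col       # symbol, LETTER flag, updated column

sym, letter, col = getsym('LOOP  STA 2000', 0)
assert (sym, letter, col) == ('LOOP', True, 4)
```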
GETSYM is used by GETFIELDS (among others). GETFIELDS gets the label and opcode fields of an assembly language statement for later processing. This MIXAL assembler accepts only fixed-format input, and GETFIELDS reflects this. Notice that a change to a free-format assembler would require changing only this one subroutine.
Both the opcode table and symbol table need search routines, and the symbol table needs an enter routine.
Each entry of the opcode table is two words. The symbolic opcode is in bytes 1:4 of the first word, and byte 5 is a type field. For machine instructions, the second word holds the default field specification and the numeric opcode in bytes 4 and 5; for pseudo-instructions, the second word is ignored.
Each entry of the symbol table takes four words. The name of the symbol is stored in the first two words, and the value in the third word. If there were any forward references to this symbol, the fourth word contains the address of the last forward reference in bytes 4:5. Forward references are chained through this address.
The sign bit of the first word (D) is used to indicate if the symbol has been defined (D = +) or undefined (D = -). A symbol is undefined only if there has been a (forward) reference to it but it has not yet appeared in the label field.
The opcode table search routine uses a binary search, which requires that the opcode table be sorted alphabetically. If the opcode is not found, an error is flagged and a HLT instruction is assumed as the opcode. The division by two in the binary search could be replaced by a right shift of one bit on a binary MIX machine.
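A sketch of the search, in Python with an invented table layout, might read:

```python
# Sketch of the binary search over the alphabetically ordered opcode table.
def search_opcode(table, opcode):
    lo, hi = 0, len(table) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # on a binary machine: a 1-bit right shift
        if table[mid][0] == opcode:
            return mid                # index of the opcode table entry
        if table[mid][0] < opcode:
            lo = mid + 1
        else:
            hi = mid - 1
    return None                       # not found: flag error, assume HLT

OPTABLE = [('ADD', 1), ('HLT', 5), ('LDA', 8), ('STA', 24)]  # must be sorted
assert search_opcode(OPTABLE, 'LDA') == 2
```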
A linear search is used for the symbol table. This is not the best search possible, but it is relatively simple. Since the search is done only in this one routine, it can be changed later, if we find a linear search to be too slow. The symbol definition routine simply adds symbols to the end of the table.
Expression evaluation for the MIXAL language is relatively simple since it involves only three types of operands (symbols, numbers, and *) and evaluation of operators is strictly left to right. After checking first for a *, the EVALSYM routine uses GETSYM to get the next symbol from the input card. If the symbol has no letters (LETTER = 0), it is numeric and is converted from character code to a numeric value by the NUM instruction. If the symbol has letters, then the symbol table is searched and the value of the symbol in the symbol table is used.
The evaluation of a symbol is actually more complicated, due to forward references. Two special cases may arise in the symbol table search. First, the symbol may not be there. In this case, it must be entered into the table (using DEFINESYM). Then it can be treated like the second case, which involves symbols that are in the table but not yet defined. These are symbols which have previously appeared as forward references; they are distinguished by a "-" sign in the sign bit of the first word of the symbol table entry. In this case, the "value" of the symbol is the address of the previous reference, and the reference address is updated to be this instruction. Since forward references are not always allowed, the undefined nature of the symbol is noted by setting the variable UNDEFSYM to non-zero.
EVALSYM is used by EXPRESSION to evaluate the components of an expression. EXPRESSION evaluates the first component (using EVALSYM) and then examines the next character. If it is a delimiter, the evaluation stops; if it is an operator, the evaluation continues. This decision is made by the use of an array (OPERATOR) which is indexed by the character code of the next character. If the value is zero, the character is a delimiter. If it is non-zero, then the character is an operator and the value is in the range 1 to 5, to be used in a jump table for interpreting that operator. Thus, since the character code for "+" is 44, OPERATOR+44 is 1, OPERATOR+45 is 2 ("-"), OPERATOR+46 is 3 ("*"), OPERATOR+47 is 4 ("/"), and OPERATOR+54 is 5 (":"). All other values are zero.
If the character following an evaluated symbol is an operator, EVALSYM is called again and the two values are combined according to the operator. This process of finding operators, calling EVALSYM to evaluate the symbol, and combining its value with the previously computed value, continues until a delimiter is found.
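The table-driven classification can be sketched as follows; the array contents follow the character codes given above, and the rest is illustrative.

```python
# Sketch of the OPERATOR dispatch array, indexed by MIX character code.
# A zero entry marks a delimiter; values 1 to 5 select the operator.
OPERATOR = [0] * 64
OPERATOR[44] = 1   # "+"
OPERATOR[45] = 2   # "-"
OPERATOR[46] = 3   # "*"
OPERATOR[47] = 4   # "/"
OPERATOR[54] = 5   # ":"

def classify(char_code):
    """Return 0 for a delimiter, or 1-5 identifying the operator."""
    return OPERATOR[char_code]

assert classify(44) == 1 and classify(0) == 0
```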
Several problems must be attended to. Forward references are only allowed when NOFORWARD is zero. UNDEFSYM is used to tell when a symbol is a forward reference. Overflow must also be checked and flagged as an error. This includes expressions which exceed the range which is appropriate for their intended use. (An expression for an index or field specification can only be in the range 0 to 63.) These upper and lower bounds are passed as a parameter to EXPRESSION. The address of the lower bound is passed in register I1; the upper bound follows at 1,1.
The print routine is relatively straightforward, although lengthy. Each input statement generates an output line in the listing. The most common format is that of a machine instruction:

| Location counter | Generated instruction | Card image | Card number |
| --- | --- | --- | --- |
| +3516 | +3514 00 05 30 | STA ERRSAVEA | 469 |
For other types of assembly language statements, this format is not always appropriate. For the CON and ALF statements, the generated code is not normally interpreted as an instruction, so it could be better presented as a five-byte signed value. For the EQU statement, no code is generated, and no location counter can thus be meaningfully associated with the statement, but the operand expression should be printed. The ORIG and END statements likewise do not generate code, but their operand expressions, which are addresses (not five-byte values) should be printed. Thus, there are several different formats for output, depending upon the type of the opcode. These are encoded in the PRTFMT table.
The PRINTLINE routine formats the output line in LINE according to the entry in PRTFMT determined by OPTYPE. LINE is then packed into PRTBUF and printed. By positioning the CARD array in the LINE array, the copying from CARD to LINE is not needed for the card image. Since the location counter may have been changed by the time that the line is formatted and printed, a separate variable, PRINTLOC, is used to store the location to be printed on the output line.
On a binary machine, it is most useful to print the output in octal, not decimal. Since the CHAR instruction converts from numeric into decimal character code, a separate routine, OCTCHAR, is used to convert into octal character code. Notice that by simply changing the divide by 8 to a divide by 10, decimal output can be generated.
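The conversion by repeated division can be sketched as follows (a Python illustration, not the OCTCHAR code itself):

```python
# Sketch of OCTCHAR-style conversion by repeated division by 8; dividing by
# 10 instead would give decimal digits, as noted above.
def octchar(value, width=10):
    digits = ''
    for _ in range(width):
        value, digit = divmod(value, 8)
        digits = str(digit) + digits     # remainders come out low digit first
    return digits

assert octchar(8) == '0000000010'
assert int(octchar(1234), 8) == 1234
```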
Any system program must check its input carefully for possible errors, and an assembler has many opportunities for errors. Thus, errors must be checked for continuously, throughout the program.
When an error is detected, it should be signalled to the programmer somehow. For this assembler, we have elected to place flags in the first five characters to indicate any errors. If the first five characters of an output line are blank, no errors were found. If errors are found, the type of error can be identified by the character in the first five columns.
| Flag | Error |
| --- | --- |
| M | Multiply-defined label. This label has been previously defined. |
| L | Bad symbol or label. A symbol exceeds ten characters in length, or the label is numeric. |
| U | Undefined opcode. The symbolic opcode in the opcode field cannot be found in the opcode table. |
| F | Illegal forward reference. A forward reference occurs in an expression where it is not allowed (in an EQU, CON, ORIG, or END statement, or in the I or F field). |
| O | Expression overflow. The expression evaluation resulted in a value which exceeded one MIX word, or exceeded the allowable range for the expression. |
| S | Illegal syntax. The assembly language statement does not have the correct form. |
One additional routine which is useful is DEFINELAB. This routine uses DEFINESYM to define a label for an assembly language statement. It does not simply call DEFINESYM, however, but must first call SEARCHSYM to check for multiply-defined symbols, or symbols which have been forward referenced. The label is taken out of the variable LABEL, where it was put by GETFIELDS. The value of the label is in the A register.
With these subroutines to perform most of the processing, the main assembler code is now quite simple; its main loop is sketched below.
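The loop itself is MIX code; its shape, sketched in Python with stand-in names for the routines described above, is roughly:

```python
# Rough sketch of the main loop; read_card, get_fields, search_opcode, and
# the per-type handlers are stand-ins for the routines described above.
def assemble(read_card, get_fields, search_opcode, handlers):
    while True:
        card = read_card()                 # read the next statement
        label, opcode = get_fields(card)   # scan off label and opcode fields
        optype = search_opcode(opcode)     # type index from the opcode table
        done = handlers[optype](label, card)   # type-specific processing
        if done:                           # the END handler signals completion
            break
```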
Each opcode type has its own section of code to process each assembly language statement. For a machine opcode, this involves first defining the label (JMP DEFINELAB). Then the address expression is evaluated (JMP EXPRESSION) and saved in the 0:2 field of the word to be generated (VALUE). If the next character is a comma, an index field is evaluated; if the next character is a left parenthesis, a field specification is evaluated. Finally, the word is generated.
EQU statements are even simpler. The expression is evaluated, and the label defined to have this value.
ORIG statements are almost as simple as EQUs.
ALF statements are processed by simply picking up the five characters in columns 17 through 21 and packing them into VALUE.
The CON statement is perhaps the most complicated of the pseudo-instructions. It consists of evaluating the first expression, and checking for a field specification. If a field is given, the value is stored in that field. This repeats until no more expressions are found. The major complexity in the code comes from the necessity of checking that the field specification given is valid.
The last pseudo-instruction processed will be an END statement. First, any label is defined. Then the starting address is evaluated and an end-of-assembly flag is set.
With most of the assembler written, we can easily see what needs to be initialized. The first card input should be started, the loader tape should be rewound, and the location counter set to zero. All other variables were initialized by CON statements when they were declared.
Termination is more complicated. First, any unfinished loader block should be finished. Then the symbol table needs to be searched. Any undefined symbols are defined, and fix-up codes are output to the loader tape for symbols which were forward referenced. Finally, the start address is output and the last tape buffer written to tape.
The MIXAL assembler which we have presented in this chapter is a relatively simple one, despite its length. A number of improvements can be made. However, the basic structure of the assembler is such that most improvements can be made by only local modifications to a few subroutines. The entire assembler will not need to be rewritten.
Such improvements can be made, and some should be. One major evaluation criterion for a program is the ability of a programmer other than its author to understand it and to modify it correctly.
Another evaluation criterion is performance. This can be measured in terms of either memory size or speed. For a 4000-word MIX memory, the assembler occupies about 1600 words for its code and opcode table. This leaves over half of memory for the symbol table, allowing the symbol table to hold a maximum of about 600 symbols. Reducing the size of a symbol table entry to three words would increase this to 800 symbols.
Since card input, listing output, and tape output are all double buffered and overlap most of the computation, the speed of the assembler is bounded mainly by the speed of the I/O devices. The assembler took 80.9 seconds to assemble itself (1579 cards) on a 1-microsecond-per-time-unit MIX computer, with a 1200-card-per-minute card reader and a 1200-line-per-minute line printer. Of this time, 76.1 seconds were spent waiting for I/O devices. This means that only about 6 percent of the total execution time was needed for the actual assembly of the program; the remainder of the time is all I/O. Assemblers are typically very I/O-bound programs.
This simple measurement means that it would be very difficult to significantly speed up the assembler. A number of minor modifications can be made to speed up the assembler (such as a better symbol table search algorithm, better use of registers, and not saving registers which are not needed). But the effect of these changes on the total processing time would be minimal at best, and hence they are probably not worth the bother.
The basic function of an assembler is to translate a program from assembly language into loader code for loading. The major data structures which assist in this translation are the opcode table, which is used to translate from symbolic opcode to numeric opcode, and the symbol table, which is used to translate programmer-defined symbols to their numeric values.
With these major data structures, and with subroutines to search and enter the tables, evaluate symbols and expressions, print lines, handle errors, buffer and block loader output, read cards and get symbols, and initialize and terminate the assembler, the code for assembling each type of assembly language statement is relatively easy to write and understand.
The major problem for an assembler is forward references. These can be handled by either a two-pass assembler or a one-pass assembler. A one-pass assembler requires the loader to fix-up forward references.
Assemblers are a major topic for books on systems programming, and chapters on assemblers are included in Stone and Siewiorek (1975), Graham (1975), Hsiao (1975), and Donovan (1972). Gear (1974) and Ullman (1976) also discuss assemblers. The book by Barron (1969) has an extensive description of assemblers and how they work. For a look at the insides of a real assembler, try the Program Logic Manual for the assembler for the IBM 360 computers (IBM order number GY26-3716).
How to plot simple parabola using matplotlib in Python
In this tutorial, we are going to learn how to plot a parabola in Python. To plot a graph on a computer we need suitable functions and libraries, and this is where we make use of the matplotlib module.
- First, we need to understand what exactly matplotlib is.
- matplotlib is a Python library for data visualization.
- It creates 2D graphs and plots by using Python scripts.
- It is simple and basic: we load the data into the computer's memory, the computer draws it, and then we can display the plot.
Plot a simple parabola using matplotlib in Python
To plot the graphs in Python we use the popular library called matplotlib.
We refer to the same kinds of classes and objects whenever we use the matplotlib module. That's why we import matplotlib.pyplot, where matplotlib is the base package and pyplot is a plotting module inside it.
We can combine it with numpy or pandas.
The code below will draw the simple parabola y = x**2 (x squared).
from matplotlib.pyplot import *
from numpy import *

# 5000 evenly spaced x values between -1 and 1
x = linspace(-1, 1, 5000)
y = x**2              # the parabola y = x^2
plot(x, y)
xlabel("x axis")
ylabel("y axis")
print(x)              # print the sampled x values
show()
First, we import the matplotlib library; we also import numpy for linspace and other functions.
The function we used is linspace, which has three parameters: the first is the initial value, the second is the final value, and the third is the total number of sample points.
Next, we use the plot function to plot the x, y coordinates. xlabel and ylabel are used to label the x-axis and y-axis.
Finally, we use the show() function to view the graph.
We can observe the sampled values from -1 to 1 in the output of the print() function in the source code.
So, this is how we plot a simple parabola.
We can also plot another parabola equation. Let us take y**2 = x (y squared equals x).
from matplotlib.pyplot import *
from numpy import *

# parameterize by y, then compute x = y^2
y = linspace(-1, 1, 5000)
x = y**2
plot(x, y)
show()
Braja Sorensen Team, November 24, 2020
All the worksheets are adjusted for second grade students. Learning about fractions can be tons of fun for students of all ages.
Free printable fraction worksheets for 2nd grade. These worksheets are appropriate for kindergarten, 1st grade, and 2nd grade.
Still, you can change the difficulty level for most of the games. Reduce proper fractions, improper fractions, and mixed numbers to lowest terms. Students in other grade levels can also benefit from doing these math worksheets.
Free grade 3 math worksheets. You will then have two choices of ways to print this free 2nd grade math educational worksheet.
The students will be asked to identify the fractions for the shapes shown, and to shade in the shape for a given fraction. Free fractions worksheet #4: students practice the concept of fractions by reading and showing fractions.
All worksheets are printable PDF files. Free printable math worksheets for grade 3. Fraction worksheets for children to practice: suitable printable PDF math fractions worksheets for children in the following grades.
Easily download and print our 2nd grade math worksheets. Free printable fractions worksheets for 2nd grade teachers to print out as math homework or classroom exercises. We've created some free fraction strip printables, and free fraction circle printables you can use to teach these important math concepts to your kiddos.
This will take you to the individual page of the worksheet. These second grade fractions worksheets put your students' fraction skills to the test with word problems, graphing, adding and subtracting fractions, exercises with everyday objects, and more!
Introduction to fractions: fractions illustrated with circles. Help 1st graders reinforce their ABCs, letters, beginning sounds, phonemic awareness, and more with all our alphabet games and worksheets. These fractions worksheets are a great resource for children in kindergarten, 1st grade, 2nd grade, 3rd grade, 4th grade, and 5th grade.
Free printable fractions worksheets and fraction-telling mathematics activities for 2nd grade students: learn and practice different ways to say fractions, read and write fractions, and calculate with fractions. Third grade fraction worksheets and games. Use these fraction craft ideas and fun fraction activities with kindergarten, first grade, 2nd grade, 3rd grade, 4th grade, and 5th graders.
Teaching kids about fractions is important! Based on the Singaporean math curriculum for second graders, these math worksheets are made for students in grade level 2. Our grade 2 fraction worksheets introduce students to fractions as both parts of a whole and parts of a set. We cover identifying common fractions, comparing common fractions, and reading and writing fractions.
Free printable worksheets and activities for 2nd grade in PDF. 3rd grade is a good time to start learning about fractions.
Click on the free 2nd grade math worksheet you would like to print or download. What type of fraction is 1/4? It is a unit fraction. Fraction worksheets for grade 3 are printable as well.
Our grade 2 math worksheets are free and printable in PDF format. Topics include math, English, numbers, addition, subtraction, multiplication, science, and grammar activities.
Choose your grade 2 topic. Welcome to the second grade math games worksheets. These worksheets will produce fraction representations with denominators of 2 through 12.
These printable fraction activities, games and fraction worksheets can help. Free grade 2 math worksheets. If you’re getting ready to teach fractions, then you need to have the right resources for the job.
Identify a proper fraction, improper fraction, mixed number, unit fraction, like fractions, and unlike fractions like a pro with these printable types-of-fractions worksheets. Fraction Action is a great worksheet to use when introducing fractions to beginners.
Our grade 2 math worksheets emphasize numeracy as well as a conceptual understanding of math concepts. All worksheets are printable PDF documents. Free 2nd grade math worksheets for teachers, parents, and kids. You will find here a large collection of free printable math game worksheets and math games for grade 2.
Welcome to the third grade fraction worksheets. Visually adding simple fractions worksheets. This is just one of those times that youngsters get caught up in the process of completing something for school and forget to take care of some of the other essential things that they need to do.
5th Grade Science Fair, 2011-2012

What is a science fair? J. Glenn Edwards will be hosting a science fair, and your child will be participating! Each child will create a project that exemplifies all steps of the scientific method. The projects should be creative, neat, well organized, and clearly label the steps of the scientific method on a tri-fold poster board. Each step should have an explanation of what took place during that specific step.
Science Fair Schedule
- December 16: Project ideas due
- February 2: Projects due to school at 8 AM
- February 3: Parents invited to view projects and awards ceremony
Problem
State the problem or ask a question. The scientific method starts when you ask a question about something that you observe: How, What, When, Who, Which, Why, or Where?
Hypothesis
Make a hypothesis (prediction) of what you think will happen. A hypothesis is an educated guess about how things work: "If _____[I do this]_____, then _____[this]_____ will happen."
Experiment
Do the experiment. To test a hypothesis, scientists do an experiment.
Observation
Make observations and record data. Observations are only what you see, hear, or measure.
Conclusion
State your conclusion. A conclusion is a summary that explains your data.
Finding a project
- http://www.brainpop.com/science/scientificinquiry/scienceprojects/
- http://www.sciencebuddies.com/
- www.all-science-fair-projects.com/
- www.ipl.org
- http://school.discoveryeducation.com/sciencefaircentral/
Tri-fold Board Display
Title of Project
1. Problem: Identify a problem. What do you want to find out?
2. Hypothesis: Make an intelligent guess. What do you think will happen?
3. Materials: List the materials you used.
4. Procedures: What were the steps you used to solve the problem?
5. Data: Collect data from trials and tests.
6. Results: What happened when you did your experiment?
7. Conclusions: Answer to your problem. What did you learn from your experiment, and how is it related to your life?
pythagorean theorem 4
Initial Post Instructions
One of the most famous formulas in mathematics is the Pythagorean Theorem. It is based on a right triangle, and states the relationship among the lengths of the sides as a² + b² = c², where a and b refer to the legs of a right triangle and c refers to the hypotenuse. It has immeasurable uses in engineering, architecture, science, geometry, trigonometry, algebra, and in everyday applications. For your first post, search online for an article or video that describes how the Pythagorean Theorem can be used in the real world. Provide a one-paragraph summary of the article or video in your own words. Be sure you cite the article and provide the link.
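As a quick worked illustration (not part of the assignment), the theorem gives the hypotenuse directly:

```python
import math

# For legs a = 3 and b = 4, the hypotenuse is c = sqrt(3**2 + 4**2) = 5.
a, b = 3, 4
c = math.sqrt(a**2 + b**2)
print(c)  # 5.0
```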
Follow-Up Post Instructions
Respond to at least two peers in a substantive, content-specific way. Further the dialogue by providing more information and clarification.
In mathematics, infinitesimals are things so small that there is no way to measure them. The insight with exploiting infinitesimals was that entities could still retain certain specific properties, such as angle or slope, even though these entities were quantitatively small. The word infinitesimal comes from a 17th-century Modern Latin coinage infinitesimus, which originally referred to the "infinity-th" item in a sequence. Infinitesimals are a basic ingredient in the procedures of infinitesimal calculus as developed by Leibniz, including the law of continuity and the transcendental law of homogeneity. In common speech, an infinitesimal object is an object that is smaller than any feasible measurement, but not zero in size—or, so small that it cannot be distinguished from zero by any available means. Hence, when used as an adjective, "infinitesimal" means "extremely small". To give it a meaning, it usually must be compared to another infinitesimal object in the same context (as in a derivative). Infinitely many infinitesimals are summed to produce an integral.
The concept of infinitesimals was originally introduced around 1670 by either Nicolaus Mercator or Gottfried Wilhelm Leibniz. Archimedes used what eventually came to be known as the method of indivisibles in his work The Method of Mechanical Theorems to find areas of regions and volumes of solids. In his formal published treatises, Archimedes solved the same problem using the method of exhaustion. The 15th century saw the work of Nicholas of Cusa, further developed in the 17th century by Johannes Kepler, in particular calculation of area of a circle by representing the latter as an infinite-sided polygon. Simon Stevin's work on decimal representation of all numbers in the 16th century prepared the ground for the real continuum. Bonaventura Cavalieri's method of indivisibles led to an extension of the results of the classical authors. The method of indivisibles related to geometrical figures as being composed of entities of codimension 1. John Wallis's infinitesimals differed from indivisibles in that he would decompose geometrical figures into infinitely thin building blocks of the same dimension as the figure, preparing the ground for general methods of the integral calculus. He exploited an infinitesimal denoted 1/∞ in area calculations.
The use of infinitesimals by Leibniz relied upon heuristic principles, such as the law of continuity: what succeeds for the finite numbers succeeds also for the infinite numbers and vice versa; and the transcendental law of homogeneity that specifies procedures for replacing expressions involving inassignable quantities, by expressions involving only assignable ones. The 18th century saw routine use of infinitesimals by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange. Augustin-Louis Cauchy exploited infinitesimals both in defining continuity in his Cours d'Analyse, and in defining an early form of a Dirac delta function. As Cantor and Dedekind were developing more abstract versions of Stevin's continuum, Paul du Bois-Reymond wrote a series of papers on infinitesimal-enriched continua based on growth rates of functions. Du Bois-Reymond's work inspired both Émile Borel and Thoralf Skolem. Borel explicitly linked du Bois-Reymond's work to Cauchy's work on rates of growth of infinitesimals. Skolem developed the first non-standard models of arithmetic in 1934. A mathematical implementation of both the law of continuity and infinitesimals was achieved by Abraham Robinson in 1961, who developed non-standard analysis based on earlier work by Edwin Hewitt in 1948 and Jerzy Łoś in 1955. The hyperreals implement an infinitesimal-enriched continuum and the transfer principle implements Leibniz's law of continuity. The standard part function implements Fermat's adequality.
Vladimir Arnold wrote in 1990:
- Nowadays, when teaching analysis, it is not very popular to talk about infinitesimal quantities. Consequently present-day students are not fully in command of this language. Nevertheless, it is still necessary to have command of it.
History of the infinitesimal
The notion of infinitely small quantities was discussed by the Eleatic School. The Greek mathematician Archimedes (c.287 BC–c.212 BC), in The Method of Mechanical Theorems, was the first to propose a logically rigorous definition of infinitesimals. His Archimedean property defines a number x as infinite if it satisfies the conditions |x|>1, |x|>1+1, |x|>1+1+1, ..., and infinitesimal if x≠0 and a similar set of conditions holds for x and the reciprocals of the positive integers. A number system is said to be Archimedean if it contains no infinite or infinitesimal members.
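In modern notation, these conditions can be restated compactly (a restatement for clarity, not Archimedes' own formulation):

$$x \text{ is infinite} \iff |x| > n \text{ for every positive integer } n$$
$$x \text{ is infinitesimal} \iff x \neq 0 \text{ and } |x| < \tfrac{1}{n} \text{ for every positive integer } n$$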
The English mathematician John Wallis introduced the expression 1/∞ in his 1655 book Treatise on the Conic Sections. The symbol, which denotes the reciprocal, or inverse, of ∞, is the symbolic representation of the mathematical concept of an infinitesimal. In his Treatise on the Conic Sections, Wallis also discusses the concept of a relationship between the symbolic representation of infinitesimal 1/∞ that he introduced and the concept of infinity for which he introduced the symbol ∞. The concept suggests a thought experiment of adding an infinite number of parallelograms of infinitesimal width to form a finite area. This concept was the predecessor to the modern method of integration used in integral calculus. The conceptual origins of the concept of the infinitesimal 1/∞ can be traced as far back as the Greek philosopher Zeno of Elea, whose Zeno's dichotomy paradox was the first mathematical concept to consider the relationship between a finite interval and an interval approaching that of an infinitesimal-sized interval.
Infinitesimals were the subject of political and religious controversies in 17th century Europe, including a ban on infinitesimals issued by clerics in Rome in 1632.
Prior to the invention of calculus mathematicians were able to calculate tangent lines using Pierre de Fermat's method of adequality and René Descartes' method of normals. There is debate among scholars as to whether the method was infinitesimal or algebraic in nature. When Newton and Leibniz invented the calculus, they made use of infinitesimals, Newton's fluxions and Leibniz' differential. The use of infinitesimals was attacked as incorrect by Bishop Berkeley in his work The Analyst. Mathematicians, scientists, and engineers continued to use infinitesimals to produce correct results. In the second half of the nineteenth century, the calculus was reformulated by Augustin-Louis Cauchy, Bernard Bolzano, Karl Weierstrass, Cantor, Dedekind, and others using the (ε, δ)-definition of limit and set theory. While the followers of Cantor, Dedekind, and Weierstrass sought to rid analysis of infinitesimals, and their philosophical allies like Bertrand Russell and Rudolf Carnap declared that infinitesimals are pseudoconcepts, Hermann Cohen and his Marburg school of neo-Kantianism sought to develop a working logic of infinitesimals. The mathematical study of systems containing infinitesimals continued through the work of Levi-Civita, Giuseppe Veronese, Paul du Bois-Reymond, and others, throughout the late nineteenth and the twentieth centuries, as documented by Philip Ehrlich (2006). In the 20th century, it was found that infinitesimals could serve as a basis for calculus and analysis (see hyperreal numbers).
In extending the real numbers to include infinite and infinitesimal quantities, one typically wishes to be as conservative as possible by not changing any of their elementary properties. This guarantees that as many familiar results as possible are still available. Typically elementary means that there is no quantification over sets, but only over elements. This limitation allows statements of the form "for any number x..." For example, the axiom that states "for any number x, x + 0 = x" would still apply. The same is true for quantification over several numbers, e.g., "for any numbers x and y, xy = yx." However, statements of the form "for any set S of numbers ..." may not carry over. Logic with this limitation on quantification is referred to as first-order logic.
The resulting extended number system cannot agree with the reals on all properties that can be expressed by quantification over sets, because the goal is to construct a non-Archimedean system, and the Archimedean principle can be expressed by quantification over sets. One can conservatively extend any theory including reals, including set theory, to include infinitesimals, just by adding a countably infinite list of axioms that assert that a number is smaller than 1/2, 1/3, 1/4 and so on. Similarly, the completeness property cannot be expected to carry over, because the reals are the unique complete ordered field up to isomorphism.
We can distinguish three levels at which a nonarchimedean number system could have first-order properties compatible with those of the reals:
- An ordered field obeys all the usual axioms of the real number system that can be stated in first-order logic. For example, the commutativity axiom x + y = y + x holds.
- A real closed field has all the first-order properties of the real number system, regardless of whether they are usually taken as axiomatic, for statements involving the basic ordered-field relations +, ×, and ≤. This is a stronger condition than obeying the ordered-field axioms. More specifically, one includes additional first-order properties, such as the existence of a root for every odd-degree polynomial. For example, every number must have a cube root.
- The system could have all the first-order properties of the real number system for statements involving any relations (regardless of whether those relations can be expressed using +, ×, and ≤). For example, there would have to be a sine function that is well defined for infinite inputs; the same is true for every real function.
Systems in category 1, at the weak end of the spectrum, are relatively easy to construct, but do not allow a full treatment of classical analysis using infinitesimals in the spirit of Newton and Leibniz. For example, the transcendental functions are defined in terms of infinite limiting processes, and therefore there is typically no way to define them in first-order logic. Increasing the analytic strength of the system by passing to categories 2 and 3, we find that the flavor of the treatment tends to become less constructive, and it becomes more difficult to say anything concrete about the hierarchical structure of infinities and infinitesimals.
Number systems that include infinitesimals
An example from category 1 above is the field of Laurent series with a finite number of negative-power terms. For example, the Laurent series consisting only of the constant term 1 is identified with the real number 1, and the series with only the linear term x is thought of as the simplest infinitesimal, from which the other infinitesimals are constructed. Dictionary ordering is used, which is equivalent to considering higher powers of x as negligible compared to lower powers. David O. Tall refers to this system as the super-reals, not to be confused with the superreal number system of Dales and Woodin. Since a Taylor series evaluated with a Laurent series as its argument is still a Laurent series, the system can be used to do calculus on transcendental functions if they are analytic. These infinitesimals have different first-order properties than the reals because, for example, the basic infinitesimal x does not have a square root.
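Category 1 can be made concrete with a few lines of code. The following is a minimal sketch in Python (the class name, the exponent-to-coefficient dictionary representation, and the test values are illustrative choices of our own, not a standard library API) of formal series in an infinitesimal x compared by dictionary ordering, so that 0 < x < r holds for every positive real r:

```python
class LaurentSeries:
    """A finite sum of terms c * x**k (integer k, finitely many negative
    powers), where x is treated as a positive infinitesimal."""

    def __init__(self, coeffs):
        # coeffs maps integer exponent -> real coefficient; drop zeros.
        self.coeffs = {k: c for k, c in coeffs.items() if c != 0}

    def __sub__(self, other):
        out = dict(self.coeffs)
        for k, c in other.coeffs.items():
            out[k] = out.get(k, 0) - c
        return LaurentSeries(out)

    def sign(self):
        # Dictionary ordering: the lowest-degree nonzero term dominates,
        # because higher powers of x are negligible next to lower powers.
        if not self.coeffs:
            return 0
        lead = min(self.coeffs)
        return 1 if self.coeffs[lead] > 0 else -1

    def __lt__(self, other):
        return (self - other).sign() < 0


def real(r):
    """Embed an ordinary real number as a constant series."""
    return LaurentSeries({0: r})


x = LaurentSeries({1: 1})                 # the basic infinitesimal

print(real(0) < x)                        # True: x is positive
print(x < real(1e-300))                   # True: x is below every positive real
print(LaurentSeries({2: 1}) < x)          # True: x**2 is "even smaller"
```

The comparison rule encodes exactly the convention described above: the lowest power of x present in the difference decides the sign.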
The Levi-Civita field
The Levi-Civita field is similar to the Laurent series, but is algebraically closed. For example, the basic infinitesimal x has a square root. This field is rich enough to allow a significant amount of analysis to be done, but its elements can still be represented on a computer in the same sense that real numbers can be represented in floating point.
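As a small illustration of that computability claim, fractional powers can be accommodated by allowing rational exponents. The sketch below is a toy representation of our own (not the actual data structures used in Levi-Civita arithmetic packages), showing the basic infinitesimal acquiring a square root:

```python
from fractions import Fraction

def mul(a, b):
    """Multiply two numbers written as {rational exponent: coefficient},
    i.e., finite sums of terms c * x**q with q rational."""
    out = {}
    for qa, ca in a.items():
        for qb, cb in b.items():
            out[qa + qb] = out.get(qa + qb, 0) + ca * cb
    return {q: c for q, c in out.items() if c != 0}

x = {Fraction(1): 1.0}            # the basic infinitesimal
sqrt_x = {Fraction(1, 2): 1.0}    # rational exponents admit a square root

print(mul(sqrt_x, sqrt_x) == x)   # True: (x**(1/2))**2 == x
```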
Transseries

The field of transseries is larger than the Levi-Civita field. An example of a transseries is $e^{\sqrt{\ln\ln x}} + \ln\ln x + \sum_{j=0}^{\infty} e^{x} x^{-j}$, where for purposes of ordering $x$ is considered infinite.
Surreal numbers

Conway's surreal numbers fall into category 2. They are a system designed to be as rich as possible in different sizes of numbers, but not necessarily for convenience in doing analysis. Certain transcendental functions can be carried over to the surreals, including logarithms and exponentials, but most, e.g., the sine function, cannot. The existence of any particular surreal number, even one that has a direct counterpart in the reals, is not known a priori and must be proved.
Hyperreals

The most widespread technique for handling infinitesimals is the hyperreals, developed by Abraham Robinson in the 1960s. They fall into category 3 above, having been designed that way so that all of classical analysis can be carried over from the reals. This property of being able to carry over all relations in a natural way is known as the transfer principle, proved by Jerzy Łoś in 1955. For example, the transcendental function sin has a natural counterpart *sin that takes a hyperreal input and gives a hyperreal output, and similarly the set of natural numbers $\mathbb{N}$ has a natural counterpart $^{*}\mathbb{N}$, which contains both finite and infinite integers. A proposition such as $\forall n \in \mathbb{N}, \sin n\pi = 0$ carries over to the hyperreals as $\forall n \in {}^{*}\mathbb{N}, {}^{*}\!\sin n\pi = 0$.
Dual numbers

In linear algebra, the dual numbers extend the reals by adjoining one infinitesimal, the new element ε with the property ε² = 0 (that is, ε is nilpotent). Every dual number has the form z = a + bε with a and b being uniquely determined real numbers.
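Because the ε² term is simply discarded, dual-number arithmetic mechanically computes first derivatives, which is the idea behind forward-mode automatic differentiation. A minimal sketch (the class and helper names are our own, not a standard API):

```python
class Dual:
    """Number a + b*eps with eps**2 == 0; b carries the derivative."""

    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps,
        # since the eps**2 term is annihilated.
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate f at x + eps and read off the infinitesimal part."""
    return f(Dual(x, 1.0)).b


f = lambda t: t * t * t + 3 * t   # f(t) = t**3 + 3t, so f'(t) = 3t**2 + 3
print(derivative(f, 2.0))         # 15.0
```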
Smooth infinitesimal analysis
Synthetic differential geometry or smooth infinitesimal analysis have roots in category theory. This approach departs from the classical logic used in conventional mathematics by denying the general applicability of the law of excluded middle, i.e., not (a ≠ b) does not have to mean a = b. A nilsquare or nilpotent infinitesimal can then be defined. This is a number x where x² = 0 is true, but x = 0 need not be true at the same time. Since the background logic is intuitionistic logic, it is not immediately clear how to classify this system with regard to classes 1, 2, and 3. Intuitionistic analogues of these classes would have to be developed first.
Infinitesimal delta functions
Cauchy used an infinitesimal α to write down a unit impulse, an infinitely tall and narrow Dirac-type delta function $\delta_\alpha$ satisfying $\int F(x)\,\delta_\alpha(x)\,dx = F(0)$, in a number of articles in 1827; see Laugwitz (1989). Cauchy defined an infinitesimal in 1821 (Cours d'Analyse) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.
Modern set-theoretic approaches allow one to define infinitesimals via the ultrapower construction, where a null sequence becomes an infinitesimal in the sense of an equivalence class modulo a relation defined in terms of a suitable ultrafilter. The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals.
The method of constructing infinitesimals of the kind used in nonstandard analysis depends on the model and on which collection of axioms is used. We consider here systems where infinitesimals can be shown to exist.
In 1936 Maltsev proved the compactness theorem. This theorem is fundamental for the existence of infinitesimals, as it proves that it is possible to formalise them. A consequence of this theorem is that if there is a number system in which it is true that for any positive integer n there is a positive number x such that 0 < x < 1/n, then there exists an extension of that number system in which it is true that there exists a positive number x such that for any positive integer n we have 0 < x < 1/n. The possibility of switching "for any" and "there exists" is crucial. The first statement is true in the real numbers as given in ZFC set theory: for any positive integer n it is possible to find a real number between 1/n and zero, but this real number depends on n; here, one chooses n first and then finds the corresponding x. In the second expression, the statement says that there is an x (at least one), chosen first, which is between 0 and 1/n for any n. In this case x is infinitesimal. This is not true in the real numbers (R) given by ZFC. Nonetheless, the compactness theorem proves that there is a model (a number system) in which this is true. The question is: what is this model? What are its properties? Is there only one such model?
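The two statements differ only in the order of the quantifiers; written out explicitly (a standard formalization, not a quotation from the sources above):

$$\forall n \in \mathbb{Z}^{+}\ \exists x > 0 \ \left( x < \tfrac{1}{n} \right) \quad \text{(true in the reals: } x \text{ may depend on } n\text{)}$$

$$\exists x > 0\ \forall n \in \mathbb{Z}^{+}\ \left( x < \tfrac{1}{n} \right) \quad \text{(false in the reals; true in a suitable extension, where } x \text{ is infinitesimal)}$$

There are essentially two ways to obtain a system in which the second statement holds: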
- 1) Extend the number system so that it contains more numbers than the real numbers.
- 2) Extend the axioms (or extend the language) so that the distinction between the infinitesimals and non-infinitesimals can be made in the real numbers themselves.
In 1960, Abraham Robinson provided an answer following the first approach. The extended set is called the hyperreals and contains nonzero numbers less in absolute value than any positive real number. The method may be considered relatively complex, but it does prove that infinitesimals exist in the universe of ZFC set theory. The real numbers are called standard numbers and the new non-real hyperreals are called nonstandard.
In 1977 Edward Nelson provided an answer following the second approach. The extended axioms are IST, which stands either for Internal set theory or for the initials of the three extra axioms: Idealization, Standardization, Transfer. In this system we consider that the language is extended in such a way that we can express facts about infinitesimals. The real numbers are either standard or nonstandard. An infinitesimal is a nonstandard real number that is less, in absolute value, than any positive standard real number.
In 2006 Karel Hrbacek developed an extension of Nelson's approach in which the real numbers are stratified in (infinitely) many levels; i.e., in the coarsest level there are neither infinitesimals nor unlimited numbers. Infinitesimals are at a finer level, and there are also infinitesimals with respect to this new level, and so on.
Infinitesimals in teaching
Calculus textbooks based on infinitesimals include the classic Calculus Made Easy by Silvanus P. Thompson (bearing the motto "What one fool can do another can") and the German text Mathematik für Mittlere Technische Fachschulen der Maschinenindustrie by R. Neuendorff. Pioneering works based on Abraham Robinson's infinitesimals include texts by Stroyan (dating from 1972) and Howard Jerome Keisler (Elementary Calculus: An Infinitesimal Approach). Students easily relate to the intuitive notion of an infinitesimal difference 1 − "0.999...", where "0.999..." differs from its standard meaning as the real number 1 and is reinterpreted as an infinite terminating extended decimal that is strictly less than 1.
Another elementary calculus text that uses the theory of infinitesimals as developed by Robinson is Infinitesimal Calculus by Henle and Kleinberg, originally published in 1979. The authors introduce the language of first-order logic and demonstrate the construction of a first-order model of the hyperreal numbers. The text provides an introduction to the basics of integral and differential calculus in one dimension, including sequences and series of functions. In an appendix, they also treat the extension of their model to the hyperhyperreals and demonstrate some applications for the extended model.
Functions tending to zero
In a related but somewhat different sense, which evolved from the original definition of "infinitesimal" as an infinitely small quantity, the term has also been used to refer to a function tending to zero. More precisely, Loomis and Sternberg's Advanced Calculus defines the function class of infinitesimals, $\mathfrak{I}$, as a subset of functions $f : V \to W$ between normed vector spaces by

$$\mathfrak{I}(V, W) = \{ f : V \to W \mid f(0) = 0,\ (\forall \epsilon > 0)(\exists \delta > 0)\ \|\xi\| < \delta \implies \|f(\xi)\| < \epsilon \},$$

as well as two related classes $\mathfrak{O}$ and $\mathfrak{o}$ (see Big-O notation) by

$$\mathfrak{O}(V, W) = \{ f : V \to W \mid f(0) = 0,\ (\exists r > 0)(\exists c > 0)\ \|\xi\| < r \implies \|f(\xi)\| \le c\,\|\xi\| \},$$

and

$$\mathfrak{o}(V, W) = \{ f : V \to W \mid f(0) = 0,\ \lim_{\|\xi\| \to 0} \|f(\xi)\| / \|\xi\| = 0 \}.$$

The set inclusions $\mathfrak{o}(V, W) \subsetneq \mathfrak{O}(V, W) \subsetneq \mathfrak{I}(V, W)$ generally hold. That the inclusions are proper is demonstrated by the real-valued functions of a real variable $f : x \mapsto |x|^{1/2}$, $g : x \mapsto x$, and $h : x \mapsto x^2$:

$$f, g, h \in \mathfrak{I}(\mathbb{R}, \mathbb{R}), \qquad g, h \in \mathfrak{O}(\mathbb{R}, \mathbb{R}), \qquad h \in \mathfrak{o}(\mathbb{R}, \mathbb{R}),$$

but $f, g \notin \mathfrak{o}(\mathbb{R}, \mathbb{R})$ and $f \notin \mathfrak{O}(\mathbb{R}, \mathbb{R})$.
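A quick numerical illustration of these memberships (a sketch only: the limit ratios are sampled at a few points rather than proved):

```python
# Sample the ratio |f(xi)| / |xi| as xi -> 0 for the three test functions.
fns = {
    "f(x) = |x|^(1/2)": lambda t: abs(t) ** 0.5,
    "g(x) = x":         lambda t: t,
    "h(x) = x^2":       lambda t: t * t,
}
for name, fn in fns.items():
    ratios = [abs(fn(xi)) / xi for xi in (1e-2, 1e-4, 1e-8)]
    print(name, ["%.3g" % r for r in ratios])

# f's ratios blow up (f is in I but in neither O nor o); g's stay at 1
# (in O but not o); h's tend to 0 (in o), matching the proper inclusions.
```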
As an application of these definitions, a mapping $F : V \to W$ between normed vector spaces is defined to be differentiable at $\alpha \in V$ if there is a $T \in \mathrm{Hom}(V, W)$ [i.e., a bounded linear map $V \to W$] such that

$$[F(\alpha + \xi) - F(\alpha)] - T(\xi) \in \mathfrak{o}(V, W)$$

in a neighborhood of $\alpha$. If such a map exists, it is unique; this map is called the differential and is denoted by $dF_\alpha$, coinciding with the traditional notation for the classical (though logically flawed) notion of a differential as an infinitely small "piece" of $F$. This definition represents a generalization of the usual definition of differentiability for vector-valued functions of (open subsets of) Euclidean spaces.
Array of random variables
The notion of an infinitesimal array is essential in some central limit theorems. It is easily seen, by monotonicity of the expectation operator, that any array satisfying Lindeberg's condition is infinitesimal; such arrays thus play an important role in Lindeberg's central limit theorem (a generalization of the classical central limit theorem).
- Bell, John L. (6 September 2013). "Continuity and Infinitesimals". Stanford Encyclopedia of Philosophy.
- Katz, Mikhail G.; Sherry, David (2012), "Leibniz's Infinitesimals: Their Fictionality, Their Modern Implementations, and Their Foes from Berkeley to Russell and Beyond", Erkenntnis, 78 (3): 571–625, arXiv:1205.0174, doi:10.1007/s10670-012-9370-y
- Reviel, Netz; Saito, Ken; Tchernetska, Natalie (2001). "A New Reading of Method Proposition 14: Preliminary Evidence from the Archimedes Palimpsest (Part 1)". SCIAMVS. 2: 9–29.
- Arnolʹd, V. I. Huygens and Barrow, Newton and Hooke. Pioneers in mathematical analysis and catastrophe theory from evolvents to quasicrystals. Translated from the Russian by Eric J. F. Primrose. Birkhäuser Verlag, Basel, 1990. p. 27
- Archimedes, The Method of Mechanical Theorems; see Archimedes Palimpsest
- Alexander, Amir (2014). Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. Scientific American / Farrar, Straus and Giroux. ISBN 978-0-374-17681-5.
- Berkeley, George (1734). The Analyst: A Discourse Addressed to an Infidel Mathematician. London.
- Mormann, Thomas; Katz, Mikhail (Fall 2013). "Infinitesimals as an Issue of Neo-Kantian Philosophy of Science". HOPOS: The Journal of the International Society for the History of Philosophy of Science. 3 (2): 236–280. arXiv:1304.1027. doi:10.1086/671348. JSTOR 10.1086/671348.
- "Infinitesimals in Modern Mathematics". Jonhoyle.com. Archived from the original on 2011-07-13. Retrieved 2011-03-11. Cite uses deprecated parameter
- Shamseddine, Khodr. "Analysis on the Levi-Civita Field, a Brief Overview" (PDF). Archived from the original (PDF) on 2011-06-08.
- Edgar, Gerald A. (2010). "Transseries for Beginners". Real Analysis Exchange. 35: 253–310. arXiv:0801.4877v5.
- Thompson, Silvanus P. (1914). Calculus Made Easy (Second ed.). New York: The Macmillan Company.
- R. Neuendorff (1912) Lehrbuch der Mathematik für Mittlere Technische Fachschulen der Maschinenindustrie, Verlag Julius Springer, Berlin.
- Ely, Robert (2010). "Nonstandard student conceptions about infinitesimals" (PDF). Journal for Research in Mathematics Education. 41 (2): 117–146. JSTOR 20720128. Archived (PDF) from the original on 2019-05-06.
- Katz, Karin Usadi; Katz, Mikhail G. (2010). "When is .999... less than 1?" (PDF). The Montana Mathematics Enthusiast. 7 (1): 3–30. arXiv:1007.3018. ISSN 1551-3440. Archived from the original (PDF) on 2012-12-07. Retrieved 2012-12-07.
- Henle, James M.; Kleinberg, Eugene (1979). Infinitesimal Calculus. The MIT Press, rereleased by Dover. ISBN 978-0-262-08097-2.
- Loomis, Lynn Harold; Sternberg, Shlomo (2014). Advanced Calculus. Hackensack, N.J.: World Scientific. pp. 138–142. ISBN 978-981-4583-92-3.
- This notation is not to be confused with the many other distinct usages of d in calculus that are all loosely related to the classical notion of the differential as "taking an infinitesimally small piece of something": (1) in the expression $\int f(x)\,dg(x)$, $dg(x)$ indicates Riemann-Stieltjes integration with respect to the integrator function $g$; (2) in the expression $\int f\,d\mu$, $d\mu$ symbolizes Lebesgue integration with respect to a measure $\mu$; (3) in the expression $\int f\,dV$, $dV$ indicates integration with respect to volume; (4) in the expression $d\omega$, the letter $d$ represents the exterior derivative operator, and so on.
- Barczyk, Adam; Janssen, Arnold; Pauly, Markus (2011). "The Asymptotics of L-statistics for non-i.i.d. variables with heavy tails" (PDF). Probability and Mathematical Statistics. 31 (2): 285–299. Archived (PDF) from the original on 2019-08-21.
- B. Crowell, "Calculus" (2003)
- Ehrlich, P. (2006) The rise of non-Archimedean mathematics and the roots of a misconception. I. The emergence of non-Archimedean systems of magnitudes. Arch. Hist. Exact Sci. 60, no. 1, 1–121.
- Malet, Antoni. "Barrow, Wallis, and the remaking of seventeenth century indivisibles". Centaurus 39 (1997), no. 1, 67–92.
- J. Keisler, "Elementary Calculus" (2000) University of Wisconsin
- K. Stroyan "Foundations of Infinitesimal Calculus" (1993)
- Stroyan, K. D.; Luxemburg, W. A. J. Introduction to the theory of infinitesimals. Pure and Applied Mathematics, No. 72. Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1976.
- Robert Goldblatt (1998) "Lectures on the hyperreals" Springer.
- Cutland et al. "Nonstandard Methods and Applications in Mathematics" (2007) Lecture Notes in Logic 25, Association for Symbolic Logic.
- "The Strength of Nonstandard Analysis" (2007) Springer.
- Laugwitz, D. (1989). "Definite values of infinite sums: aspects of the foundations of infinitesimal analysis around 1820". Archive for History of Exact Sciences. 39 (3): 195–245. doi:10.1007/BF00329867.
- Yamashita, H.: Comment on: "Pointwise analysis of scalar Fields: a nonstandard approach" [J. Math. Phys. 47 (2006), no. 9, 092301; 16 pp.]. J. Math. Phys. 48 (2007), no. 8, 084101, 1 page.
Alternate and Corresponding Angles Worksheets
When a pair of parallel lines is crossed, or intersected, by another line, pairs of angles with special properties are created. This two-page worksheet teaches that parallel lines cut by a transversal will create multiple pairs of congruent angles, after reviewing examples of corresponding angles and alternate interior angles. For the last week of term, revisit alternate, corresponding and co-interior angles and solving angle problems; finally, try this worksheet from Pearson to practise what you've learned in this lesson.
Corresponding Angles and Alternate Angles Worksheets

Corresponding and alternate angles are formed when a straight line passes through two parallel lines. Parallel means that two lines are always the same distance away from each other and therefore will never meet; parallel lines are marked with matching arrows, as shown in the examples below.
Identify Corresponding and Alternate Angles Worksheets

These worksheets are designed to help students identify corresponding and alternate angles. They were designed for a lower-ability KS3 group; for the first worksheet, students should also draw the Z or F shape.
Corresponding and Alternate Angles Worksheets for Teachers

Showing the top 8 worksheets in the category Corresponding and Alternate Angles. Some of the worksheets displayed are: Corresponding Angles; Find the Missing Angles; Alternate Angles; Write the Missing Alternate Angle; Identify the Angle Pair as Either Corresponding or Alternate; Congruence and Geometrical Theorems; Parallel Lines and Transversals; and Identifying Angles.
Corresponding Angles Worksheet: Find the Missing Angles

A free math worksheet from Mathworksheets4Kids, with spaces for the student's name and score, in which students find the missing angles.
Alternate Angles Worksheets (Cazoomy)

Alternate angles worksheet 3 contains questions for Year 7 working at grade 2, and alternate angles worksheet 5 contains questions at grade 4 targeting Year 9. Alternate angles on parallel lines are also known as Z angles, because the shape formed between the parallel lines is a Z shape.

Angles Worksheets: Parallel Lines Cut by a Transversal

Traverse through this array of free printable worksheets to learn the major outcomes of angles formed by parallel lines cut by a transversal. The topic mainly focuses on concepts like alternate angles, same-side angles and corresponding angles. Equipped with free worksheets on identifying the angle relationships, finding the measures of interior and exterior angles, and determining whether the given pairs of angles are supplementary or congruent, this set is a must-have for your class.

Angles Formed by a Transversal Worksheets

Traverse through this huge assortment of transversal worksheets to acquaint 7th grade, 8th grade and high school students with the properties of several angle pairs, like the alternate angles, corresponding angles and same-side angles formed when a transversal cuts a pair of parallel lines. These all-new resources facilitate a comprehensive practice of the two broad categories of angles.
Angles in Parallel Lines Worksheet (Teaching Resources)

This simple worksheet is a good way to introduce or review angles in parallel lines. It begins with diagrams of corresponding, alternate and allied (supplementary) angles, then there are some examples to work through with your class. On the second page there is a short exercise with similar problems for the class to do themselves.
Understand Alternate, Corresponding and Co-Interior Angles

Learn about alternate, corresponding and co-interior angles, and solve angle problems when working with parallel and intersecting lines.
Stereoscopy (also called stereoscopics, or stereo imaging) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from Greek στερεός (stereos) 'firm, solid', and σκοπέω (skopeō) 'to look, to see'. Any stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of stereo images which could be viewed using a stereoscope.
Most stereoscopic methods present a pair of two-dimensional images to the viewer. The left image is presented to the left eye and the right image is presented to the right eye. When viewed, the human brain perceives the images as a single 3D view, giving the viewer the perception of 3D depth. However, the 3D effect lacks proper focal depth, which gives rise to the vergence-accommodation conflict.
Stereoscopy is distinguished from other types of 3D displays that display an image in three full dimensions, allowing the observer to increase information about the 3-dimensional objects being displayed by head and eye movements.
Stereoscopy creates the illusion of three-dimensional depth from a pair of two-dimensional images. Human vision, including the perception of depth, is a complex process, which only begins with the acquisition of visual information taken in through the eyes; much processing ensues within the brain, as it strives to make sense of the raw information. One of the functions that occur within the brain as it interprets what the eyes see is assessing the relative distances of objects from the viewer, and the depth dimension of those objects. The cues that the brain uses to gauge relative distances and depth in a perceived scene include:
- Occlusion - The overlapping of one object by another
- Subtended visual angle of an object of known size
- Linear perspective (convergence of parallel edges)
- Vertical position (objects closer to the horizon in the scene tend to be perceived as farther away)
- Haze or contrast, saturation, and color, greater distance generally being associated with greater haze, desaturation, and a shift toward blue
- Change in size of textured pattern detail
(All but the first two of the above cues exist in traditional two-dimensional images, such as paintings, photographs, and television.)
Stereoscopy is the production of the illusion of depth in a photograph, movie, or other two-dimensional image by the presentation of a slightly different image to each eye, which adds the first of these cues (stereopsis). The two images are then combined in the brain to give the perception of depth. Because all points in the image produced by stereoscopy focus at the same plane regardless of their depth in the original scene, the second cue, focus, is not duplicated, and therefore the illusion of depth is incomplete. There are also two main effects of stereoscopy that are unnatural for human vision: (1) the mismatch between convergence and accommodation, caused by the difference between an object's perceived position in front of or behind the display or screen and the real origin of that light; and (2) possible crosstalk between the eyes, caused by imperfect image separation in some methods of stereoscopy.
Although the term "3D" is ubiquitously used, the presentation of dual 2D images is distinctly different from displaying an image in three full dimensions. The most notable difference is that, in the case of "3D" displays, the observer's head and eye movement do not change the information received about the 3-dimensional objects being viewed. Holographic displays and volumetric display do not have this limitation. Just as it is not possible to recreate a full 3-dimensional sound field with just two stereophonic speakers, it is an overstatement to call dual 2D images "3D". The accurate term "stereoscopic" is more cumbersome than the common misnomer "3D", which has been entrenched by many decades of unquestioned misuse. Although most stereoscopic displays do not qualify as real 3D display, all real 3D displays are also stereoscopic displays because they meet the lower criteria also.
Most 3D displays use this stereoscopic method to convey images. Stereoscopy was first invented by Sir Charles Wheatstone in 1838 and improved by Sir David Brewster, who made the first portable 3D viewing device.
Wheatstone originally used his stereoscope (a rather bulky device) with drawings because photography was not yet available, yet his original paper seems to foresee the development of a realistic imaging method:
For the purposes of illustration I have employed only outline figures, for had either shading or colouring been introduced it might be supposed that the effect was wholly or in part due to these circumstances, whereas by leaving them out of consideration no room is left to doubt that the entire effect of relief is owing to the simultaneous perception of the two monocular projections, one on each retina. But if it be required to obtain the most faithful resemblances of real objects, shadowing and colouring may properly be employed to heighten the effects. Careful attention would enable an artist to draw and paint the two component pictures, so as to present to the mind of the observer, in the resultant perception, perfect identity with the object represented. Flowers, crystals, busts, vases, instruments of various kinds, &c., might thus be represented so as not to be distinguished by sight from the real objects themselves.
Stereoscopy is used in photogrammetry and also for entertainment through the production of stereograms. Stereoscopy is useful in viewing images rendered from large multi-dimensional data sets such as those produced by experimental data. Modern industrial three-dimensional photography may use 3D scanners to detect and record three-dimensional information. The three-dimensional depth information can be reconstructed from two images using a computer by correlating the pixels in the left and right images. Solving the correspondence problem in the field of computer vision aims to create meaningful depth information from two images.
Anatomically, there are three levels of binocular vision required to view stereo images:
- Simultaneous perception
- Fusion (binocular 'single' vision)
- Stereopsis
These functions develop in early childhood. In some people who have strabismus, the development of stereopsis is disrupted; however, orthoptic treatment can be used to improve binocular vision. A person's stereoacuity determines the minimum image disparity they can perceive as depth. It is believed that approximately 12% of people are unable to properly see 3D images, due to a variety of medical conditions. According to another experiment, up to 30% of people have very weak stereoscopic vision, preventing them from perceiving depth based on stereo disparity. This nullifies or greatly decreases the immersive effect of stereo for them.
Stereoscopic viewing may be artificially created by the viewer's brain, as demonstrated with the Van Hare Effect, where the brain perceives stereo images even when the paired photographs are identical. This "false dimensionality" results from the developed stereoacuity in the brain, allowing the viewer to fill in depth information even when few if any 3D cues are actually available in the paired images.
Traditional stereoscopic photography consists of creating a 3D illusion starting from a pair of 2D images, a stereogram. The easiest way to enhance depth perception in the brain is to provide the eyes of the viewer with two different images, representing two perspectives of the same object, with a minor deviation equal or nearly equal to the perspectives that both eyes naturally receive in binocular vision.
To avoid eyestrain and distortion, each of the two 2D images should be presented to the viewer so that any object at infinite distance is perceived by the eye as being straight ahead, the viewer's eyes being neither crossed nor diverging. When the picture contains no object at infinite distance, such as a horizon or a cloud, the pictures should be spaced correspondingly closer together.
The advantages of side-by-side viewers are the lack of diminution of brightness, allowing the presentation of images at very high resolution and in full-spectrum color, simplicity of creation, and the fact that little or no additional image processing is required. Under some circumstances, such as when a pair of images is presented for freeviewing, no device or additional optical equipment is needed.
The principal disadvantage of side-by-side viewers is that large image displays are not practical, and resolution is limited by the lesser of the display medium or the human eye. This is because as the dimensions of an image are increased, either the viewing apparatus or the viewer must move proportionately further away from it in order to view it comfortably. Moving closer to an image in order to see more detail would only be possible with viewing equipment that adjusted to the difference.
Freeviewing is viewing a side-by-side image pair without using a viewing device.
- The parallel viewing method uses an image pair with the left-eye image on the left and the right-eye image on the right. The fused three-dimensional image appears larger and more distant than the two actual images, making it possible to convincingly simulate a life-size scene. The viewer attempts to look through the images with the eyes substantially parallel, as if looking at the actual scene. This can be difficult with normal vision because eye focus and binocular convergence are habitually coordinated. One approach to decoupling the two functions is to view the image pair extremely close up with completely relaxed eyes, making no attempt to focus clearly but simply achieving comfortable stereoscopic fusion of the two blurry images by the "look-through" approach, and only then exerting the effort to focus them more clearly, increasing the viewing distance as necessary. Regardless of the approach used or the image medium, for comfortable viewing and stereoscopic accuracy the size and spacing of the images should be such that the corresponding points of very distant objects in the scene are separated by the same distance as the viewer's eyes, but not more; the average interocular distance is about 63 mm. Viewing much more widely separated images is possible, but because the eyes never diverge in normal use it usually requires some previous training and tends to cause eye strain.
- The cross-eyed viewing method swaps the left and right eye images so that they will be correctly seen cross-eyed, the left eye viewing the image on the right and vice versa. The fused three-dimensional image appears to be smaller and closer than the actual images, so that large objects and scenes appear miniaturized. This method is usually easier for freeviewing novices. As an aid to fusion, a fingertip can be placed just below the division between the two images, then slowly brought straight toward the viewer's eyes, keeping the eyes directed at the fingertip; at a certain distance, a fused three-dimensional image should seem to be hovering just above the finger. Alternatively, a piece of paper with a small opening cut into it can be used in a similar manner; when correctly positioned between the image pair and the viewer's eyes, it will seem to frame a small three-dimensional image.
Prismatic, self-masking glasses are now being used by some cross-eyed-view advocates. These reduce the degree of convergence required and allow large images to be displayed. However, any viewing aid that uses prisms, mirrors or lenses to assist fusion or focus is simply a type of stereoscope, excluded by the customary definition of freeviewing.
Stereoscopically fusing two separate images without the aid of mirrors or prisms while simultaneously keeping them in sharp focus without the aid of suitable viewing lenses inevitably requires an unnatural combination of eye vergence and accommodation. Simple freeviewing therefore cannot accurately reproduce the physiological depth cues of the real-world viewing experience. Different individuals may experience differing degrees of ease and comfort in achieving fusion and good focus, as well as differing tendencies to eye fatigue or strain.
An autostereogram is a single-image stereogram (SIS), designed to create the visual illusion of a three-dimensional (3D) scene within the human brain from an external two-dimensional image. In order to perceive 3D shapes in these autostereograms, one must overcome the normally automatic coordination between focusing and vergence.
Stereoscope and stereographic cards
The stereoscope is essentially an instrument in which two photographs of the same object, taken from slightly different angles, are simultaneously presented, one to each eye. A simple stereoscope is limited in the size of the image that may be used. A more complex stereoscope uses a pair of horizontal periscope-like devices, allowing the use of larger images that can present more detailed information in a wider field of view. One can buy historical stereoscopes such as Holmes stereoscopes as antiques.
Some stereoscopes are designed for viewing transparent photographs on film or glass, known as transparencies or diapositives and commonly called slides. Some of the earliest stereoscope views, issued in the 1850s, were on glass. In the early 20th century, 45x107 mm and 6x13 cm glass slides were common formats for amateur stereo photography, especially in Europe. In later years, several film-based formats were in use. The best-known formats for commercially issued stereo views on film are Tru-Vue, introduced in 1931, and View-Master, introduced in 1939 and still in production. For amateur stereo slides, the Stereo Realist format, introduced in 1947, is by far the most common.
Head-mounted displays

The user typically wears a helmet or glasses with two small LCD or OLED displays with magnifying lenses, one for each eye. The technology can be used to show stereo films, images or games, but it can also be used to create a virtual display. Head-mounted displays may also be coupled with head-tracking devices, allowing the user to "look around" the virtual world by moving their head, eliminating the need for a separate controller. Performing this update quickly enough to avoid inducing nausea in the user requires a great amount of computer image processing. If six-axis position sensing (direction and position) is used, then the wearer may move about within the limitations of the equipment used. Owing to rapid advancements in computer graphics and the continuing miniaturization of video and other equipment, these devices are beginning to become available at more reasonable cost.
Head-mounted or wearable glasses may be used to view a see-through image imposed upon the real world view, creating what is called augmented reality. This is done by reflecting the video images through partially reflective mirrors. The real world view is seen through the mirrors' reflective surface. Experimental systems have been used for gaming, where virtual opponents may peek from real windows as a player moves about. This type of system is expected to have wide application in the maintenance of complex systems, as it can give a technician what is effectively "x-ray vision" by combining computer graphics rendering of hidden elements with the technician's natural vision. Additionally, technical data and schematic diagrams may be delivered to this same equipment, eliminating the need to obtain and carry bulky paper documents.
Virtual retinal displays
A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), not to be confused with a "Retina Display", is a display technology that draws a raster image (like a television picture) directly onto the retina of the eye. The user sees what appears to be a conventional display floating in space in front of them. For true stereoscopy, each eye must be provided with its own discrete display. To produce a virtual display that occupies a usefully large visual angle but does not involve the use of relatively large lenses or mirrors, the light source must be very close to the eye. A contact lens incorporating one or more semiconductor light sources is the form most commonly proposed. As of 2013, the inclusion of suitable light-beam-scanning means in a contact lens is still very problematic, as is the alternative of embedding a reasonably transparent array of hundreds of thousands (or millions, for HD resolution) of accurately aligned sources of collimated light.
There are two categories of 3D viewer technology, active and passive. Active viewers have electronics which interact with a display. Passive viewers filter constant streams of binocular input to the appropriate eye.
Shutter systems

A shutter system works by presenting the image intended for the left eye while blocking the right eye's view, then presenting the right-eye image while blocking the left eye, and repeating this so rapidly that the interruptions do not interfere with the perceived fusion of the two images into a single 3D image. It generally uses liquid crystal shutter glasses. Each eye's glass contains a liquid crystal layer which has the property of becoming dark when voltage is applied, being otherwise transparent. The glasses are controlled by a timing signal that allows the glasses to alternately darken over one eye and then the other, in synchronization with the refresh rate of the screen. The main drawback of active shutters is that most 3D videos and movies were shot with simultaneous left and right views, so the alternation introduces a "time parallax" for anything moving sideways: for instance, someone walking at 3.4 mph may be seen 20% too close or 25% too remote in the common case of a 2x60 Hz projection.
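Figures of this order can be reproduced with a back-of-the-envelope calculation (our own arithmetic, assuming the two eyes' views are offset in time by one field period at 120 Hz and an average interocular distance of about 63 mm):

```python
# Lateral image offset introduced by alternating eyes at 2 x 60 Hz.
speed = 3.4 * 0.44704      # 3.4 mph in m/s (about 1.52 m/s)
delay = 1.0 / 120.0        # one field period at 120 Hz alternation
offset = speed * delay     # spurious extra horizontal disparity, in m

interocular = 0.063        # average eye separation, in m
print("offset: %.1f mm" % (offset * 1e3))                       # ~12.7 mm
print("relative error: %.0f%%" % (100 * offset / interocular))  # ~20%
```

The spurious disparity is about a fifth of the interocular distance, which is roughly where depth errors of the quoted 20-25% magnitude come from; the sign of the error flips with the direction of motion.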
Polarization systems

To present stereoscopic pictures, two images are projected superimposed onto the same screen through polarizing filters, or presented on a display with polarized filters. For projection, a silver screen is used so that polarization is preserved. On most passive displays, every other row of pixels is polarized for one eye or the other, a method known as interlacing. The viewer wears low-cost eyeglasses containing a pair of opposite polarizing filters. As each filter passes only light which is similarly polarized and blocks the oppositely polarized light, each eye sees only one of the images, and the effect is achieved.
Interference filter systems
This technique uses specific wavelengths of red, green, and blue for the right eye, and different wavelengths of red, green, and blue for the left eye. Eyeglasses which filter out the very specific wavelengths allow the wearer to see a full-color 3D image. It is also known as spectral comb filtering, wavelength multiplex visualization, or super-anaglyph. Dolby 3D uses this principle. The Omega 3D/Panavision 3D system also used an improved version of this technology. In June 2012, the Omega 3D/Panavision 3D system was discontinued by DPVO Theatrical, who marketed it on behalf of Panavision, citing "challenging global economic and 3D market conditions".
Color anaglyph systems
Anaglyph 3D is the name given to the stereoscopic 3D effect achieved by means of encoding each eye's image using filters of different (usually chromatically opposite) colors, typically red and cyan. Red-cyan filters can be used because our vision processing systems use red and cyan comparisons, as well as blue and yellow, to determine the color and contours of objects. Anaglyph 3D images contain two differently filtered colored images, one for each eye. When viewed through the "color-coded" "anaglyph glasses", each of the two images reaches one eye, revealing an integrated stereoscopic image. The visual cortex of the brain fuses this into perception of a three dimensional scene or composition.
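As a concrete illustration of the encoding, a red-cyan anaglyph can be assembled from a stereo pair by taking the red channel from the left image and the green and blue channels from the right. The sketch below uses numpy and Pillow; the file names are placeholders, and the two views are assumed to have identical dimensions:

```python
import numpy as np
from PIL import Image  # Pillow

left = np.asarray(Image.open("left.jpg").convert("RGB"), dtype=np.uint8)
right = np.asarray(Image.open("right.jpg").convert("RGB"), dtype=np.uint8)

anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]      # red channel: left eye's view
anaglyph[..., 1:] = right[..., 1:]   # green and blue: right eye's view

Image.fromarray(anaglyph).save("anaglyph.jpg")
```

Viewed through red-cyan glasses, the red filter passes only the left view and the cyan filter only the right, so each eye receives its intended image.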
The ChromaDepth procedure of American Paper Optics is based on the fact that with a prism, colors are separated by varying degrees. The ChromaDepth eyeglasses contain special view foils, which consist of microscopically small prisms. These cause the image to be shifted by an amount that depends on its color. If one uses a prism foil over one eye but not the other, then the two perceived pictures are, depending on color, more or less widely separated. The brain produces the spatial impression from this difference. The main advantage of this technology is that ChromaDepth pictures can also be viewed without eyeglasses, as ordinary two-dimensional images, without problems (unlike two-color anaglyphs). However, the colors are only limitedly selectable, since they carry the depth information of the picture: if one changes the color of an object, its observed distance will also be changed.
The Pulfrich effect is based on the phenomenon of the human eye processing images more slowly when there is less light, as when looking through a dark lens. Because the Pulfrich effect depends on motion in a particular direction to instigate the illusion of depth, it is not useful as a general stereoscopic technique. For example, it cannot be used to show a stationary object apparently extending into or out of the screen; similarly, objects moving vertically will not be seen as moving in depth. Incidental movement of objects will create spurious artifacts, and these incidental effects will be seen as artificial depth not related to actual depth in the scene.
Over/under format

Stereoscopic viewing is achieved by placing an image pair one above the other. Special viewers are made for the over/under format that tilt the right eye's view slightly up and the left eye's view slightly down. The most common one, with mirrors, is the View Magic. Another, with prismatic glasses, is the KMQ viewer. A recent usage of this technique is the openKMQ project.
Other display methods without viewers
Autostereoscopic display technologies use optical components in the display, rather than worn by the user, to enable each eye to see a different image. Because headgear is not required, it is also called "glasses-free 3D". The optics split the images directionally into the viewer's eyes, so the display viewing geometry requires limited head positions that will achieve the stereoscopic effect. Automultiscopic displays provide multiple views of the same scene, rather than just two. Each view is visible from a different range of positions in front of the display. This allows the viewer to move left-right in front of the display and see the correct view from any position. The technology includes two broad classes of displays: those that use head-tracking to ensure that each of the viewer's two eyes sees a different image on the screen, and those that display multiple views so that the display does not need to know where the viewers' eyes are directed. Examples of autostereoscopic displays technology include lenticular lens, parallax barrier, volumetric display, holography and light field displays.
Laser holography, in its original "pure" form of the photographic transmission hologram, is the only technology yet created which can reproduce an object or scene with such complete realism that the reproduction is visually indistinguishable from the original, given the original lighting conditions. It creates a light field identical to that which emanated from the original scene, with parallax about all axes and a very wide viewing angle. The eye differentially focuses objects at different distances and subject detail is preserved down to the microscopic level. The effect is exactly like looking through a window. Unfortunately, this "pure" form requires the subject to be laser-lit and completely motionless—to within a minor fraction of the wavelength of light—during the photographic exposure, and laser light must be used to properly view the results. Most people have never seen a laser-lit transmission hologram. The types of holograms commonly encountered have seriously compromised image quality so that ordinary white light can be used for viewing, and non-holographic intermediate imaging processes are almost always resorted to, as an alternative to using powerful and hazardous pulsed lasers, when living subjects are photographed.
Although the original photographic processes have proven impractical for general use, the combination of computer-generated holograms (CGH) and optoelectronic holographic displays, both under development for many years, has the potential to transform the half-century-old pipe dream of holographic 3D television into a reality; so far, however, the large amount of calculation required to generate just one detailed hologram, and the huge bandwidth required to transmit a stream of them, have confined this technology to the research laboratory.
In 2013, a Silicon Valley company, LEIA Inc, started manufacturing holographic displays well suited for mobile devices (watches, smartphones or tablets) using a multi-directional backlight and allowing a wide full-parallax angle view to see 3D content without the need of glasses.
Volumetric displays use some physical mechanism to display points of light within a volume. Such displays use voxels instead of pixels. Volumetric displays include multiplanar displays, which have multiple display planes stacked up, and rotating panel displays, where a rotating panel sweeps out a volume.
Other technologies have been developed to project light dots in the air above a device. An infrared laser is focused on the destination in space, generating a small bubble of plasma which emits visible light.
Integral imaging is a technique for producing 3D displays which are both autostereoscopic and multiscopic, meaning that the 3D image is viewed without the use of special glasses and different aspects are seen when it is viewed from positions that differ either horizontally or vertically. This is achieved by using an array of microlenses (akin to a lenticular lens, but an X–Y or "fly's eye" array in which each lenslet typically forms its own image of the scene without assistance from a larger objective lens) or pinholes to capture and display the scene as a 4D light field, producing stereoscopic images that exhibit realistic alterations of parallax and perspective when the viewer moves left, right, up, down, closer, or farther away.
Integral imaging may not technically be a type of autostereoscopy, as autostereoscopy still refers to the generation of two images.
Wiggle stereoscopy is an image display technique achieved by quickly alternating the display of the left and right sides of a stereogram. Found in animated GIF format on the web, online examples are visible in the New York Public Library stereogram collection. The technique is also known as "Piku-Piku".
Stereo photography techniques
For general-purpose stereo photography, where the goal is to duplicate natural human vision and give a visual impression as close as possible to actually being there, the correct baseline (distance between where the right and left images are taken) would be the same as the distance between the eyes. When images taken with such a baseline are viewed using a viewing method that duplicates the conditions under which the picture is taken, then the result would be an image much the same as that which would be seen at the site the photo was taken. This could be described as "ortho stereo."
However, there are situations in which it might be desirable to use a longer or shorter baseline. The factors to consider include the viewing method to be used and the goal in taking the picture. The concept of baseline also applies to other branches of stereography, such as stereo drawings and computer generated stereo images, but it involves the point of view chosen rather than actual physical separation of cameras or lenses.
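One common way to quantify the effect of baseline choice is the pinhole-camera disparity relation d = f·B/Z (a textbook approximation, not a formula taken from this article), where f is the focal length in pixels, B the baseline, and Z the depth:

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Horizontal disparity, in pixels, of a point at the given depth,
    for two parallel pinhole cameras separated by the baseline."""
    return focal_px * baseline_m / depth_m

# Eye-like 65 mm baseline ("ortho stereo") versus a longer 200 mm baseline,
# for points at 1 m, 5 m, and 20 m, with a nominal 1000 px focal length.
for b in (0.065, 0.20):
    print(b, [round(disparity_px(b, 1000, z), 1) for z in (1, 5, 20)])

# Nearer points get larger disparity, and lengthening the baseline scales
# every disparity proportionally, which is one reason long-baseline
# (hyperstereo) images tend to look like miniatures.
```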
The concept of the stereo window is always important, since the window is the stereoscopic image of the external boundaries of the left and right views constituting the stereoscopic image. If any object that is cut off by the lateral sides of the window is placed in front of it, an unnatural and undesirable effect results; this is called a "window violation". This can best be understood by returning to the analogy of an actual physical window: an object cannot logically be both in front of the window and hidden by its frame. There is therefore a contradiction between two different depth cues: some elements of the image are hidden by the window, so the window appears closer than those elements, yet the same elements appear closer than the window. The stereo window must therefore always be adjusted to avoid window violations.
Some objects can be seen in front of the window, as long as they do not reach the lateral sides of the window. But such objects cannot be shown too close, since there is always a limit on the parallax range for comfortable viewing.
If a scene is viewed through a window, the entire scene would normally be behind the window; if the scene is distant, it would be some distance behind the window, and if it is nearby, it would appear to be just beyond the window. An object smaller than the window itself could even go through the window and appear partially or completely in front of it. The same applies to a part of a larger object that is smaller than the window. The goal of setting the stereo window is to duplicate this effect.
Therefore, the location of the window versus the whole of the image must be adjusted so that most of the image is seen beyond the window. In the case of viewing on a 3D TV set, it is easier to place the window in front of the image and to leave the window in the plane of the screen.
By contrast, in the case of projection on a much larger screen, it is much better to set the window in front of the screen (this is called a "floating window"), for instance so that it is viewed about two meters away by the viewers sitting in the first row. Those viewers will normally see the background of the image at infinity. Of course, viewers seated further back will see the window as more remote, but if the image is made in normal conditions, so that the first-row viewers see the background at infinity, the other viewers, seated behind, will also see the background at infinity, since the parallax of the background is equal to the average human interocular distance.
The entire scene, including the window, can be moved backwards or forwards in depth by horizontally sliding the left- and right-eye views relative to each other. Moving either or both images away from the center will move the whole scene away from the viewer, whereas moving either or both images toward the center will move the whole scene toward the viewer. This is possible, for instance, if two projectors are used for the projection.
In stereo photography, window adjustment is accomplished by shifting/cropping the images; in other forms of stereoscopy, such as drawings and computer-generated images, the window is built into the design of the images as they are generated.
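As a concrete illustration of the shifting/cropping operation, here is a minimal Python sketch assuming the Pillow imaging library. The function name and the sign convention for shift are invented for this example; whether a given shift moves the scene toward or away from the viewer depends on the viewing method, so the sign must be calibrated for the display in use.

from PIL import Image  # assumes the Pillow library is installed

def adjust_window(left_path, right_path, shift):
    """Slide the two views of a stereo pair horizontally relative to each other."""
    left = Image.open(left_path)
    right = Image.open(right_path)
    w, h = left.size
    # Dropping `shift` columns from the left edge of the left view and from
    # the right edge of the right view changes every object's parallax by
    # the same amount, moving the whole scene (and the window) in depth.
    left_adj = left.crop((shift, 0, w, h))
    right_adj = right.crop((0, 0, w - shift, h))
    return left_adj, right_adj

# Hypothetical usage: shift the pair by 12 pixels and save the results.
left_adj, right_adj = adjust_window("left.jpg", "right.jpg", 12)
left_adj.save("left_adjusted.jpg")
right_adj.save("right_adjusted.jpg")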
The images can be cropped creatively to create a stereo window that is not necessarily rectangular or lying on a flat plane perpendicular to the viewer's line of sight. The edges of the stereo frame can be straight or curved and, when viewed in 3D, can flow toward or away from the viewer and through the scene. These designed stereo frames can help emphasize certain elements in the stereo image or can be an artistic component of the stereo image.
While stereoscopic images have typically been used for amusement, including stereographic cards, 3D films, 3D television, stereoscopic video games, prints using anaglyph images, and posters and books of autostereograms, there are also other uses of this technology.
Salvador Dalí created some impressive stereograms in his exploration of a variety of optical illusions. Other stereo artists include Zoe Beloff, Christopher Schneberger, Rebecca Hackemann, William Kentridge, and Jim Naughten. Red-and-cyan anaglyph stereoscopic images have also been painted by hand.
In the 19th century, it was realized that stereoscopic images provided an opportunity for people to experience places and things far away, and many tour sets were produced, and books were published allowing people to learn about geography, science, history, and other subjects. Such uses continued until the mid-20th century, with the Keystone View Company producing cards into the 1960s.
The two cameras that make up each Mars rover's Pancam are situated 1.5 m above the ground surface and are separated by 30 cm, with 1 degree of toe-in. This allows the image pairs to be made into scientifically useful stereoscopic images, which can be viewed as stereograms, anaglyphs, or processed into 3D computer images.
The ability to create realistic 3D images from a pair of cameras at roughly human height gives researchers increased insight into the nature of the landscapes being viewed. In environments without hazy atmospheres or familiar landmarks, humans rely on stereoscopic clues to judge distance, so single-camera viewpoints are more difficult to interpret. Multiple-camera stereoscopic systems like the Pancam address this problem in uncrewed space exploration.
Mathematical, scientific and engineering uses
Stereopair photographs provided a way to view aerial photographs in three dimensions; since about 2000, 3D aerial views have been based mainly on digital stereo imaging technologies. One issue with stereo images is the amount of disk space needed to save such files: a stereo image usually requires twice as much space as a normal image. Computer-vision researchers have therefore investigated techniques that exploit the visual redundancy of stereopairs in order to define compressed stereopair file formats. Cartographers today generate stereopairs using computer programs in order to visualise topography in three dimensions, and computerised stereo visualisation applies stereo-matching programs. In biology and chemistry, complex molecular structures are often rendered in stereopairs. The same technique can be applied to any mathematical (or scientific, or engineering) parameter that is a function of two variables, although in such cases it is more common for a three-dimensional effect to be created using a 'distorted' mesh or shading (as if from a distant light source).
Microcode is a layer of hardware-level instructions or data structures involved in the implementation of higher level machine code instructions in central processing units, and in the implementation of the internal logic of many channel controllers, disk controllers, network interface controllers, network processors, graphics processing units, and other hardware. It resides in special high-speed memory and translates machine instructions into sequences of detailed circuit-level operations. It helps separate the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also makes it feasible to build complex multi-step instructions while still reducing the complexity of the electronic circuitry compared to other methods. Writing microcode is often called microprogramming and the microcode in a particular processor implementation is sometimes called a microprogram.
Modern microcode is normally written by an engineer during the processor design phase and stored in a ROM (read-only memory) or PLA (programmable logic array) structure, or a combination of both. However, machines also exist that have some (or all) microcode stored in SRAM or flash memory. This is traditionally denoted a "writeable control store" in the context of computers. Complex digital processors may also employ more than one (possibly microcode based) control unit in order to delegate sub-tasks that must be performed (more or less) asynchronously in parallel. Microcode is generally not visible or changeable by a normal programmer, not even by an assembly programmer. Unlike machine code, which often retains some compatibility among different processors in a family, microcode only runs on the exact electronic circuitry for which it is designed, as it constitutes an inherent part of the particular processor design itself.
More extensive microcoding has also been used to allow small and simple microarchitectures to emulate more powerful architectures with wider word length, more execution units and so on; a relatively simple way to achieve software compatibility between different products in a processor family.
Some hardware vendors, especially IBM, use the term as a synonym for firmware, so that all code in a device, whether microcode or machine code, is termed microcode (such as in a hard drive for instance, which typically contains both).
When compared to normal application programs, the elements composing a microprogram exist on a lower conceptual level. To avoid confusion, each microprogram-related element is differentiated by the "micro" prefix: microinstruction, microassembler, microprogrammer, microarchitecture, etc.
The microcode usually does not reside in the main memory, but in a special high-speed memory called the control store, which can be either read-only or read-write memory. In the latter case, the CPU initialization process loads microcode into the control store from another storage medium, with the possibility of altering the microcode to correct bugs in the instruction set, or to implement new machine instructions.
Microprograms consist of series of microinstructions, which control the CPU at a very fundamental level of hardware circuitry. For example, a single typical microinstruction might specify the following operations:
- Connect Register 1 to the "A" side of the ALU
- Connect Register 7 to the "B" side of the ALU
- Set the ALU to perform two's-complement addition
- Set the ALU's carry input to zero
- Store the result value in Register 8
- Update the "condition codes" with the ALU status flags ("Negative", "Zero", "Overflow", and "Carry")
- Microjump to MicroPC nnn for the next microinstruction
To control all of the processor's features simultaneously in one cycle, the microinstruction is often wider than 50 bits, e.g., 128 bits on a 360/85 with an emulator feature. Microprograms are carefully designed and optimized for the fastest possible execution, as a slow microprogram would result in a slow machine instruction and degraded performance for related application programs that use such instructions.
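As an illustration of how such a wide word can be organized, the sketch below packs the fields from the list above into a single integer in Python. The field names and widths are invented for this illustration; real control words, such as the 108-plus-bit words of the 360/85, carry many more fields.

# Hypothetical microinstruction layout; widths are illustrative only.
FIELDS = [              # (field name, width in bits)
    ("a_src",    4),    # register driven onto the ALU "A" side
    ("b_src",    4),    # register driven onto the ALU "B" side
    ("alu_op",   3),    # 0 = two's-complement addition
    ("carry_in", 1),    # ALU carry input
    ("dest",     4),    # register receiving the ALU result
    ("set_cc",   1),    # update condition codes from the ALU flags
    ("next_upc", 8),    # microjump target (the next micro-PC "nnn")
]

def pack(**values):
    """Assemble one microinstruction word from named fields."""
    word = 0
    for name, width in FIELDS:
        word = (word << width) | (values.get(name, 0) & ((1 << width) - 1))
    return word

# The example microinstruction described in the list above:
uinstr = pack(a_src=1, b_src=7, alu_op=0, carry_in=0,
              dest=8, set_cc=1, next_upc=0x2A)
print(f"{uinstr:025b}")   # the 25-bit control word, printed as bits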
The reason for microprogramming
Microcode was originally developed as a simpler method of developing the control logic for a computer. Initially, CPU instruction sets were "hardwired". Each step needed to fetch, decode, and execute the machine instructions (including any operand address calculations, reads, and writes) was controlled directly by combinational logic and rather minimal sequential state machine circuitry. While very efficient, the need for powerful instruction sets with multi-step addressing and complex operations (see below) made such hard-wired processors difficult to design and debug; highly encoded and varied-length instructions can contribute to this as well, especially when very irregular encodings are used.
Microcode simplified the job by allowing much of the processor's behaviour and programming model to be defined via microprogram routines rather than by dedicated circuitry. Even late in the design process, microcode could easily be changed, whereas hard-wired CPU designs were very cumbersome to change. Thus, this greatly facilitated CPU design.
From the 1940s to the late 1970s, much programming was done in assembly language; higher-level instructions meant greater programmer productivity, so an important advantage of microcode was the relative ease with which powerful machine instructions could be defined. During the 1970s, CPU speeds grew more quickly than memory speeds and numerous techniques such as memory block transfer, memory pre-fetch and multi-level caches were used to alleviate this. High-level machine instructions, made possible by microcode, helped further, as fewer, more complex machine instructions require less memory bandwidth. For example, an operation on a character string could be done as a single machine instruction, thus avoiding multiple instruction fetches.
Architectures with instruction sets implemented by complex microprograms included the IBM System/360 and Digital Equipment Corporation VAX. The approach of increasingly complex microcode-implemented instruction sets was later called CISC. An alternate approach, used in many microprocessors, is to use PLAs or ROMs (instead of combinational logic) mainly for instruction decoding, and let a simple state machine (without much, or any, microcode) do most of the sequencing.
Microprogramming is still used in modern CPU designs. In some cases, after the microcode is debugged in simulation, logic functions are substituted for the control store. Logic functions are often faster and less expensive than the equivalent microprogram memory.
A processor's microprograms operate on a more primitive, totally different and much more hardware-oriented architecture than the assembly instructions visible to normal programmers. In coordination with the hardware, the microcode implements the programmer-visible architecture. The underlying hardware need not have a fixed relationship to the visible architecture. This makes it easier to implement a given instruction set architecture on a wide variety of underlying hardware micro-architectures.
The IBM System/360 had a 32-bit architecture with 16 general-purpose registers, but most of the System/360 implementations actually use hardware that implemented a much simpler underlying microarchitecture; for example, the System/360 Model 30 had 8-bit data paths to the arithmetic logic unit (ALU) and main memory and implemented the general-purpose registers in a special unit of higher-speed core memory, and the System/360 Model 40 had 8-bit data paths to the ALU and 16-bit data paths to main memory and also implemented the general-purpose registers in a special unit of higher-speed core memory. The Model 50 and Model 65 had full 32-bit data paths; the Model 50 implemented the general-purpose registers in a special unit of higher-speed core memory and the Model 65 implemented the general-purpose registers in faster transistor circuits. In this way, microprogramming enabled IBM to design many System/360 models with substantially different hardware and spanning a wide range of cost and performance, while making them all architecturally compatible. This dramatically reduced the number of unique system software programs that had to be written for each model.
A similar approach was used by Digital Equipment Corporation in their VAX family of computers. Initially, a 32-bit TTL processor in conjunction with supporting microcode implemented the programmer-visible architecture. Later VAX versions used different microarchitectures, yet the programmer-visible architecture did not change.
Microprogramming also reduced the cost of field changes to correct defects (bugs) in the processor; a bug could often be fixed by replacing a portion of the microprogram rather than by changes being made to hardware logic and wiring.
In 1947, the design of the MIT Whirlwind introduced the concept of a control store as a way to simplify computer design and move beyond ad hoc methods. The control store was a diode matrix: a two-dimensional lattice, where one dimension accepted "control time pulses" from the CPU's internal clock, and the other connected to control signals on gates and other circuits. A "pulse distributor" would take the pulses generated by the CPU clock and break them up into eight separate time pulses, each of which would activate a different row of the lattice. When the row was activated, it would activate the control signals connected to it.
Described another way, the signals transmitted by the control store are being played much like a player piano roll. That is, they are controlled by a sequence of very wide words constructed of bits, and they are "played" sequentially. In a control store, however, the "song" is short and repeated continuously.
In 1951 Maurice Wilkes enhanced this concept by adding conditional execution, a concept akin to a conditional in computer software. His initial implementation consisted of a pair of matrices: the first generated signals in the manner of the Whirlwind control store, while the second matrix selected which row of signals (the microprogram instruction word, as it were) to invoke on the next cycle. Conditionals were implemented by providing a way for a single line in the control store to choose from alternatives in the second matrix. This made the control signals conditional on the detected internal signal. Wilkes coined the term microprogramming to describe this feature and distinguish it from a simple control store.
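A toy model of Wilkes's scheme, with invented row contents, might look like the following Python sketch: the first matrix holds the control signals asserted by each row, and the second matrix holds, for each row, the pair of possible successors from which a condition signal selects.

# First matrix: the control signals asserted by each row (invented names).
matrix_a = [
    ["gate_PC_to_MAR"],            # row 0
    ["read_memory", "inc_PC"],     # row 1
    ["gate_MDR_to_IR"],            # row 2
]
# Second matrix: (next row if condition is 0, next row if condition is 1).
# Equal entries make the step unconditional.
matrix_b = [(1, 1), (2, 2), (0, 0)]

def run(cycles, condition=lambda: 0):
    row = 0
    for _ in range(cycles):
        print(f"row {row}: assert {matrix_a[row]}")
        row = matrix_b[row][condition()]   # conditional successor selection

run(6)   # steps through rows 0, 1, 2, 0, 1, 2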
Examples of microprogrammed systems
- In common with many other complex mechanical devices, Charles Babbage's analytical engine used banks of cams to control each operation. That is, it had a read-only control store. As such it deserves to be recognised as the first microprogrammed computer to be designed, even if it has not yet been realised in hardware.
- The EMIDEC 1100 reputedly used a hard-wired control store consisting of wires threaded through ferrite cores, known as 'the laces'.
- Most models of the IBM System/360 series were microprogrammed:
- The Model 25 was unique among System/360 models in using the top 16k bytes of core storage to hold the control storage for the microprogram. The 2025 used a 16-bit microarchitecture with seven control words (or microinstructions). At power up, or full system reset, the microcode was loaded from the card reader. The IBM 1410 emulation for this model was loaded this way.
- The Model 30, the slowest model in the line, used an 8-bit microarchitecture with only a few hardware registers; everything that the programmer saw was emulated by the microprogram. The microcode for this model was also held on special punched cards, which were stored inside the machine in dedicated readers, one per card, called "CROS" (Capacitor Read-Only Storage) units. A second CROS reader was installed for machines ordered with 1620 emulation.
- The Model 40 used 56-bit control words. The 2040 box implements both the System/360 main processor and the multiplex channel (the I/O processor). This model used "TROS" dedicated readers similar to "CROS" units, but with an inductive pickup (Transformer Read-only Store).
- The Model 50 had two internal datapaths which operated in parallel: a 32-bit datapath used for arithmetic operations, and an 8-bit data path used in some logical operations. The control store used 90-bit microinstructions.
- The Model 85 had separate instruction fetch (I-unit) and execution (E-unit) to provide high performance. The I-unit is hardware controlled. The E-unit is microprogrammed; the control words are 108 bits wide on a basic 360/85 and wider if an emulator feature is installed.
- The NCR 315 was microprogrammed with hand wired ferrite cores (a ROM) pulsed by a sequencer with conditional execution. Wires routed through the cores were enabled for various data and logic elements in the processor.
- The Digital Equipment Corporation PDP-11 processors, with the exception of the PDP-11/20, were microprogrammed.
- Many systems from Burroughs were microprogrammed:
- The B700 "microprocessor" executed application-level opcodes using sequences of 16-bit microinstructions stored in main memory; each of these was either a register-load operation or mapped to a single 56-bit "nanocode" instruction stored in read-only memory. This allowed comparatively simple hardware to act either as a mainframe peripheral controller or to be packaged as a standalone computer.
- The B1700 was implemented with radically different hardware including bit-addressable main memory but had a similar multi-layer organisation. The operating system would preload the interpreter for whatever language was required. These interpreters presented different virtual machines for COBOL, Fortran, etc.
- Microdata produced computers in which the microcode was accessible to the user; this allowed the creation of custom assembler level instructions. Microdata's Reality operating system design made extensive use of this capability.
- The Nintendo 64's Reality Co-Processor, which serves as the console's graphics processing unit and audio processor, utilized microcode, making it possible to implement new effects or tweak the processor to achieve the desired output. Some well-known examples of custom microcode include Factor 5's Nintendo 64 ports of Indiana Jones and the Infernal Machine, Star Wars: Rogue Squadron and Star Wars: Battle for Naboo.
- The VU0 and VU1 vector units in the Sony PlayStation 2 are microprogrammable; in fact, VU1 was only accessible via microcode for the first several generations of the SDK.
Each microinstruction in a microprogram provides the bits that control the functional elements that internally compose a CPU. The advantage over a hard-wired CPU is that internal CPU control becomes a specialized form of a computer program. Microcode thus transforms a complex electronic design challenge (the control of a CPU) into a less complex programming challenge.
To take advantage of this, computers were divided into several parts:
A microsequencer picked the next word of the control store. A sequencer is mostly a counter, but usually also has some way to jump to a different part of the control store depending on some data, usually data from the instruction register, and always on some part of the control store. The simplest sequencer is just a register loaded from a few bits of the control store.
A register set is a fast memory containing the data of the central processing unit. It may include the program counter, stack pointer, and other numbers that are not easily accessible to the application programmer. Often the register set is a triple-ported register file; that is, two registers can be read, and a third written at the same time.
An arithmetic and logic unit performs calculations, usually addition, logical negation, a right shift, and logical AND. It often performs other functions, as well.
Together, these elements form an "execution unit". Most modern CPUs have several execution units. Even simple computers usually have one unit to read and write memory, and another to execute user code.
These elements could often be brought together as a single chip. This chip came in a fixed width that would form a "slice" through the execution unit. These were known as 'bit slice' chips. The AMD Am2900 family is one of the best known examples of bit slice elements.
The parts of the execution units and the execution units themselves are interconnected by a bundle of wires called a bus.
Programmers develop microprograms, using basic software tools. A microassembler allows a programmer to define the table of bits symbolically. A simulator program executes the bits in the same way as the electronics (hopefully), and allows much more freedom to debug the microprogram.
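The following is a minimal sketch of such a simulator for a hypothetical micromachine, written in Python. The control-word format and register names are invented; the point is only to show the shape such a tool takes: fetch a control word, assert its signals, and let a trivial sequencer pick the next word.

# Registers of an imaginary micromachine.
regs = {"R1": 5, "R7": 3, "R8": 0, "MAR": 0, "MDR": 40}

# Control store: each word is a dict of control signals (invented format).
control_store = [
    {"copy": ("MDR", "MAR")},        # gate MDR onto MAR
    {"alu": ("R1", "R7", "R8")},     # R8 = R1 + R7
    {"jump": 0},                     # sequencer: back to word 0
]

upc = 0                              # the micro program counter
for _ in range(6):                   # run a handful of microcycles
    word = control_store[upc]
    if "copy" in word:
        src, dst = word["copy"]
        regs[dst] = regs[src]
    if "alu" in word:
        a, b, dst = word["alu"]
        regs[dst] = regs[a] + regs[b]
    upc = word.get("jump", upc + 1)  # jump if asked, else fall through

print(regs)                          # R8 == 8, MAR == 40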
After the microprogram is finalized, and extensively tested, it is sometimes used as the input to a computer program that constructs logic to produce the same data. This program is similar to those used to optimize a programmable logic array. No known computer program can produce optimal logic, but even pretty good logic can vastly reduce the number of transistors from the number required for a ROM control store. This reduces the cost of producing, and the electricity consumed by, a CPU.
Microcode can be characterized as horizontal or vertical. This refers primarily to whether each microinstruction directly controls CPU elements (horizontal microcode), or requires subsequent decoding by combinatorial logic before doing so (vertical microcode). Consequently each horizontal microinstruction is wider (contains more bits) and occupies more storage space than a vertical microinstruction.
Horizontal microcode is typically contained in a fairly wide control store; it is not uncommon for each word to be 108 bits or more. On each tick of a sequencer clock a microcode word is read, decoded, and used to control the functional elements that make up the CPU.
In a typical implementation a horizontal microprogram word comprises fairly tightly defined groups of bits. For example, one simple arrangement might be:
| register source A | register source B | destination register | arithmetic and logic unit operation | type of jump | jump address |
For this type of micromachine to implement a JUMP instruction with the address following the opcode, the microcode might require two clock ticks. The engineer designing it would write microassembler source code looking something like this:
# Any line starting with a number-sign is a comment
# This is just a label, the ordinary way assemblers symbolically represent a
# memory address.
InstructionJUMP:
   # To prepare for the next instruction, the instruction-decode microcode has already
   # moved the program counter to the memory address register. This instruction fetches
   # the target address of the jump instruction from the memory word following the
   # jump opcode, by copying from the memory data register to the memory address register.
   # This gives the memory system two clock ticks to fetch the next
   # instruction to the memory data register for use by the instruction decode.
   # The sequencer instruction "next" means just add 1 to the control word address.
   MDR, NONE, MAR, COPY, NEXT, NONE
   # This places the address of the next instruction into the PC.
   # This gives the memory system a clock tick to finish the fetch started on the
   # previous microinstruction.
   # The sequencer instruction is to jump to the start of the instruction decode.
   MAR, 1, PC, ADD, JMP, InstructionDecode
   # The instruction decode is not shown, because it is usually a mess, very particular
   # to the exact processor being emulated. Even this example is simplified.
   # Many CPUs have several ways to calculate the address, rather than just fetching
   # it from the word following the op-code. Therefore, rather than just one
   # jump instruction, those CPUs have a family of related jump instructions.
For each tick it is common to find that only some portions of the CPU are used, with the remaining groups of bits in the microinstruction being no-ops. With careful design of hardware and microcode, this property can be exploited to parallelise operations that use different areas of the CPU; for example, in the case above, the ALU is not required during the first tick, so it could potentially be used to complete an earlier arithmetic instruction.
In vertical microcode, each microinstruction is encoded—that is, the bit fields may pass through intermediate combinatory logic that in turn generates the actual control signals for internal CPU elements (ALU, registers, etc.). In contrast, with horizontal microcode the bit fields themselves directly produce the control signals. Consequently vertical microcode requires smaller instruction lengths and less storage, but requires more time to decode, resulting in a slower CPU clock.
Some vertical microcode is just the assembly language of a simple conventional computer that is emulating a more complex computer. Some processors, such as DEC Alpha processors and the CMOS microprocessors on later IBM System/390 mainframes and z/Architecture mainframes, have PALcode (the term used on Alpha processors) or millicode (the term used on IBM mainframe microprocessors). This is a form of machine code, with access to special registers and other hardware resources not available to regular machine code, used to implement some instructions and other functions, such as page table walks on Alpha processors.
Another form of vertical microcode has two fields:
| field select | field value |
The "field select" selects which part of the CPU will be controlled by this word of the control store. The "field value" actually controls that part of the CPU. With this type of microcode, a designer explicitly chooses to make a slower CPU to save money by reducing the unused bits in the control store; however, the reduced complexity may increase the CPU's clock frequency, which lessens the effect of an increased number of cycles per instruction.
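A sketch of that arrangement, with invented field groups and encodings, is shown below. The point is that the compact field value must pass through an extra decode stage before it becomes individual control lines, which is exactly the step horizontal microcode avoids.

# Hypothetical decoders for each "field select" group.
DECODERS = {
    "alu_op": {0: "ADD", 1: "AND", 2: "NOT", 3: "SHR"},
    "a_src":  {n: f"drive_R{n}_to_A_bus" for n in range(16)},
    "dest":   {n: f"latch_result_into_R{n}" for n in range(16)},
}

def decode(field_select, field_value):
    """The combinatorial decode step that horizontal microcode does not need."""
    return DECODERS[field_select][field_value]

print(decode("alu_op", 0))   # ADD
print(decode("dest", 8))     # latch_result_into_R8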
As transistors became cheaper, horizontal microcode came to dominate the design of CPUs using microcode, with vertical microcode being used less often.
Writable control stores
A few computers were built using "writable microcode". In this design, rather than storing the microcode in ROM or hard-wired logic, the microcode was stored in a RAM called a Writable Control Store or WCS. Such a computer is sometimes called a Writable Instruction Set Computer or WISC.
Many experimental prototype computers used writable control stores, and there were also commercial machines that used writable microcode, such as early Xerox workstations, the DEC VAX 8800 ("Nautilus") family, the Symbolics L- and G-machines, a number of IBM System/360 and System/370 implementations, some DEC PDP-10 machines, and the Data General Eclipse MV/8000.
Many more machines offered user-programmable writable control stores as an option (including the HP 2100, DEC PDP-11/60 and Varian Data Machines V-70 series minicomputers). The IBM System/370 included a facility called Initial-Microprogram Load (IML or IMPL) that could be invoked from the console, as part of Power On Reset (POR) or from another processor in a tightly coupled multiprocessor complex.
WCS offered several advantages including the ease of patching the microprogram and, for certain hardware generations, faster access than ROMs could provide. User-programmable WCS allowed the user to optimize the machine for specific purposes.
Several Intel CPUs in the x86 architecture family have writable microcode. This has allowed bugs in the Intel Core 2 microcode and Intel Xeon microcode to be fixed in software, rather than requiring the entire chip to be replaced.
Microcode versus VLIW and RISC
The design trend toward heavily microcoded processors with complex instructions began in the early 1960s and continued until roughly the mid-1980s. At that point the RISC design philosophy started becoming more prominent.
A CPU that uses microcode generally takes several clock cycles to execute a single instruction, one clock cycle for each step in the microprogram for that instruction. Some CISC processors include instructions that can take a very long time to execute. Such variations interfere with both interrupt latency and, what is far more important in modern systems, pipelining.
When designing a new processor, a hardwired control RISC has the following advantages over microcoded CISC:
- Programming has largely moved away from assembly level, so it's no longer worthwhile to provide complex instructions for productivity reasons.
- Simpler instruction sets allow direct execution by hardware, avoiding the performance penalty of microcoded execution.
- Analysis shows complex instructions are rarely used, hence the machine resources devoted to them are largely wasted.
- The machine resources devoted to rarely used complex instructions are better used for expediting performance of simpler, commonly used instructions.
- Complex microcoded instructions may require many clock cycles that vary, and are difficult to pipeline for increased performance.
It should be mentioned that there are counterpoints as well:
- The complex instructions in heavily microcoded implementations may not take much extra machine resources, except for microcode space. For instance, the same ALU is often used to calculate an effective address as well as computing the result from the actual operands (e.g. the original Z80, 8086, and others).
- The simpler non-RISC instructions (i.e., those involving direct memory operands) are frequently used by modern compilers. Even immediate-to-stack (i.e., memory-result) arithmetic operations are commonly employed. Although such memory operations, often with varying length encodings, are more difficult to pipeline, it is still fully feasible to do so, as clearly exemplified by the i486, AMD K5, Cyrix 6x86, Motorola 68040, etc.
- Non-RISC instructions inherently perform more work per instruction (on average), and are also normally highly encoded, so they enable smaller overall size of the same program, and thus better use of limited cache memories.
- Modern CISC/RISC implementations, e.g. x86 designs, decode instructions into dynamically buffered micro-operations with instruction encodings similar to traditional fixed microcode. Ordinary static microcode is used as hardware assistance for complex multistep operations such as auto-repeating instructions and for transcendental functions in the floating point unit; it is also used for special purpose instructions (such as CPUID) and internal control and configuration purposes.
- The simpler instructions in CISC architectures are also directly executed in hardware in modern implementations.
Many RISC and VLIW processors are designed to execute every instruction (as long as it is in the cache) in a single cycle. This is very similar to the way CPUs with microcode execute one microinstruction per cycle. VLIW processors have instructions that behave similarly to very wide horizontal microcode, although typically without such fine-grained control over the hardware as provided by microcode. RISC instructions are sometimes similar to the narrow vertical microcode.
Microcoding remains popular in application-specific processors such as network processors.
Summary of the Townshend Acts
The acts bear the name of Charles Townshend, the British Chancellor of the Exchequer, who proposed them. The Townshend Acts were based on the premise that Parliament had the authority to govern the colonies as it saw fit, including laws that controlled taxes, courts, and government. Although colonists held to the slogan “No Taxation Without Representation,” Parliament gave itself the authority to govern the colonies by passing the Declaratory Act in 1766.
The general purpose of the acts was to establish a revenue flow from the colonies to Great Britain (Revenue Act) and to tighten Britain’s control over colonial governments (Commissioners of Customs Act and Vice-Admiralty Court Act). Neither tactic was very effective. Colonists responded to the acts by boycotting British goods, and the Massachusetts legislature drafted a Circular Letter asking other colonial legislatures to join a resistance movement.
Enforcement of the Revenue Act in Boston required the deployment of British troops, which led to the Boston Massacre in 1770. Ironically, Parliament rescinded most of the Revenue Act on March 5, 1770, the same day as the Boston Massacre. However, Britain did retain the importation duty imposed on tea as a symbol of Parliament’s right to tax Americans.
Townshend Acts — Quick Facts
Passage of the Acts
- The Townshend Acts were a series of acts passed by the British Parliament in 1767 and 1768.
- The Townshend Acts bear the name of Charles Townshend, the British Chancellor of the Exchequer, who proposed them.
Purpose of the Acts
The general purpose of the Townshend Acts was to establish a revenue flow from the colonies to Great Britain and to tighten Britain’s control over colonial governments.
Details About the Revenue Act of 1767
- The Revenue Act of 1767 (aka the Townshend Revenue Act) imposed duties on items such as paint, paper, glass, lead, and tea imported into the colonies.
- Besides raising money to help the struggling British economy and defraying the expenses of administering the colonies, the revenue derived from the duties was to be used to pay the salaries of royal officials, including governors, in the colonies, thus making those officials more answerable to the crown.
Details About the Commissioners of Customs Act
- The Commissioners of Customs Act created a new Board of Customs Commissioners to enforce compliance with the new tax policy and to curtail smuggling in the colonies.
- Newly created customs officials were awarded bonuses for the conviction of American smugglers.
- The new customs board was headquartered in Boston and became an immediate target for local radicals who were leading the movement for unified colonial resistance.
Details About the Vice-Admiralty Court Act
- The Vice-Admiralty Court Act of 1768 created Admiralty Courts authorized to try and convict accused smugglers without a jury.
Details About the New York Restraining Act
- The New York Restraining Act, enacted at Townshend’s urging, suspended the New York legislature to punish the colony for its failure to comply with the unpopular Quartering Act imposed in 1765.
Details About the Indemnity Act
- The Indemnity Act repealed the inland duties on tea in England and permitted it to be exported to the colonies free of all British taxes.
Colonial Reaction to the Townshend Acts
- Boston passed a Non-Importation Agreement on August 1, 1768, in protest of the Townshend Acts.
- Parliament dissolved the Massachusetts legislature in 1768 for public resistance to the Townshend Acts.
- Enforcement of the Revenue Act in Boston required the deployment of British troops, which led to the Boston Massacre in 1770.
- The Townshend Acts prompted Philadelphia lawyer John Dickinson to publish his famous Letters from a Farmer in Pennsylvania, which argued that Parliament had the right to control imperial commerce but did not have the right to tax the colonies.
Repeal of the Townshend Acts
Parliament rescinded most of the Revenue Act on March 5, 1770, the same day the Boston Massacre took place.
What is Dijkstra's Algorithm?
Shortest-path routing is a widely deployed non-adaptive routing algorithm that routes packets based on Dijkstra's classic shortest-path algorithm. In this approach, packets arriving at a node are forwarded along one of the shortest paths to their destination in the network.
The shortcoming of this approach is that it does not adapt to changes in network traffic. For instance, queuing causes congestion at intermediate nodes, delaying packets before they reach the destination. In such cases it would be beneficial to forward packets along routes that are not the shortest path but ultimately result in a faster transfer time.
The Shortest Path Routing (SPR) algorithm evaluates routes using a cost metric, represented as matrices, across the network.
Let V be the set of all nodes of the digraph, and let C[S, V] be the matrix of least costs (of the shortest paths) to be evaluated. Initially, before any routes are added, each node sets the cost of the shortest path to every other node to infinity, i.e. C[S, V] = ∞. Suppose there are n nodes in the digraph. Each node S begins evaluating the costs of the shortest paths to the other nodes by selecting the neighbour x in V − S with the lowest weight W from S to x. It then updates the estimate D[V] for every node V that is a neighbour of S, using the formula

D[V] = min(D[V], D[S] + C(S, V))

where C(S, V) is the cost of the edge connecting S and V. This procedure continues until all nodes of the digraph have been visited. When the algorithm terminates, every node in the network knows the shortest path to every other node.
Steps of Algorithm
The source node is initialized and represented by a black circle.
The costs of reaching the neighbouring nodes are determined, and the labels of those nodes are updated.
The node with the smallest tentative label among the examined nodes is made permanent and becomes the new working (source) node.
As shown in the example, we want to get from A to G. The steps are as follows:
First, A is taken as the source and marked permanent. The algorithm then makes C permanent, then F, and then G, which is nearest to F and is the destination. The total weight from A to G is 7, and the shortest path is (A, C, F, G), rather than (A, B, E, G).
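A minimal Python implementation of Dijkstra's algorithm using a priority queue (the standard heapq module) is sketched below. Since the figure is not reproduced here, the graph is hypothetical: its weights were chosen only so that the shortest path from A to G is (A, C, F, G) with total weight 7, matching the walkthrough above.

import heapq

graph = {
    "A": {"B": 2, "C": 1},
    "B": {"E": 4},
    "C": {"F": 3},
    "E": {"G": 3},
    "F": {"G": 3},
    "G": {},
}

def dijkstra(source):
    dist = {node: float("inf") for node in graph}   # initially C[S, V] = infinity
    dist[source] = 0
    prev = {}
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue                                # skip stale queue entries
        for v, w in graph[u].items():
            if d + w < dist[v]:                     # D[V] = min(D[V], D[S] + C(S, V))
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(queue, (dist[v], v))
    return dist, prev

dist, prev = dijkstra("A")
path, node = [], "G"
while node != "A":                                  # walk predecessors back to the source
    path.append(node)
    node = prev[node]
path.append("A")
print(dist["G"], path[::-1])                        # 7 ['A', 'C', 'F', 'G']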
Apr. 10, 2009
Two places on opposite sides of Earth may hold the secret to how the moon was born. NASA's twin Solar Terrestrial Relations Observatory (STEREO) spacecraft are about to enter these zones, known as the L4 and L5 Lagrangian points, each centered about 93 million miles away along Earth's orbit.
As rare as free parking in New York City, L4 and L5 are among the special points in our solar system around which spacecraft and other objects can loiter. They are where the gravitational pull of a nearby planet or the sun balances the forces from the object's orbital motion. Such points closer to Earth are sometimes used as spaceship "parking lots", like the L1 point a million miles away in the direction of the sun. They are officially called Libration points or Lagrangian points after Joseph-Louis Lagrange, an Italian-French mathematician who helped discover them.
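The geometry of L4 and L5 is easy to check numerically: each point forms an equilateral triangle with the sun and Earth, sitting 60 degrees ahead of or behind Earth along its orbit and one astronomical unit from both bodies. The short Python sketch below (not mission code; the constant and function names are invented) reproduces the roughly 93 million miles quoted above.

import math

AU_MILES = 92.96e6                      # one astronomical unit, in miles

def lagrange_point(earth_angle_deg, lead_deg):
    """Heliocentric x, y of L4 (+60 deg) or L5 (-60 deg), circular orbit."""
    theta = math.radians(earth_angle_deg + lead_deg)
    return AU_MILES * math.cos(theta), AU_MILES * math.sin(theta)

earth = (AU_MILES, 0.0)                 # Earth at angle 0 on its orbit
l4 = lagrange_point(0, +60)             # 60 degrees ahead of Earth
l5 = lagrange_point(0, -60)             # 60 degrees behind Earth

for name, point in (("L4", l4), ("L5", l5)):
    d = math.dist(earth, point)         # equilateral triangle: d = 1 AU
    print(name, f"{d/1e6:.1f} million miles from Earth")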
L4 and L5 are where an object's motion can be balanced by the combined gravity of the sun and Earth. "These places may hold small asteroids, which could be leftovers from a Mars-sized planet that formed billions of years ago," said Michael Kaiser, Project Scientist for STEREO at NASA's Goddard Space Flight Center in Greenbelt, Md. "According to Edward Belbruno and Richard Gott at Princeton University, about 4.5 billion years ago when the planets were still growing, this hypothetical world, called Theia, may have been nudged out of L4 or L5 by the increasing gravity of the other developing planets like Venus and sent on a collision course with Earth. The resulting impact blasted the outer layers of Theia and Earth into orbit, which eventually coalesced under their own gravity to form the moon."
This theory is a modification of the "giant impact" theory of the moon's origin, which has become the dominant theory because it explains some puzzling properties of the moon, such as its relatively small iron core. According to giant impact, at the time of the collision, the two planets were large enough to be molten, so heavier elements, like iron, sank to their centers to form their cores.
The impact stripped away the outer layers of the two worlds, which contained mostly lighter elements, like silicon. Since the moon formed from this material, it is iron-poor.
STEREO will look for asteroids with a wide-field-of-view telescope that's part of the Sun Earth Connection Coronal and Heliospheric Investigation instrument. Any asteroid will probably appear as just a point of light. Like a picky person circling the mall for the perfect parking space, the asteroids orbit the L4 or L5 points. The team will be able to tell if a dot is an asteroid because it will shift its position against stars in the background as it moves in its orbit. The team is inviting the public to participate in the search by viewing the data and filing a report online.
Kaiser said, "If we discover the asteroids have the same composition as the Earth and moon, it will support Belbruno and Gott's version of the giant impact theory. The asteroids themselves could well be left-over from the formation of the solar system. Also, the L4/L5 regions might be the home of future Earth-impacting asteroids."
Analyses of lunar rocks brought to Earth by the Apollo missions reveal that they have the same isotopes (heavier versions of an element) as terrestrial rocks. Scientists believe that the sun and the worlds of our solar system formed out of a cloud of gas and dust that collapsed under its gravity. The composition of this primordial cloud changed with temperature. Since the temperature decreased with distance from the sun, whatever created the moon must have formed in the same orbital location as Earth in order for them to have the same isotope composition.
In a planetary version of "the rich get richer", Earth's gravity should have swept up most of the material in its orbit, leaving too little to create our large moon or another planet like Theia. "However, computer models by Belbruno and Gott indicate that Theia could have grown large enough to produce the moon if it formed in the L4 or L5 regions, where the balance of forces allowed enough material to accumulate," said Kaiser.
The STEREO spacecraft are designed to give 3D views of space weather by observing the sun from two points of view and combining the images in the same way your eyes work together to give a 3D view of the world. STEREO "A" is moving slightly ahead of Earth and will pass through L4, and STEREO "B" is moving slightly behind Earth and will pass through L5. "Taking the time to observe L4 and L5 is kind of cool because it's free. We're going through there anyway -- we're moving too fast to get stuck," said Kaiser. "In fact, after we pass through these regions, we will see them all the time because our instruments will be looking back through them to observe the sun – they will just happen to be in our field of view."
Although L4 and L5 are just points mathematically, their region of influence is huge – about 50 million miles along the direction of Earth's orbit, and 10 million miles along the direction of the sun. It will take several months for STEREO to pass through them, with STEREO A making its closest pass to L4 in September, and STEREO B making its closest pass to L5 in October.
"L4 or L5 are excellent places to observe space weather. With both the sun and Earth in view, we could track solar storms and watch them evolve as they move toward Earth. Also, since we could see sides of the sun not visible from Earth, we would have a few days warning before stormy regions on the solar surface rotate to become directed at Earth," said Kaiser.
Summary: To print without the newline character in Python 3, set the end argument in the print() function to the empty string or the single whitespace character. This ensures that there won't be a newline in the standard output after each execution of print(). Alternatively, unpack the iterable into the print() function to avoid the use of multiple print() calls.
Let’s go over this problem and these two solutions step-by-step.
Problem: How to use the print() function without printing an implicit newline character to the Python shell?
Example: Say, you want to use the print() function within a for loop, but you don't want to see multiple newlines between the printed output:
for i in range(1,5):
    print(i)
The default standard output is the following:
1
2
3
4
But you want the output printed on a single line:
1 2 3 4
How to accomplish this in Python 3?
Solution: the quick fix is to set the print() function's end argument, for example print(i, end=' '). By reading on, you'll understand how this works and become a better coder in the process.
Let's have a quick recap of the Python print() function.
Python Print Function – Quick Start Guide
There are two little-used arguments of the print function in Python.
- The argument sep indicates the separator which is printed between the objects.
- The argument end defines what comes at the end of each line.
Related Article: Python Print Function [And Its SECRET Separator & End Arguments]
Consider the following example:
a = 'hello'
b = 'world'
print(a, b, sep=' Python ', end='!')
Solution 1: End Argument of Print Function
Having studied this short guide, you can now see how to solve the problem:
To print the output of the for loop to a single line, you need to define the end argument of the print function to be something different than the default newline character. In our example, we want to use the empty space after each string we pass into the print() function. Here's how you accomplish this:
for i in range(1,5):
    print(i, end=' ')
The shell output is now concentrated on a single line:
1 2 3 4
By defining the end argument, you can customize the output to your problem.
Solution 2: Unpacking
However, there’s an even more advanced solution that’s more concise and more Pythonic. It makes use of the unpacking feature in Python.
print(*range(1,5)) # 1 2 3 4
The asterisk prefix * before the range(1,5) unpacks all values in the range iterable into the print function. This way, it becomes similar to the function execution print(1, 2, 3, 4) with comma-separated arguments. You can use an arbitrary number of arguments in the print() function.
By default, Python will print these arguments with an empty space in between. If you want to customize this separator string, you can use the sep argument as you've learned above.
How to Print a List?
Do you want to print a list to the shell? Just follow these simple steps:
- Pass a list as an input to the print() function in Python.
- Use the asterisk operator * in front of the list to “unpack” the list into the print function.
- Use the sep argument to define how to separate two list elements visually.
Here’s the code:
# Create the Python list
lst = [1, 2, 3, 4, 5]

# Use three underscores as separator
print(*lst, sep='___')  # 1___2___3___4___5

# Use an arrow as separator
print(*lst, sep='-->')  # 1-->2-->3-->4-->5
This is the best and most Pythonic way to print a Python list. If you still want to learn about alternatives—and improve your Python skills in the process of doing so—read the following tutorial!
Related Article: Print a Python List Beautifully [Click & Run Code]
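As a quick taste of one such alternative, here’s a sketch that builds the output string explicitly with str.join() instead of unpacking; join() requires strings, hence the map call:

lst = [1, 2, 3, 4, 5]
print('___'.join(map(str, lst)))  # 1___2___3___4___5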
Where to Go From Here?
Enough theory, let’s get some practice!
To become successful in coding, you need to get out there and solve real problems for real people. That’s how you can become a six-figure earner easily. And that’s how you polish the skills you really need in practice. After all, what’s the use of learning theory that nobody ever needs?
Practice projects are how you sharpen your saw in coding!
Do you want to become a code master by focusing on practical code projects that actually earn you money and solve problems for people?
Then become a Python freelance developer! It’s the best way of approaching the task of improving your Python skills—even if you are a complete beginner.
Join my free webinar “How to Build Your High-Income Skill Python” and watch how I grew my coding business online and how you can, too—from the comfort of your own home.
|
Cos 60 Degrees
In trigonometry, sine, cosine and tangent are the three major or primary ratios, which are used to find the angles and lengths of the sides of a triangle. Before deriving the value of cos 60 degrees, which is equal to 1/2, let us understand the importance of the cosine function in trigonometry.
The cosine function defines a relation between the adjacent side and the hypotenuse of a right triangle with respect to the angle formed between the adjacent side and the hypotenuse. In other words, the cosine of angle α is equal to the ratio of the adjacent side (also called the base) to the hypotenuse of a right-angled triangle.
A few important values of the other trigonometric ratios are also worth noting. The trigonometric functions sin, cos and tan for an angle are the primary functions. The value of cos 60 degrees, and the other trigonometric ratios for the angles 0°, 30°, 45°, 60°, 90° and 180°, are frequently used in trigonometric equations. These trigonometric values are easy to memorize with the help of a trigonometry table.
Cos 60 Degree Value
In a right-angled triangle, the cosine of ∠α is the ratio of the length of the side adjacent to ∠α to the length of the hypotenuse, where ∠α is the angle formed between the adjacent side and the hypotenuse.
Cosine α = Adjacent Side / Hypotenuse
Cos α = AC / AB
Cos α = b / h
Now, to find the value of cos 60 degrees, let us consider an equilateral triangle ABC as given below:
Here, AB = BC = AC, and AD is the perpendicular from A that bisects BC into two equal parts.
As we know, cos B = BD/AB
Let the length of each side be 2 units, so that AB = BC = AC = 2 units and BD = CD = 1 unit.
Therefore, the value of cos 60° = BD/AB = ½
In the same way, we can write the values of sin 60° and tan 60° by evaluating the required sides.
In right triangle ABD, by Pythagoras theorem:
AB² = AD² + BD²
2² = AD² + 1²
AD² = 2² – 1²
AD² = 4 – 1
AD² = 3
AD = √3
Now, we have got all the sides of triangle ABD.
Sin 60° = AD/AB = √3/2
Tan 60° = AD/BD = √3 / 1 = √3
We can also write the value of cos 60 degrees in decimal form as:
cos 60° = 1/2 = 0.5
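As a quick numerical check, Python’s standard math module confirms this value (a small sketch; note that math.cos expects radians, not degrees):

import math

# Convert 60 degrees to radians before taking the cosine
print(math.cos(math.radians(60)))  # 0.5000000000000001, i.e. 1/2 up to floating-point error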
Also, we can write the values of sine, cosine and tangent with respect to all the degrees in a table.
Let us draw a table with respect to degrees and radians for sine, cosine and tangent functions.
|Angle in Degrees||0||30||45||60||90||180||270||360|
|Angle in Radians||0||π/6||π/4||π/3||π/2||π||3π/2||2π|
|sin||0||1/2||1/√2||√3/2||1||0||−1||0|
|cos||1||√3/2||1/√2||1/2||0||−1||0||1|
|tan||0||1/√3||1||√3||Not defined||0||Not defined||0|
Unit Circle in Trigonometry
For a unit circle, whose radius is equal to one, the values of the trigonometric ratios can be given in radians as well as in degrees. On the unit circle, half a revolution corresponds to an angle of π radians. In the figure below, the angle values are represented in degrees instead of radians.
You have now learned the value of cos 60 degrees along with the values for other angles, as well as the derived values of sin and tan with respect to degrees and radians. In the same way, we can find the values for the other trigonometric ratios: secant, cosecant and cotangent.
Let us see some examples where we can use the value of cos 60 degrees.
Example 1: Find the value of cos 60° + sin 30°.
cos 60° + sin 30°
= cos 60° + sin (90° – 60°)
= cos 60° + cos 60°
= (1/2) + (1/2)
= (1 + 1)/2 = 2/2 = 1
Alternatively, cos 60° + sin 30° = (1/2) + (1/2) = (1 + 1)/2 = 2/2 = 1
Example 2: Calculate: 2 sin 60° – 4 cos 60°
We know that, sin 60° = √3/2 and cos 60° = 1/2
Thus, 2 sin 60° – 4 cos 60° = 2(√3/2) – 4(1/2)
= √3 – 2
Learn more about trigonometric ratios/identities/functions at BYJU’S and download BYJU’S – The Learning App for a better experience.
|
A fossil (from Classical Latin fossilis; literally, "obtained by digging") is any preserved remains, impression, or trace of any once-living thing from a past geological age. Examples include bones, shells, exoskeletons, stone imprints of animals or microbes, objects preserved in amber, hair, petrified wood, oil, coal, and DNA remnants. The totality of fossils is known as the fossil record.
Paleontology is the study of fossils: their age, method of formation, and evolutionary significance. Specimens are usually considered to be fossils if they are over 10,000 years old. The oldest fossils are around 3.48 billion years old to 4.1 billion years old. The observation in the 19th century that certain fossils were associated with certain rock strata led to the recognition of a geological timescale and the relative ages of different fossils. The development of radiometric dating techniques in the early 20th century allowed scientists to quantitatively measure the absolute ages of rocks and the fossils they host.
There are many processes that lead to fossilization, including permineralization, casts and molds, authigenic mineralization, replacement and recrystallization, adpression, carbonization, and bioimmuration.
Fossils vary in size from one-micrometre (1 µm) bacteria to dinosaurs and trees, many meters long and weighing many tons. A fossil normally preserves only a portion of the deceased organism, usually that portion that was partially mineralized during life, such as the bones and teeth of vertebrates, or the chitinous or calcareous exoskeletons of invertebrates. Fossils may also consist of the marks left behind by the organism while it was alive, such as animal tracks or feces (coprolites). These types of fossil are called trace fossils or ichnofossils, as opposed to body fossils. Some fossils are biochemical and are called chemofossils or biosignatures.
The process of fossilization varies according to tissue type and external conditions.
Permineralization is a process of fossilization that occurs when an organism is buried. The empty spaces within an organism (spaces filled with liquid or gas during life) become filled with mineral-rich groundwater. Minerals precipitate from the groundwater, occupying the empty spaces. This process can occur in very small spaces, such as within the cell wall of a plant cell. Small scale permineralization can produce very detailed fossils. For permineralization to occur, the organism must become covered by sediment soon after death, otherwise decay commences. The degree to which the remains are decayed when covered determines the later details of the fossil. Some fossils consist only of skeletal remains or teeth; other fossils contain traces of skin, feathers or even soft tissues. This is a form of diagenesis.
In some cases, the original remains of the organism completely dissolve or are otherwise destroyed. The remaining organism-shaped hole in the rock is called an external mold. If this hole is later filled with other minerals, it is a cast. An endocast, or internal mold, is formed when sediments or minerals fill the internal cavity of an organism, such as the inside of a bivalve or snail or the hollow of a skull.
This is a special form of cast and mold formation. If the chemistry is right, the organism (or fragment of organism) can act as a nucleus for the precipitation of minerals such as siderite, resulting in a nodule forming around it. If this happens rapidly before significant decay to the organic tissue, very fine three-dimensional morphological detail can be preserved. Nodules from the Carboniferous Mazon Creek fossil beds of Illinois, USA, are among the best documented examples of such mineralization.
Replacement occurs when the shell, bone or other tissue is replaced with another mineral. In some cases mineral replacement of the original shell occurs so gradually and at such fine scales that microstructural features are preserved despite the total loss of original material. A shell is said to be recrystallized when the original skeletal compounds are still present but in a different crystal form, as from aragonite to calcite.
Compression fossils, such as those of fossil ferns, are the result of chemical reduction of the complex organic molecules composing the organism's tissues. In this case the fossil consists of original material, albeit in a geochemically altered state. This chemical change is an expression of diagenesis. Often what remains is a carbonaceous film known as a phytoleim, in which case the fossil is known as a compression. Often, however, the phytoleim is lost and all that remains is an impression of the organism in the rock—an impression fossil. In many cases, however, compressions and impressions occur together. For instance, when the rock is broken open, the phytoleim will often be attached to one part (compression), whereas the counterpart will just be an impression. For this reason, one term covers the two modes of preservation: adpression.
Because of their antiquity, an unexpected exception to the alteration of an organism's tissues by chemical reduction of the complex organic molecules during fossilization has been the discovery of soft tissue in dinosaur fossils, including blood vessels, and the isolation of proteins and evidence for DNA fragments. In 2014, Mary Schweitzer and her colleagues reported the presence of iron particles (goethite, α-FeO(OH)) associated with soft tissues recovered from dinosaur fossils. Based on various experiments that studied the interaction of iron in haemoglobin with blood vessel tissue, they proposed that solution hypoxia coupled with iron chelation enhances the stability and preservation of soft tissue and provides the basis for an explanation for the unforeseen preservation of fossil soft tissues. However, a slightly older study based on eight taxa ranging in time from the Devonian to the Jurassic found that reasonably well-preserved fibrils that probably represent collagen were preserved in all these fossils and that the quality of preservation depended mostly on the arrangement of the collagen fibers, with tight packing favoring good preservation. There seemed to be no correlation between geological age and quality of preservation within that timeframe.
Fossils that are carbonized or coalified consist of the organic remains which have been reduced primarily to the chemical element carbon. Carbonized fossils consist of a thin film which forms a silhouette of the original organism, and the original organic remains were typically soft tissues. Coalified fossils consist primarily of coal, and the original organic remains were typically woody in composition.
Bioimmuration occurs when a skeletal organism overgrows or otherwise subsumes another organism, preserving the latter, or an impression of it, within the skeleton. Usually it is a sessile skeletal organism, such as a bryozoan or an oyster, which grows along a substrate, covering other sessile sclerobionts. Sometimes the bioimmured organism is soft-bodied and is then preserved in negative relief as a kind of external mold. There are also cases where an organism settles on top of a living skeletal organism that grows upwards, preserving the settler in its skeleton. Bioimmuration is known in the fossil record from the Ordovician to the Recent.
Paleontology seeks to map out how life evolved across geologic time. A substantial hurdle is the difficulty of working out fossil ages. Beds that preserve fossils typically lack the radioactive elements needed for radiometric dating. This technique is our only means of giving rocks greater than about 50 million years old an absolute age, and can be accurate to within 0.5% or better. Although radiometric dating requires careful laboratory work, its basic principle is simple: the rates at which various radioactive elements decay are known, and so the ratio of the radioactive element to its decay products shows how long ago the radioactive element was incorporated into the rock. Radioactive elements are common only in rocks with a volcanic origin, and so the only fossil-bearing rocks that can be dated radiometrically are volcanic ash layers, which may provide termini for the intervening sediments.
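As an illustration of the decay-ratio principle described above (a minimal sketch, not any particular laboratory workflow; the helper function is hypothetical), an age can be recovered from a measured parent-to-daughter ratio, assuming a closed system with no daughter product present at formation:

import math

def radiometric_age(parent_atoms, daughter_atoms, half_life_years):
    """Age implied by a parent/daughter ratio, assuming a closed
    system and no daughter atoms present when the rock formed."""
    original_parent = parent_atoms + daughter_atoms
    # N(t) = N0 * (1/2)**(t / half_life)  =>  t = half_life * log2(N0 / N)
    return half_life_years * math.log2(original_parent / parent_atoms)

# Equal parent and daughter counts correspond to exactly one half-life:
print(radiometric_age(1000, 1000, 1.25e9))  # 1.25e9 years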
Consequently, palaeontologists rely on stratigraphy to date fossils. Stratigraphy is the science of deciphering the "layer-cake" that is the sedimentary record. Rocks normally form relatively horizontal layers, with each layer younger than the one underneath it. If a fossil is found between two layers whose ages are known, the fossil's age is claimed to lie between the two known ages. Because rock sequences are not continuous, but may be broken up by faults or periods of erosion, it is very difficult to match up rock beds that are not directly adjacent. However, fossils of species that survived for a relatively short time can be used to match isolated rocks: this technique is called biostratigraphy. For instance, the conodont Eoplacognathus pseudoplanus has a short range in the Middle Ordovician period. If rocks of unknown age have traces of E. pseudoplanus, they have a mid-Ordovician age. Such index fossils must be distinctive, be globally distributed and occupy a short time range to be useful. Misleading results are produced if the index fossils are incorrectly dated. Stratigraphy and biostratigraphy can in general provide only relative dating (A was before B), which is often sufficient for studying evolution. However, this is difficult for some time periods, because of the problems involved in matching rocks of the same age across continents. Family-tree relationships also help to narrow down the date when lineages first appeared. For instance, if fossils of B or C date to X million years ago and the calculated "family tree" says A was an ancestor of B and C, then A must have evolved earlier.
It is also possible to estimate how long ago two living clades diverged, in other words approximately how long ago their last common ancestor must have lived, by assuming that DNA mutations accumulate at a constant rate. These "molecular clocks", however, are fallible, and provide only approximate timing: for example, they are not sufficiently precise and reliable for estimating when the groups that feature in the Cambrian explosion first evolved, and estimates produced by different techniques may vary by a factor of two.
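To make the molecular-clock idea concrete, here is a minimal sketch under the strict-clock assumption of a constant substitution rate; the function and the numbers are purely illustrative:

def divergence_time(genetic_distance, rate_per_site_per_year):
    """Estimate the time since two lineages split.

    genetic_distance: substitutions per site separating the two lineages.
    Both lineages accumulate mutations, so distance = 2 * rate * time.
    """
    return genetic_distance / (2 * rate_per_site_per_year)

# Illustrative numbers only: 2% divergence at 1e-9 substitutions/site/year
print(divergence_time(0.02, 1e-9))  # 10,000,000.0 years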
Organisms are only rarely preserved as fossils in the best of circumstances, and only a fraction of such fossils have been discovered. This is illustrated by the fact that the number of species known through the fossil record is less than 5% of the number of known living species, suggesting that the number of species known through fossils must be far less than 1% of all the species that have ever lived. Because of the specialized and rare circumstances required for a biological structure to fossilize, only a small percentage of life-forms can be expected to be represented in discoveries, and each discovery represents only a snapshot of the process of evolution. The transition itself can only be illustrated and corroborated by transitional fossils, which will never demonstrate an exact half-way point.
The fossil record is strongly biased toward organisms with hard parts, leaving most groups of soft-bodied organisms with little to no role. It is replete with the mollusks, the vertebrates, the echinoderms, the brachiopods and some groups of arthropods.
Fossil sites with exceptional preservation—sometimes including preserved soft tissues—are known as Lagerstätten, German for "storage places". These formations may have resulted from carcass burial in an anoxic environment with minimal bacteria, thus slowing decomposition. Lagerstätten span geological time from the Cambrian period to the present. Worldwide, some of the best examples of near-perfect fossilization are the Cambrian Maotianshan shales and Burgess Shale, the Devonian Hunsrück Slates, the Jurassic Solnhofen limestone, and the Carboniferous Mazon Creek localities.
Stromatolites are layered accretionary structures formed in shallow water by the trapping, binding and cementation of sedimentary grains by biofilms of microorganisms, especially cyanobacteria. Stromatolites provide some of the most ancient fossil records of life on Earth, dating back more than 3.5 billion years.
Stromatolites were much more abundant in Precambrian times. While older, Archean fossil remains are presumed to be colonies of cyanobacteria, younger (that is, Proterozoic) fossils may be primordial forms of the eukaryote chlorophytes (that is, green algae). One genus of stromatolite very common in the geologic record is Collenia. The earliest stromatolite of confirmed microbial origin dates to 2.724 billion years ago.
A 2009 discovery provides strong evidence of microbial stromatolites extending as far back as 3.45 billion years ago.
Stromatolites are a major constituent of the fossil record for life's first 3.5 billion years, peaking about 1.25 billion years ago. They subsequently declined in abundance and diversity, which by the start of the Cambrian had fallen to 20% of their peak. The most widely supported explanation is that stromatolite builders fell victim to grazing creatures (the Cambrian substrate revolution), implying that sufficiently complex organisms were common over 1 billion years ago.
The connection between grazer and stromatolite abundance is well documented in the younger Ordovician evolutionary radiation; stromatolite abundance also increased after the end-Ordovician and end-Permian extinctions decimated marine animals, falling back to earlier levels as marine animals recovered. Fluctuations in metazoan population and diversity may not have been the only factor in the reduction in stromatolite abundance. Factors such as the chemistry of the environment may have been responsible for changes.
While prokaryotic cyanobacteria themselves reproduce asexually through cell division, they were instrumental in priming the environment for the evolutionary development of more complex eukaryotic organisms. Cyanobacteria (as well as extremophile Gammaproteobacteria) are thought to be largely responsible for increasing the amount of oxygen in the primeval earth's atmosphere through their continuing photosynthesis. Cyanobacteria use water, carbon dioxide and sunlight to create their food. A layer of mucus often forms over mats of cyanobacterial cells. In modern microbial mats, debris from the surrounding habitat can become trapped within the mucus, which can be cemented by the calcium carbonate to grow thin laminations of limestone. These laminations can accrete over time, resulting in the banded pattern common to stromatolites. The domal morphology of biological stromatolites is the result of the vertical growth necessary for the continued infiltration of sunlight to the organisms for photosynthesis. Layered spherical growth structures termed oncolites are similar to stromatolites and are also known from the fossil record. Thrombolites are poorly laminated or non-laminated clotted structures formed by cyanobacteria common in the fossil record and in modern sediments.
The Zebra River Canyon area of the Kubis platform in the deeply dissected Zaris Mountains of southwestern Namibia provides an extremely well exposed example of the thrombolite-stromatolite-metazoan reefs that developed during the Proterozoic period, the stromatolites here being better developed in updip locations under conditions of higher current velocities and greater sediment influx.
Index fossils (also known as guide fossils, indicator fossils or zone fossils) are fossils used to define and identify geologic periods (or faunal stages). They work on the premise that, although different sediments may look different depending on the conditions under which they were deposited, they may include the remains of the same species of fossil. The shorter the species' time range, the more precisely different sediments can be correlated, and so rapidly evolving species' fossils are particularly valuable. The best index fossils are common, easy to identify at species level and have a broad distribution—otherwise the likelihood of finding and recognizing one in the two sediments is poor.
Trace fossils consist mainly of tracks and burrows, but also include coprolites (fossil feces) and marks left by feeding. Trace fossils are particularly significant because they represent a data source that is not limited to animals with easily fossilized hard parts, and they reflect animal behaviours. Many traces date from significantly earlier than the body fossils of animals that are thought to have been capable of making them. Whilst exact assignment of trace fossils to their makers is generally impossible, traces may for example provide the earliest physical evidence of the appearance of moderately complex animals (comparable to earthworms).
Coprolites are classified as trace fossils as opposed to body fossils, as they give evidence for the animal's behaviour (in this case, diet) rather than morphology. They were first described by William Buckland in 1829. Prior to this they were known as "fossil fir cones" and "bezoar stones." They serve a valuable purpose in paleontology because they provide direct evidence of the predation and diet of extinct organisms. Coprolites may range in size from a few millimetres to over 60 centimetres.
A transitional fossil is any fossilized remains of a life form that exhibits traits common to both an ancestral group and its derived descendant group. This is especially important where the descendant group is sharply differentiated by gross anatomy and mode of living from the ancestral group. Because of the incompleteness of the fossil record, there is usually no way to know exactly how close a transitional fossil is to the point of divergence. These fossils serve as a reminder that taxonomic divisions are human constructs that have been imposed in hindsight on a continuum of variation.
Microfossil is a descriptive term applied to fossilized plants and animals whose size is just at or below the level at which the fossil can be analyzed by the naked eye. A commonly applied cutoff point between "micro" and "macro" fossils is 1 mm. Microfossils may either be complete (or near-complete) organisms in themselves (such as the marine plankters foraminifera and coccolithophores) or component parts (such as small teeth or spores) of larger animals or plants. Microfossils are of critical importance as a reservoir of paleoclimate information, and are also commonly used by biostratigraphers to assist in the correlation of rock units.
Fossil resin (colloquially called amber) is a natural polymer found in many types of strata throughout the world, even the Arctic. The oldest fossil resin dates to the Triassic, though most dates to the Cenozoic. The excretion of the resin by certain plants is thought to be an evolutionary adaptation for protection from insects and to seal wounds. Fossil resin often contains other fossils called inclusions that were captured by the sticky resin. These include bacteria, fungi, other plants, and animals. Animal inclusions are usually small invertebrates, predominantly arthropods such as insects and spiders, and only extremely rarely a vertebrate such as a small lizard. Preservation of inclusions can be exquisite, including small fragments of DNA.
A derived, reworked or remanié fossil is a fossil found in rock that accumulated significantly later than when the fossilized animal or plant died. Reworked fossils are created by erosion exhuming (freeing) fossils from the rock formation in which they were originally deposited and their redeposition in a younger sedimentary deposit.
Fossil wood is wood that is preserved in the fossil record. Wood is usually the part of a plant that is best preserved (and most easily found). Fossil wood may or may not be petrified. The fossil wood may be the only part of the plant that has been preserved: therefore such wood may get a special kind of botanical name. This will usually include "xylon" and a term indicating its presumed affinity, such as Araucarioxylon (wood of Araucaria or some related genus), Palmoxylon (wood of an indeterminate palm), or Castanoxylon (wood of an indeterminate chinkapin).
The term subfossil can be used to refer to remains, such as bones, nests, or defecations, whose fossilization process is not complete, either because the length of time since the animal involved was living is too short (less than 10,000 years) or because the conditions in which the remains were buried were not optimal for fossilization. Subfossils are often found in caves or other shelters where they can be preserved for thousands of years. The main importance of subfossil vs. fossil remains is that the former contain organic material, which can be used for radiocarbon dating or extraction and sequencing of DNA, protein, or other biomolecules. Additionally, isotope ratios can provide much information about the ecological conditions under which extinct animals lived. Subfossils are useful for studying the evolutionary history of an environment and can be important to studies in paleoclimatology.
Subfossils are often found in depositionary environments, such as lake sediments, oceanic sediments, and soils. Once deposited, physical and chemical weathering can alter the state of preservation.
Chemical fossils, or chemofossils, are chemicals found in rocks and fossil fuels (petroleum, coal, and natural gas) that provide an organic signature for ancient life. Molecular fossils and isotope ratios represent two types of chemical fossils. The oldest traces of life on Earth are fossils of this type, including carbon isotope anomalies found in zircons that imply the existence of life as early as 4.1 billion years ago.
It has been suggested that biominerals could be important indicators of extraterrestrial life and thus could play an important role in the search for past or present life on the planet Mars. Furthermore, organic components (biosignatures) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions.
On 24 January 2014, NASA reported that current studies by the Curiosity and Opportunity rovers on Mars will now be searching for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars is now a primary NASA objective.
Pseudofossils are visual patterns in rocks that are produced by geologic processes rather than biologic processes. They can easily be mistaken for real fossils. Some pseudofossils, such as geological dendrite crystals, are formed by naturally occurring fissures in the rock that get filled up by percolating minerals. Other types of pseudofossils are kidney ore (round shapes in iron ore) and moss agates, which look like moss or plant leaves. Concretions, spherical or ovoid-shaped nodules found in some sedimentary strata, were once thought to be dinosaur eggs, and are often mistaken for fossils as well.
Gathering fossils dates at least to the beginning of recorded history. The fossils themselves are referred to as the fossil record. The fossil record was one of the early sources of data underlying the study of evolution and continues to be relevant to the history of life on Earth. Paleontologists examine the fossil record to understand the process of evolution and the way particular species have evolved.
Fossils have been visible and common throughout most of natural history, and so documented human interaction with them goes back as far as recorded history, or earlier.
There are many examples of paleolithic stone knives in Europe, with fossil echinoderms set precisely at the hand grip, going all the way back to Homo heidelbergensis and Neanderthals. These ancient peoples also drilled holes through the center of those round fossil shells, apparently using them as beads for necklaces.
The ancient Egyptians gathered fossils of species that resembled the bones of modern species they worshipped. The god Set was associated with the hippopotamus, therefore fossilized bones of hippo-like species were kept in that deity's temples. Five-rayed fossil sea urchin shells were associated with the deity Sopdu, the Morning Star, equivalent of Venus in Roman mythology.
Fossils appear to have directly contributed to the mythology of many civilizations, including the ancient Greeks. Classical Greek historian Herodotos wrote of an area near Hyperborea where gryphons protected golden treasure. There was indeed gold mining in that approximate region, where beaked Protoceratops skulls were common as fossils.
A later Greek scholar, Aristotle, eventually realized that fossil seashells from rocks were similar to those found on the beach, indicating the fossils were once living animals. He had previously explained them in terms of vaporous exhalations, a view that the Persian polymath Avicenna later modified into the theory of petrifying fluids (succus lapidificatus). This was built upon in the 14th century by Albert of Saxony, and accepted in some form by most naturalists by the 16th century.
Roman naturalist Pliny the Elder wrote of "tongue stones", which he called glossopetra. These were fossil shark teeth, thought by some classical cultures to look like the tongues of people or snakes. He also wrote about the horns of Ammon, which are fossil ammonites, whence the species ultimately draws its modern name. Pliny also makes one of the earlier known references to toadstones, thought until the 18th century to be a magical cure for poison originating in the heads of toads, but which are fossil teeth from Lepidotes, a Cretaceous ray-finned fish.
The Plains tribes of North America are thought to have similarly associated fossils, such as the many intact pterosaur fossils naturally exposed in the region, with their own mythology of the thunderbird.
There is no such direct mythological connection known from prehistoric Africa, but there is considerable evidence of tribes there excavating and moving fossils to ceremonial sites, apparently treating them with some reverence.
In Japan, fossil shark teeth were associated with the mythical tengu, thought to be the razor-sharp claws of the creature, documented some time after the 8th century AD.
In medieval China, the fossil bones of ancient mammals including Homo erectus were often mistaken for "dragon bones" and used as medicine and aphrodisiacs. In addition, some of these fossil bones were collected as "art" by scholars, who left scripts on various artifacts, indicating the time they were added to a collection. One good example is the famous scholar Huang Tingjian of the South Song Dynasty during the 11th century, who kept a specific seashell fossil with his own poem engraved on it. In the West, fossilized sea creatures on mountainsides were seen as proof of the biblical deluge.
In 1027, the Persian Avicenna explained fossils' stoniness in The Book of Healing:
If what is said concerning the petrifaction of animals and plants is true, the cause of this (phenomenon) is a powerful mineralizing and petrifying virtue which arises in certain stony spots, or emanates suddenly from the earth during earthquake and subsidences, and petrifies whatever comes into contact with it. As a matter of fact, the petrifaction of the bodies of plants and animals is not more extraordinary than the transformation of waters.
From the 13th century to the present day, scholars have pointed out that the fossil skulls of Deinotherium giganteum, found in Crete and Greece, might have been interpreted as being the skulls of the Cyclopes of Greek mythology, and are possibly the origin of that Greek myth. Their skulls appear to have a single eye-hole in the front, just like their modern elephant cousins, though it is actually the opening for their trunk.
In Norse mythology, echinoderm shells (the round five-part button left over from a sea urchin) were associated with the god Thor, not only being incorporated in thunderstones, representations of Thor's hammer and subsequent hammer-shaped crosses as Christianity was adopted, but also kept in houses to garner Thor's protection.
These grew into the shepherd's crowns of English folklore, used for decoration and as good luck charms, placed by the doorway of homes and churches. In Suffolk, a different species was used as a good-luck charm by bakers, who referred to them as fairy loaves, associating them with the similarly shaped loaves of bread they baked.
More scientific views of fossils emerged during the Renaissance. Leonardo da Vinci concurred with Aristotle's view that fossils were the remains of ancient life. For example, da Vinci noticed discrepancies with the biblical flood narrative as an explanation for fossil origins:
If the Deluge had carried the shells for distances of three and four hundred miles from the sea it would have carried them mixed with various other natural objects all heaped up together; but even at such distances from the sea we see the oysters all together and also the shellfish and the cuttlefish and all the other shells which congregate together, found all together dead; and the solitary shells are found apart from one another as we see them every day on the sea-shores.
And we find oysters together in very large families, among which some may be seen with their shells still joined together, indicating that they were left there by the sea and that they were still living when the strait of Gibraltar was cut through. In the mountains of Parma and Piacenza multitudes of shells and corals with holes may be seen still sticking to the rocks...."
In 1666, Nicholas Steno examined a shark, and made the association of its teeth with the "tongue stones" of ancient Greco-Roman mythology, concluding that those were not in fact the tongues of venomous snakes, but the teeth of some long-extinct species of shark.
Robert Hooke (1635-1703) included micrographs of fossils in his Micrographia and was among the first to observe fossil forams. His observations on fossils, which he stated to be the petrified remains of creatures some of which no longer existed, were published posthumously in 1705.
William Smith (1769–1839), an English canal engineer, observed that rocks of different ages (based on the law of superposition) preserved different assemblages of fossils, and that these assemblages succeeded one another in a regular and determinable order. He observed that rocks from distant locations could be correlated based on the fossils they contained. He termed this the principle of faunal succession. This principle became one of Darwin's chief pieces of evidence that biological evolution was real.
Georges Cuvier came to believe that most if not all the animal fossils he examined were remains of extinct species. This led Cuvier to become an active proponent of the geological school of thought called catastrophism. Near the end of his 1796 paper on living and fossil elephants he said:
All of these facts, consistent among themselves, and not opposed by any report, seem to me to prove the existence of a world previous to ours, destroyed by some kind of catastrophe.
Interest in fossils, and geology more generally, expanded during the early nineteenth century. In Britain, Mary Anning's discoveries of fossils, including the first complete ichthyosaur and a complete plesiosaurus skeleton, sparked both public and scholarly interest.
Early naturalists well understood the similarities and differences of living species leading Linnaeus to develop a hierarchical classification system still in use today. Darwin and his contemporaries first linked the hierarchical structure of the tree of life with the then very sparse fossil record. Darwin eloquently described a process of descent with modification, or evolution, whereby organisms either adapt to natural and changing environmental pressures, or they perish.
When Darwin wrote On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life, the oldest animal fossils were those from the Cambrian Period, now known to be about 540 million years old. He worried about the absence of older fossils because of the implications on the validity of his theories, but he expressed hope that such fossils would be found, noting that: "only a small portion of the world is known with accuracy." Darwin also pondered the sudden appearance of many groups (i.e. phyla) in the oldest known Cambrian fossiliferous strata.
Since Darwin's time, the fossil record has been extended to between 2.3 and 3.5 billion years. Most of these Precambrian fossils are microscopic bacteria or microfossils. However, macroscopic fossils are now known from the late Proterozoic. The Ediacara biota (also called Vendian biota) dating from 575 million years ago collectively constitutes a richly diverse assembly of early multicellular eukaryotes.
The fossil record and faunal succession form the basis of the science of biostratigraphy or determining the age of rocks based on embedded fossils. For the first 150 years of geology, biostratigraphy and superposition were the only means for determining the relative age of rocks. The geologic time scale was developed based on the relative ages of rock strata as determined by the early paleontologists and stratigraphers.
Since the early years of the twentieth century, absolute dating methods, such as radiometric dating (including potassium/argon, argon/argon, uranium series, and, for very recent fossils, radiocarbon dating) have been used to verify the relative ages obtained by fossils and to provide absolute ages for many fossils. Radiometric dating has shown that the earliest known stromatolites are over 3.4 billion years old.
Paleontology has joined with evolutionary biology to share the interdisciplinary task of outlining the tree of life, which inevitably leads backwards in time to Precambrian microscopic life when cell structure and functions evolved. Earth's deep time in the Proterozoic and deeper still in the Archean is only "recounted by microscopic fossils and subtle chemical signals." Molecular biologists, using phylogenetics, can compare protein amino acid or nucleotide sequence homology (i.e., similarity) to evaluate taxonomy and evolutionary distances among organisms, with limited statistical confidence. The study of fossils, on the other hand, can more specifically pinpoint when and in what organism a mutation first appeared. Phylogenetics and paleontology work together in the clarification of science's still dim view of the appearance of life and its evolution.
Niles Eldredge's study of the Phacops trilobite genus supported the hypothesis that modifications to the arrangement of the trilobite's eye lenses proceeded by fits and starts over millions of years during the Devonian. Eldredge's interpretation of the Phacops fossil record was that the aftermaths of the lens changes, but not the rapidly occurring evolutionary process, were fossilized. This and other data led Stephen Jay Gould and Niles Eldredge to publish their seminal paper on punctuated equilibrium in 1971.
Synchrotron X-ray tomographic analysis of early Cambrian bilaterian embryonic microfossils yielded new insights into metazoan evolution at its earliest stages. The tomography technique provides previously unattainable three-dimensional resolution at the limits of fossilization. Fossils of two enigmatic bilaterians, the worm-like Markuelia and a putative, primitive protostome, Pseudooides, provide a peek at germ layer embryonic development. These 543-million-year-old embryos support the emergence of some aspects of arthropod development earlier than previously thought in the late Proterozoic. The preserved embryos from China and Siberia underwent rapid diagenetic phosphatization resulting in exquisite preservation, including cell structures. This research is a notable example of how knowledge encoded by the fossil record continues to contribute otherwise unattainable information on the emergence and development of life on Earth. For example, the research suggests Markuelia has closest affinity to priapulid worms, and is adjacent to the evolutionary branching of Priapulida, Nematoda and Arthropoda.
Fossil trading is the practice of buying and selling fossils. This is many times done illegally with artifacts stolen from research sites, costing science many important specimens each year. The problem is quite pronounced in China, where many specimens have been stolen.
Fossil collecting (sometimes, in a non-scientific sense, fossil hunting) is the collection of fossils for scientific study, hobby, or profit. Fossil collecting, as practiced by amateurs, is the predecessor of modern paleontology and many still collect fossils and study fossils as amateurs. Professionals and amateurs alike collect fossils for their scientific value.
There are some medicinal and preventive uses for some fossils. Largely, the use of fossils as medicine is a matter of placebo effect. However, the consumption of certain fossils has been proven to help against stomach acidity and mineral depletion. The use of fossils to address health issues is rooted in traditional medicine and includes the use of fossils as talismans. The specific fossil used to alleviate or cure an illness is often based on the resemblance between the fossil and the symptoms or the affected organ.
Paleontology, sometimes spelled palaeontology, is the scientific study of life that existed prior to, and sometimes including, the start of the Holocene Epoch. It includes the study of fossils to determine organisms' evolution and interactions with each other and their environments. Paleontological observations have been documented as far back as the 5th century BC. The science became established in the 18th century as a result of Georges Cuvier's work on comparative anatomy, and developed rapidly in the 19th century. The term itself originates from Greek παλαιός, palaios, "old, ancient", ὄν, on, "being, creature" and λόγος, logos, "speech, thought, study".
Stromatolites or stromatoliths are layered mounds, columns, and sheet-like sedimentary rocks that were originally formed by the growth of layer upon layer of cyanobacteria, a single-celled photosynthesizing microbe. Fossilized stromatolites provide records of ancient life on Earth. Lichen stromatolites are a proposed mechanism of formation of some kinds of layered rock structure that are formed above water, where rock meets air, by repeated colonization of the rock by endolithic lichens.
The Maotianshan Shales are a series of Early Cambrian deposits in the Chiungchussu Formation, famous for their Konservat Lagerstätten, deposits known for the exceptional preservation of fossilized organisms or traces. The Maotianshan Shales form one of some forty Cambrian fossil locations worldwide exhibiting exquisite preservation of rarely preserved, non-mineralized soft tissue, comparable to the fossils of the Burgess Shale. They take their name from Maotianshan Hill in Chengjiang County, Yunnan Province, China.
An exoskeleton is the external skeleton that supports and protects an animal's body, in contrast to the internal skeleton (endoskeleton) of, for example, a human. In usage, some of the larger kinds of exoskeletons are known as "shells". Examples of animals with exoskeletons include insects such as grasshoppers and cockroaches, and crustaceans such as crabs and lobsters, as well as the shells of certain sponges and the various groups of shelled molluscs, including those of snails, clams, tusk shells, chitons and nautilus. Some animals, such as the tortoise, have both an endoskeleton and an exoskeleton.
Taphonomy is the study of how organisms decay and become fossilized. The term taphonomy was introduced to paleontology in 1949 by Soviet scientist Ivan Efremov to describe the study of the transition of remains, parts, or products of organisms from the biosphere to the lithosphere.
A Lagerstätte is a sedimentary deposit that exhibits extraordinary fossils with exceptional preservation—sometimes including preserved soft tissues. These formations may have resulted from carcass burial in an anoxic environment with minimal bacteria, thus delaying the decomposition of both gross and fine biological features until long after a durable impression was created in the surrounding matrix. Lagerstätten span geological time from the Neoproterozoic era to the present. Worldwide, some of the best examples of near-perfect fossilization are the Cambrian Maotianshan shales and Burgess Shale, the Devonian Hunsrück Slates and Gogo Formation, the Carboniferous Mazon Creek, the Jurassic Solnhofen limestone, the Cretaceous Santana, Yixian and Tanis formations, the Eocene Green River Formation, and the Miocene Foulden Maar.
A trace fossil, also ichnofossil, is a geological record of biological activity. Ichnology is the study of such traces, and is the work of ichnologists. Trace fossils may consist of impressions made on or in the substrate by an organism: for example, burrows, borings (bioerosion), urolites, footprints and feeding marks, and root cavities. The term in its broadest sense also includes the remains of other organic material produced by an organism—for example coprolites or chemical markers—or sedimentological structures produced by biological means—for example, stromatolites. Trace fossils contrast with body fossils, which are the fossilized remains of parts of organisms' bodies, usually altered by later chemical activity or mineralization.
Phosphatic fossilization has occurred in unusual circumstances to preserve some extremely high-resolution microfossils in which careful preparation can even reveal preserved cellular structures. Such microscopic fossils are only visible under the scanning electron microscope.
In geology, petrifaction or petrification is the process by which organic material becomes a fossil through the replacement of the original material and the filling of the original pore spaces with minerals. Petrified wood typifies this process, but all organisms, from bacteria to vertebrates, can become petrified. Petrifaction takes place through a combination of two similar processes: permineralization and replacement. These processes create replicas of the original specimen that are similar down to the microscopic level.
Geobiology is a field of scientific research that explores the interactions between the physical Earth and the biosphere. It is a relatively young field, and its borders are fluid. There is considerable overlap with the fields of ecology, evolutionary biology, microbiology, paleontology, and particularly soil science and biogeochemistry. Geobiology applies the principles and methods of biology, geology, and soil science to the study of the ancient history of the co-evolution of life and Earth as well as the role of life in the modern world. Geobiologic studies tend to be focused on microorganisms, and on the role that life plays in altering the chemical and physical environment of the pedosphere, which exists at the intersection of the lithosphere, atmosphere, hydrosphere and/or cryosphere. It differs from biogeochemistry in that the focus is on processes and organisms over space and time rather than on global chemical cycles.
Derek Ernest Gilmor Briggs is an Irish palaeontologist and taphonomist based at Yale University. Briggs is one of three palaeontologists, along with Harry Blackmore Whittington and Simon Conway Morris, who were key in the reinterpretation of the fossils of the Burgess Shale. He is the Yale University G. Evelyn Hutchinson Professor of Geology and Geophysics, Curator of Invertebrate Paleontology at Yale's Peabody Museum of Natural History, and former Director of the Peabody Museum.
The history of paleontology traces the history of the effort to understand the history of life on Earth by studying the fossil record left behind by living organisms. Since it is concerned with understanding living organisms of the past, paleontology can be considered to be a field of biology, but its historical development has been closely tied to geology and the effort to understand the history of Earth itself.
The Ediacaran biota consisted of enigmatic tubular and frond-shaped, mostly sessile organisms that lived during the Ediacaran Period. Trace fossils of these organisms have been found worldwide, and represent the earliest known complex multicellular organisms. The Ediacaran biota may have radiated in a proposed event called the Avalon explosion, after the Earth had thawed from the Cryogenian period's extensive glaciation. The biota largely disappeared with the rapid increase in biodiversity known as the Cambrian explosion. Most of the currently existing body plans of animals first appeared in the fossil record of the Cambrian rather than the Ediacaran. For macroorganisms, the Cambrian biota appears to have completely replaced the organisms that dominated the Ediacaran fossil record, although relationships are still a matter of debate.
The Burgess Shale of British Columbia is famous for its exceptional preservation of mid-Cambrian organisms. Around 40 other sites have been discovered of a similar age, with soft tissues preserved in a similar, though not identical, fashion. Additional sites with a similar form of preservation are known from the Ediacaran and Ordovician periods.
The "Cambrian substrate revolution" or "Agronomic revolution", evidenced in trace fossils, is the diversification of animal burrowing during the early Cambrian period.
The small shelly fauna, small shelly fossils (SSF), or early skeletal fossils (ESF) are mineralized fossils, many only a few millimetres long, with a nearly continuous record from the latest stages of the Ediacaran to the end of the Early Cambrian Period. They are very diverse, and there is no formal definition of "small shelly fauna" or "small shelly fossils". Almost all are from earlier rocks than more familiar fossils such as trilobites. Since most SSFs were preserved by being covered quickly with phosphate and this method of preservation is mainly limited to the Late Ediacaran and Early Cambrian periods, the animals that made them may actually have arisen earlier and persisted after this time span.
The Cambrian explosion or Cambrian radiation was an event early in the Cambrian period when most major animal phyla appeared in the fossil record. It lasted for about 13 to 25 million years and resulted in the divergence of most modern metazoan phyla. The event was accompanied by major diversification of other organisms.
Permineralization is a process of fossilization in which mineral deposits form internal casts of organisms. Carried by water, these minerals fill the spaces within organic tissue. Because of the nature of the casts, permineralization is particularly useful in studies of the internal structures of organisms, usually of plants.
The fossils of the Burgess Shale, like the Burgess Shale itself, formed around 505 million years ago in the Mid Cambrian period. They were discovered in Canada in 1886, and Charles Doolittle Walcott collected over 60,000 specimens in a series of field trips from 1909 to 1924. After a period of neglect from the 1930s to the early 1960s, new excavations and re-examinations of Walcott's collection continue to discover new species, and statistical analysis suggests discoveries will continue for the foreseeable future. Stephen Jay Gould's book Wonderful Life describes the history of discovery up to the early 1980s, although his analysis of the implications for evolution has been contested.
|
Nucleosynthesis is the process of creating new atomic nuclei from pre-existing nucleons (protons and neutrons). It is thought that the primordial nucleons themselves were formed from the quark–gluon plasma from the Big Bang as it cooled below two trillion degrees. A few minutes afterward, starting with only protons and neutrons, nuclei up to lithium and beryllium (both with mass number 7) were formed, but only in relatively small amounts. Some boron may have been formed at this time, but the process stopped before significant carbon could be formed, because this element requires a far higher product of helium density and time than were present in the short nucleosynthesis period of the Big Bang. The Big Bang fusion process essentially shut down due to drops in temperature and density as the universe continued to expand. This first process of primordial nucleosynthesis was the first type of nucleogenesis to occur in the universe.
The subsequent nucleosynthesis of the heavier elements required heavy stars and supernova explosions. This theoretically happened as hydrogen and helium from the Big Bang condensed into the first stars 500 million years after the Big Bang. The primordial elements still present on Earth that were once created in stellar nucleosynthesis range in atomic numbers from 6 (carbon) to 94 (plutonium). Synthesis of these heavier elements occurs either by nuclear fusion (including both rapid and slow multiple neutron capture) or by nuclear fission, sometimes followed by beta decay.
By contrast, many stellar processes actually tend to destroy deuterium and isotopes of beryllium, lithium, and boron which have collected in stars after their primordial formation in the Big Bang. This effective destruction happens via the transmutation of these elements to higher atomic species. Quantities of these lighter elements in the present universe are therefore thought to have been formed mainly through billions of years of cosmic ray (mostly high-energy proton) mediated breakup of heavier elements residing in interstellar gas and dust.
In addition to the major processes of primordial nucleosynthesis in the Big Bang, stellar processes, and cosmic-ray nucleosynthesis in space, many minor natural processes continue to produce small amounts of new elements on Earth. These nuclides are naturally produced on a continuing basis via the decay of long-lived primordial radionuclides (via radiogenesis), from natural nuclear reactions in cosmic ray bombardment of elements on Earth (cosmogenic nuclides), and from other natural nuclear reactions powered by particles from radioactive decay (producing nucleogenic nuclides).
The first ideas on nucleosynthesis were simply that the chemical elements were created at the beginnings of the universe, but no successful physical scenario for this could be identified. Hydrogen and helium were clearly far more abundant than any of the other elements (all the rest of which constituted less than 2% of the mass of the solar system, and presumably other star systems as well). At the same time it was clear that carbon was the next most common element, and also that there was a general trend toward abundance of light elements, especially those composed of whole numbers of helium-4 nuclei.
Arthur Stanley Eddington first suggested in 1920 that stars obtain their energy by fusing hydrogen to helium, but this idea was not generally accepted because it lacked nuclear mechanisms. In the years immediately before World War II, Hans Bethe first provided those nuclear mechanisms by which hydrogen is fused into helium. However, neither of these early works on stellar power addressed the origin of the elements heavier than helium.
Fred Hoyle's original work on the nucleosynthesis of heavier elements in stars occurred just after World War II. This work attributed the production of all elements heavier than hydrogen to processes occurring in stars during the nuclear evolution of their compositions. Hoyle proposed that hydrogen is continuously created in the universe from vacuum and energy, without need for a universal beginning.
Hoyle's work explained how the abundances of the elements increased with time as the galaxy aged. Subsequently, Hoyle's picture was expanded during the 1960s by creative contributions from William A. Fowler, Alastair G. W. Cameron, and Donald D. Clayton, and then by many others. The 1957 review paper by E. M. Burbidge, G. R. Burbidge, Fowler and Hoyle (see the reference list) is a well-known summary of the state of the field in 1957. That paper defined new processes for changing one heavy nucleus into others within individual stars, processes that could be documented by astronomers.
The Big Bang itself had been proposed in 1931, long before this period, by Georges Lemaître, a Belgian physicist and Roman Catholic priest. He suggested that the evident expansion of the Universe in forward time required that the Universe contract backwards in time, continuing until it could contract no further, bringing all the mass of the Universe into a single point, a "primeval atom", before which time and space did not exist. Hoyle later gave Lemaître's model the derisive name "Big Bang", not realizing that Lemaître's model was needed to explain the existence of deuterium and nuclides between helium and carbon, as well as the fundamentally high amount of helium present not only in stars but also in interstellar gas. As it happened, both Lemaître's and Hoyle's models of nucleosynthesis would be needed to explain the elemental abundances in the universe.
In modern theory, there are a number of astrophysical processes which are believed to be responsible for nucleosynthesis in the universe. The majority of these occur within the hot matter inside stars. The successive nuclear fusion processes which occur inside stars are known as hydrogen burning (via the proton-proton chain or the CNO cycle), helium burning, carbon burning, neon burning, oxygen burning and silicon burning. These processes are able to create elements up to iron and nickel, the region of the isotopes having the highest binding energy per nucleon. Heavier elements can be assembled within stars by a neutron capture process known as the s process or in explosive environments, such as supernovae, by a number of processes. Some of the more important of these include the r process, which involves rapid neutron captures, the rp process, which involves rapid proton captures, and the p process (sometimes known as the gamma process), which involves photodisintegration of existing nuclei.
The major types of nucleosynthesis
Big Bang nucleosynthesis
Big Bang nucleosynthesis occurred within the first three minutes of the beginning of the universe and is responsible for much of the abundance of 1H (protium), 2H (D, deuterium), 3He (helium-3), and 4He (helium-4) in the universe. Although 4He continues to be produced by other mechanisms (such as stellar fusion and alpha decay) and trace amounts of 1H continue to be produced by spallation and certain types of radioactive decay (proton emission and neutron emission), most of the mass of these isotopes in the universe, and all but the insignificant traces of the 3He and deuterium produced by rare processes such as cluster decay, are thought to have been produced in the Big Bang. The nuclei of these elements, along with some 7Li and 7Be, are believed to have been formed when the universe was between 100 and 300 seconds old, after the primordial quark–gluon plasma froze out to form protons and neutrons. Because of the very short period in which Big Bang nucleosynthesis occurred before being stopped by expansion and cooling (about 20 minutes after the Big Bang), no elements heavier than beryllium (or possibly boron) could be formed. (Elements formed during this time were in the plasma state, and did not cool to the state of neutral atoms until much later.)
Stellar nucleosynthesis
Stellar nucleosynthesis occurs in stars during the process of stellar evolution. It is responsible for the generation of elements from carbon to iron by nuclear fusion processes. Stars are the nuclear furnaces in which H and He are fused into heavier nuclei, a process which occurs via the proton-proton chain in stars like the Sun or cooler, and via the CNO cycle in stars more massive than the Sun.
Of particular importance is carbon, because its formation from He is a bottleneck in the entire process. Carbon is produced by the triple-alpha process in all stars. Carbon is also the main element used in the production of free neutrons within the stars, giving rise to the s process which involves the slow absorption of neutrons to produce elements heavier than iron and nickel (57Fe and 62Ni). Carbon and other elements formed by this process are also fundamental to life in the form that we know it.
The products of stellar nucleosynthesis are generally distributed into the universe through mass-loss episodes and stellar winds in low-mass stars, as in the planetary nebula phase of evolution, as well as through the explosive events resulting in supernovae in the case of massive stars.
The first direct proof that nucleosynthesis occurs in stars was the detection of technetium in the atmosphere of a red giant in the early 1950s, prototypical for the class of Tc-rich stars. Because technetium is radioactive, with a half-life much less than the age of the star, its abundance must reflect its creation within that star during its lifetime. Less dramatic but equally convincing evidence is the large overabundance of specific stable elements in a stellar atmosphere. A historically important case was the observation of barium abundances some 20-50 times greater than in unevolved stars, which is evidence of the operation of the s process within that star. Many modern proofs appear in the isotopic composition of stardust, solid grains that condensed from the gases of individual stars and which have been extracted from meteorites. Stardust is one component of cosmic dust. The measured isotopic compositions demonstrate many aspects of nucleosynthesis within the stars from which the stardust grains condensed.
Explosive nucleosynthesis
Explosive nucleosynthesis includes supernova nucleosynthesis, and produces the elements heavier than iron by an intense burst of nuclear reactions that typically lasts mere seconds during the explosion of the supernova core. In the explosive environments of supernovae, the elements between silicon and nickel are synthesized by fast fusion. Further nucleosynthesis processes can also occur in supernovae, such as the r process, in which the most neutron-rich isotopes of elements heavier than nickel are produced by rapid absorption of free neutrons released during the explosions. The r process is responsible for our natural cohort of radioactive elements, such as uranium and thorium, as well as the most neutron-rich isotopes of each heavy element.
Explosive nucleosynthesis occurs too rapidly for radioactive decay to decrease the number of neutrons, so that many abundant isotopes having equal even numbers of protons and neutrons are synthesized by the alpha process to produce nuclides which consist of whole numbers of helium nuclei, up to 16 (representing 64Ge). Such nuclides are stable up to 40Ca (made of 10 helium nuclei), but heavier nuclei with equal numbers of protons and neutrons are radioactive. However, the alpha process continues to influence production of isobars of these nuclides, including at least the radioactive nuclides 44Ti, 48Cr, 52Fe, 56Ni, 60Zn, and 64Ge, most of which (save 44Ti and 60Zn) are created in such abundance as to decay after the explosion to create the most abundant stable isotope of the corresponding element at each atomic weight. Thus, the corresponding most common (abundant) isotopes of elements produced in this way are 48Ti, 52Cr, 56Fe, and 64Zn. Many such decays are accompanied by emission of gamma-ray lines capable of identifying the isotope that has just been created in the explosion.
The most convincing proof of explosive nucleosynthesis in supernovae occurred in 1987, when gamma-ray lines were detected emerging from supernova 1987A. Gamma-ray lines identifying 56Co and 57Co, whose radioactive half-lives limit their age to about a year, proved that 56Fe and 57Fe were created by their radioactive parents. This nuclear astronomy was predicted in 1969 as a way to confirm explosive nucleosynthesis of the elements, and that prediction played an important role in the planning for NASA's successful Compton Gamma-Ray Observatory.
Other proofs of explosive nucleosynthesis are found within the stardust grains that condensed within the interiors of supernovae as they expanded and cooled. Stardust grains are one component of cosmic dust. In particular, radioactive 44Ti was measured to be very abundant within supernova stardust grains at the time they condensed during the supernova expansion, confirming a 1975 prediction for identifying supernova stardust. Other unusual isotopic ratios within these grains reveal many specific aspects of explosive nucleosynthesis.
Cosmic ray spallation
Cosmic ray spallation produces some of the lightest elements present in the universe (though not significant deuterium). Most notably, spallation is believed to be responsible for the generation of almost all of the 3He and the elements lithium, beryllium and boron (some 7Li and 7Be are thought to have been produced in the Big Bang). The spallation process results from the impact of cosmic rays (mostly fast protons) against the interstellar medium. These impacts fragment the carbon, nitrogen and oxygen nuclei present in the cosmic rays, and these elements in interstellar gas and dust are also fragmented when struck by cosmic-ray protons. The process results in the light elements Be, B, and Li being present in cosmic rays in a much higher proportion than in solar atmospheres, whereas H and He nuclei are represented in cosmic rays with approximately primordial abundance relative to each other.
Beryllium and boron are not significantly produced in stellar fusion processes, because the instability of any 8Be formed from two 4He nuclei prevents simple 2-particle reaction building-up of these elements.
Theories of nucleosynthesis are tested by calculating isotope abundances and comparing the results with observed abundances. Isotope abundances are typically computed from the transition rates between isotopes in a reaction network. Often these calculations can be simplified, as a few key reactions control the rate of the other reactions.
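To make the network idea concrete, here is a minimal sketch in Python: a two-step decay chain A → B → C integrated with explicit Euler steps. The species names, half-lives, and step size are illustrative assumptions, not measured values; real nucleosynthesis codes couple thousands of nuclides and use stiff solvers.

```python
import math

# Minimal abundance-network sketch: a two-step decay chain A -> B -> C.
# Half-lives are hypothetical, chosen only to illustrate the method.
half_life_A = 6.0   # hours (illustrative)
half_life_B = 2.0   # hours (illustrative)
lam_A = math.log(2) / half_life_A   # decay constants from half-lives
lam_B = math.log(2) / half_life_B

n_A, n_B, n_C = 1.0, 0.0, 0.0       # initial abundances
dt, t_end = 0.001, 24.0             # small Euler step for acceptable accuracy

t = 0.0
while t < t_end:
    flow_AB = lam_A * n_A           # transition rate: A decaying into B
    flow_BC = lam_B * n_B           # transition rate: B decaying into C
    n_A -= flow_AB * dt
    n_B += (flow_AB - flow_BC) * dt
    n_C += flow_BC * dt
    t += dt

print(f"after {t_end} h: A={n_A:.4f}, B={n_B:.4f}, C={n_C:.4f}")
```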
Minor mechanisms and processes
Amounts of certain nuclides are produced on Earth by artificial means, and for some (for example, technetium) this is their major source. However, some nuclides are also produced by a number of natural means that have continued after the primordial production of elements, discussed above, ceased. These often act to produce new elements in ways that can be used to date rocks or to trace the timing or source of geological processes. Although these processes are usually not major sources of nuclides, in the case of the short-lived naturally-occurring nuclides whose half-lives are too short to be primordial (see the list of nuclides), these processes are the entire source of the existing natural supply of the nuclide.
These mechanisms include:
- Radioactive decay leading to specific radiogenic daughter nuclides. The nuclear decay of many long-lived primordial isotopes, especially uranium-235, uranium-238, and thorium-232 produce many intermediate daughter nuclides, some of them quite short-lived, before finally decaying to isotopes of lead. The Earth's natural supply of elements like radon and polonium is via this mechanism. The atmosphere's supply of argon-40 is due mostly to the radioactive decay of potassium-40 in the time since the formation of the Earth, so most of this atmospheric argon is not primordial. In the case of alpha-decay, helium-4 is produced directly by alpha-decay, and so the helium trapped in Earth's crust is also mostly non-primordial. In other types of radioactive decay, such as cluster decay, other types of nuclei are ejected (for example, neon-20), and these eventually become newly-formed neutral atoms.
- Radioactive decay leading to spontaneous fission. This is not cluster decay, for the fission products may be split among nearly any type of atom. Uranium-235 and uranium-238 are both primordial isotopes that undergo spontaneous fission. Natural technetium and promethium are produced in this way.
- Nuclear reactions. Naturally-occurring nuclear reactions powered by radioactive decay give rise to so-called nucleogenic nuclides. This process happens when an energetic particle from a radioactive decay, often an alpha particle, reacts with a nucleus of another atom to change the nucleus into another nuclide. This process may also cause production of further subatomic particles, such as neutrons. Neutrons can also be produced in spontaneous fission and by neutron emission (a type of radioactive decay). These neutrons can then go on to produce other nuclides via neutron-induced fission, or by neutron capture. For example, some stable isotopes like neon-21 and neon-22 are produced in several routes of nucleogenic synthesis, and thus only part of their abundance is primordial.
- Nuclear reactions due to cosmic rays. By convention, these reaction-products are not termed "nucleogenic" nuclides, but rather cosmogenic nuclides. Cosmic rays continue to produce new elements on Earth by the same cosmogenic processes discussed above that produced primordial beryllium and boron. An important example is carbon-14, produced from nitrogen-14 in the atmosphere by cosmic rays. See also iodine-129 for another example.
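The decay law behind the radiogenic and cosmogenic mechanisms above is N(t) = N0·e^(−λt). As an illustration, the sketch below estimates a radiocarbon age from the fraction of carbon-14 remaining; the 5730-year half-life is the standard value for 14C, but the measured fraction is a made-up example value.

```python
import math

# Radiogenic-dating sketch from the decay law N(t) = N0 * exp(-lambda * t).
half_life_c14 = 5730.0                   # years, standard value for carbon-14
lam = math.log(2) / half_life_c14        # decay constant

measured_fraction = 0.25                 # hypothetical: 25% of original 14C remains
age = -math.log(measured_fraction) / lam # invert the decay law for t

print(f"estimated age: {age:.0f} years") # ~11460 years, i.e. two half-lives
```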
- William A. Fowler, Autobiography.
- P. W. Merrill (1952). "Spectroscopic Observations of Stars of Class S". The Astrophysical Journal 116: 21. Bibcode 1952ApJ...116...21M. doi:10.1086/145589.
- D. D. Clayton and L. R. Nittler (2004). "Astrophysics with Presolar Stardust". Annual Review of Astronomy and Astrophysics 42 (1): 39–78. Bibcode 2004ARA&A..42...39C. doi:10.1146/annurev.astro.42.053102.134022.
- D. D. Clayton, S. A. Colgate and G. J. Fishman (1969). "Gamma-ray lines from young supernova remnants". The Astrophysical Journal 155: 75–82. Bibcode 1969ApJ...155...75C. doi:10.1086/149849.
- E. M. Burbidge, G. R. Burbidge, W. A. Fowler and F. Hoyle, "Synthesis of the Elements in Stars", Rev. Mod. Phys. 29 (1957) 547.
- F. Hoyle, Monthly Notices Roy. Astron. Soc. 106, 366 (1946)
- F. Hoyle, Astrophys. J. Suppl. 1, 121 (1954)
- D. D. Clayton, "Principles of Stellar Evolution and Nucleosynthesis", McGraw-Hill, 1968; University of Chicago Press, 1983, ISBN 0-226-10952-6
- C. E. Rolfs, W. S. Rodney, Cauldrons in the Cosmos, Univ. of Chicago Press, 1988, ISBN 0-226-72457-3.
- D. D. Clayton, "Handbook of Isotopes in the Cosmos", Cambridge University Press, 2003, ISBN 0 521 823811.
- C. Iliadis, "Nuclear Physics of Stars", Wiley-VCH, 2007, ISBN 978 3 527 40602 9
Balanced Chemical Equations Questions
Chemical equations are symbolic representations of chemical reactions that express the reactants and products in terms of their respective chemical formulae. They also use symbols to represent factors such as reaction direction and the physical states of the reacting entities.
Balanced Chemical Equations Chemistry Questions with Solutions
Q1. A balanced chemical equation is in accordance with-
- Multiple proportion
- Reciprocal proportion
- Conservation of mass
- Definite proportions
Correct Answer: (c) Law of Conservation of Mass
Q2. The correct balanced equation for the reaction __C2H6O + __O2 → __CO2 + __H2O is-
- 2C2H6O + O2 → CO2 + H2O
- C2H6O + 3O2 → 2CO2 + 3H2O
- C2H6O + 2O2 → 3CO2 + 3H2O
- 2C2H6O + O2 → 2CO2 + H2O
Correct Answer: (b) C2H6O + 3O2 → 2CO2 + 3H2O
Q3. The correct balanced equation for the reaction __KNO3 + __H2CO3 → __K2CO3 + __HNO3 is-
- 2KNO3 + H2CO3 → K2CO3 + 2HNO3
- 2KNO3 + 2H2CO3 → K2CO3 + 2HNO3
- KNO3 + H2CO3 → K2CO3 + 2HNO3
- 2KNO3 + 2H2CO3 → K2CO3 + 3HNO3
Correct Answer: (a) 2KNO3 + H2CO3 → K2CO3 + 2HNO3
Q4. The correct balanced equation for the reaction __CaCl2 + __Na3PO4 → __Ca3(PO4)2 + __NaCl is-
- 2CaCl2 + 2Na3PO4 → 2Ca3(PO4)2 + NaCl
- CaCl2 + Na3PO4 → Ca3(PO4)2 + NaCl
- 3CaCl2 + 2Na3PO4 → Ca3(PO4)2 + 6NaCl
- 3CaCl2 + 2Na3PO4 → Ca3(PO4)2 + 3NaCl
Correct Answer: (c) 3CaCl2 + 2Na3PO4 → Ca3(PO4)2 + 6NaCl
Q5. The correct balanced equation for the reaction __TiCl4 + __H2O → __TiO2 + __HCl is-
- TiCl4 + 2H2O → TiO2 + 2HCl
- TiCl4 + 2H2O → TiO2 + 4HCl
- 2TiCl4 + H2O → 2TiO2 + HCl
- TiCl4 + 4H2O → TiO2 + 4HCl
Correct Answer: (b) TiCl4 + 2H2O → TiO2 + 4HCl
Q6. Write a balanced equation for the reaction of molecular nitrogen (N2) and oxygen (O2) to form dinitrogen pentoxide.
Answer. The equation for the reaction is-
N2 + O2 → N2O5 (unbalanced equation)
The balanced chemical equation is-
2N2 + 5O2 → 2N2O5
Q7. On what basis is a chemical equation balanced?
Answer. A chemical equation is balanced using the law of conservation of mass, which states that matter can be neither created nor destroyed.
Q8. What is the balanced equation for the reaction of photosynthesis?
Answer. The balanced chemical equation for the reaction of photosynthesis is-
6CO2 + 6H2O → C6H12O6 + 6O2
Q9. "We must balance a skeletal chemical equation." Give a reason to justify the statement.
Answer. Skeletal chemical equations are unbalanced. Due to the law of conservation of mass, which states that matter cannot be created or destroyed, we must balance the chemical equation. As a result, each chemical reaction must be described by a balanced chemical equation.
Q10. What does it mean to say an equation is balanced? Why is it important for an equation to be balanced?
Answer. The chemical equation must be balanced in order to obey the law of conservation of mass. A chemical equation is said to be balanced when the number of different atoms of elements in the reactants side equals the number of atoms in the products side. Balancing chemical equations is a trial-and-error process.
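Besides trial and error, a balanced equation can be found systematically: write each element's atom counts as a matrix (negating the product columns) and look for an integer vector in its null space. Below is a hedged sketch of this idea using sympy, applied to the ethanol-combustion reaction from Q2; the matrix layout is my own choice, not a standard API.

```python
from math import lcm
from sympy import Matrix

# Balance C2H6O + O2 -> CO2 + H2O by linear algebra.
# Columns are species (products negated); rows count C, H, O atoms.
A = Matrix([
    [2, 0, -1,  0],   # carbon
    [6, 0,  0, -2],   # hydrogen
    [1, 2, -2, -1],   # oxygen
])

coeffs = A.nullspace()[0]                      # balanced ratios, as rationals
coeffs = coeffs * lcm(*[c.q for c in coeffs])  # scale to smallest whole numbers
if coeffs[0] < 0:                              # fix the overall sign if needed
    coeffs = -coeffs

print(list(coeffs))   # [1, 3, 2, 3] -> C2H6O + 3O2 -> 2CO2 + 3H2O
```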
Q11. What is meant by the skeletal type chemical equation? What does it represent? Using the equation for electrolytic decomposition of water, differentiate between a skeletal chemical equation and a balanced chemical equation.
Answer. Skeletal equations are those in which chemical formulae are used to indicate the substances involved in a chemical reaction.
Skeletal equations do not obey the law of conservation of mass.
Balanced chemical equations do obey it: the number of atoms of each element on the reactant side equals the number on the product side.
H2O → H2 + O2 : skeletal equation
2H2O → 2H2 + O2 : balanced chemical equation
Q12. Write the balanced chemical equation for the following reactions:
- Phosphorus burns in the presence of chlorine to form phosphorus pentachloride.
- Burning of natural gas.
- The process of respiration.
Answer.
- P4 + 10Cl2 → 4PCl5
- CH4 + 2O2 → CO2 + 2H2O + heat energy
- C6H12O6 + 6O2 + 6H2O → 6CO2 + 12H2O + energy
Q13. What is the distinction between a balanced equation and a skeleton equation?
Answer. The primary distinction is that a balanced equation provides the actual number of molecules of each reactant and product involved in the chemical reaction, whereas a skeleton equation only lists the reactants and products without their quantities. In other words, a balanced equation contains the necessary stoichiometric coefficients, whereas a skeleton equation does not.
Q14. Balance the equations
- HNO3 + Ca(OH)2 → Ca(NO3)2 + H2O
- NaCl + AgNO3 → AgCl + NaNO3
- BaCl2 + H2SO4 → BaSO4 + HCl
Answer. The balanced chemical equations for the reactions are as follows-
- 2HNO3 + Ca(OH)2 → Ca(NO3)2 + 2H2O
- NaCl + AgNO3 → AgCl + NaNO3 (already balanced as written)
- BaCl2 + H2SO4 → BaSO4 + 2HCl
Q15. Write a balanced molecular equation describing each of the following chemical reactions.
- Solid calcium carbonate is heated and decomposes to solid calcium oxide and carbon dioxide gas.
- Gaseous butane, C4H10, reacts with diatomic oxygen gas to yield gaseous carbon dioxide and water vapour.
- Aqueous solutions of magnesium chloride and sodium hydroxide react to produce solid magnesium hydroxide and aqueous sodium chloride.
- Water vapour reacts with sodium metal to produce solid sodium hydroxide and hydrogen gas.
Answer.
- CaCO3 → CaO + CO2
On heating, 1 mol of solid calcium carbonate yields 1 mol of calcium oxide and 1 mol of carbon dioxide gas.
- 2C4H10 + 13O2 → 8CO2 + 10H2O
When 2 moles of butane gas react with 13 moles of diatomic oxygen gas, 8 moles of carbon dioxide gas and 10 moles of water vapour are produced.
- MgCl2 + 2NaOH → 2NaCl + Mg(OH)2
1 mol of magnesium chloride reacts with 2 moles of sodium hydroxide to produce 2 moles of aqueous sodium chloride and 1 mol of solid magnesium hydroxide.
- 2H2O + 2Na → 2NaOH + H2
2 moles of water vapour react with 2 moles of sodium metal, yielding 2 moles of solid sodium hydroxide and 1 mol of hydrogen gas.
Practise Questions on Balanced Chemical Equations
Balance the following equations-
1. (NH4)2Cr2O7(s) → Cr2O3(s) + N2(g) + H2O(g)
2. Ca(OH)2 + H3PO4 → Ca3(PO4)2 + H2O
3. FeCl3 + NH4OH → Fe(OH)3 + NH4Cl
4. Al2(CO3)3 + H3PO4 → AlPO4 + CO2 + H2O
5. S8 + F2 → SF6
How to Balance Equations - Printable Worksheets
A balanced chemical equation gives the number and type of atoms participating in a reaction, the reactants, the products, and the direction of the reaction. Balancing an unbalanced equation is mostly a matter of making certain that mass and charge are balanced on the reactant and product sides of the reaction arrow. This is a collection of printable worksheets to practice balancing equations. The printable worksheets are provided in PDF format with separate answer keys.
- Balancing Chemical Equations - Worksheet #1 / Answers #1
- Balancing Chemical Equations - Worksheet #2 / Answers #2
- Balancing Chemical Equations - Worksheet #3 / Answers #3
- Balancing Equations - Worksheet #4 / Answer Key #4
I also offer printable worksheets for balancing equations on my personal site. The printables are also available as PDF files:
- Balancing Equation Practice Sheet [answer sheet]
- Another Equation Worksheet [answer sheet]
- Yet Another Printable Worksheet [answer key]
You may also wish to review the step-by-step tutorial on how to balance a chemical equation.
Online Practice Quizzes
Another way to practice balancing equations is by taking a quiz.
- Coefficients in Balanced Equations Quiz
- Balance Chemical Equations Quiz
Demand Analysis Class 12 Practice Paper with Answers
What is Demand?
- Goods are demanded because they have utility
- Demand is the quantity of a commodity that a person is ready to buy at a particular price during a given period of time.
- In economics the term ‘Demand’ refers to a desire for a commodity backed by the ability to pay and willingness to pay for it.
- Demand = desire + ability to pay + willingness to pay
Explain the Types of Demand
- Direct demand: – When goods and services are demanded to satisfy human wants directly, it is called direct demand. E.g. demand for food, clothes, computer, mobile, etc.
- Indirect demand / Derived demand: – When demand for one commodity gives rise to the demand for another commodity, it’s called indirect demand. E.g. raw materials, labour, machines, etc. are not demanded to serve directly but they are needed for the production of goods having direct demand.
- Joint demand / Complementary demand: – When two or more commodities are demanded at the same time to satisfy a single want, it is called joint demand. For e.g. car and petrol, pen and paper, toothbrush and toothpaste, etc.
- Composite demand: – When one commodity is demanded for a number of uses, it is called composite demand. For e.g. electricity is demanded for lighting, cooking, etc., or milk is used for making tea, coffee, ice-creams, etc.
- Competitive demand: – When two goods are close substitutes of each other or when the demand for a commodity competes with its substitutes, it’s called competitive demand. E.g. tea or coffee, Pepsi or cola, etc.
What are the Determinants of Demand? / Explain the Factors Affecting Demand
- Price: – Price is one of the most important factors that affect demand. When the price rises demand falls and when the price falls demand rises.
- Income: – Income is directly related to demand. If consumer income rises demand also rises and if consumer incomes fall demand also falls.
- Size of population: – An increase in the size of the population leads to an increase in market demand for goods and services.
- Tastes, habits and preferences: – A change in taste also changes the demand for a commodity. E.g. the demand for fast food has increased in recent years.
- Price of complementary goods: – If goods are jointly demanded, like cars and petrol, a rise in the price of cars will reduce the demand for both cars and petrol.
- Advertisement: – Powerful advertisements create demand for a product. E.g. consumers buy new products like shampoos and soaps due to attractive advertisements.
- Weather condition: – Weather also affects demand. E.g. raincoats have demand only in the rainy season or more ice-creams in summer.
- Expectation about future prices: – If consumers expect a fall in the price of a commodity in the near future, they will demand less at the present price and vice versa. It shows that expectations about future prices affect demand.
- Taxation policy: – Government’s taxation policy affects demand. For e.g., a change in income tax will change consumers’ disposable income and therefore demand.
Q.3. Distinguish between:
Desire and demand.
|Desire|Demand|
|---|---|
|Desire refers to a feeling or a want for something.|Demand refers to a desire for a commodity backed by the ability and willingness to pay for it.|
|For example, the desire to own a luxurious sports car.|For example, demand for smartphones, railway tickets, and electric vehicles (EVs).|
|Desire may or may not be backed by financial power.|Demand must be backed by financial power.|
|Desires are personal wants or preferences that may not consider economic factors, such as price or affordability.|Demand takes economic considerations into account: the willingness and ability to pay for a product or service at a specific price point.|
Expansion of Demand and Contraction of Demand
|Expansion of Demand|Contraction of Demand|
|---|---|
|Demand for a commodity increases due to a fall in price alone, other factors remaining constant.|Demand for a commodity decreases due to a rise in price alone, other factors remaining constant.|
|When the price of a product decreases, consumers are willing and able to buy more of it.|When the price of a product increases, consumers are less willing or able to buy as much of it.|
|Expansion of demand is often observed when there is a sale, price reduction, or promotional offer on a product.|Contraction of demand can happen when there is an increase in production costs, scarcity of resources, or when the product becomes less affordable for consumers.|
Increase in demand and Decrease in demand.
|Increase in Demand|Decrease in Demand|
|---|---|
|An increase in demand means a rise in demand due to changes in factors other than price, price remaining constant.|A decrease in demand means a fall in demand due to changes in factors other than price, price remaining constant.|
|An increase in demand can be caused by factors such as a rise in consumer income or a change in consumer preferences or tastes.|A decrease in demand can be caused by factors such as a fall in consumer income, a change in consumer preferences, or the availability of substitute products.|
Define the Law of Demand, Explain its Assumptions and Exceptions
- The law of demand was introduced by Prof. Alfred Marshall in his book, ‘Principles of Economics’, which was published in 1890.
- The law explains the functional relationship between price and quantity demanded. Generally, most people demand more when the price is less or falls and demand less when the price is higher or rises.
Statement of law:–
- According to Prof. Alfred Marshall, “Other things being equal, higher the price of a commodity, smaller is the quantity demanded and lower the price of a commodity, larger is the quantity demanded.”
- In simple words, the law of demand shows the functional relationship between price and quantity demanded.
- D = f(P), where D = demand, P = price, and f denotes the functional relationship.
The Law of Demand can be explained with the help of the following demand schedule and demand curve.
|Price per Kg. (Rs.)||Demand (Kg.)|
Explanation of the curve:-
- The X-axis shows the quantity of the commodity demanded and the Y-axis shows the price of the commodity.
- The demand curve slopes downwards from left to right.
- The demand curve shows the inverse relationship between price and quantity demanded.
- This means that at a higher price less is demanded, and at a lower price more is demanded.
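As a small illustration of D = f(P), the sketch below tabulates a hypothetical linear demand function in Python; the coefficients and prices are invented for illustration and are not market data.

```python
# Hypothetical linear demand function illustrating the inverse
# price-demand relationship; the numbers are made up.
def quantity_demanded(price):
    return max(0, 100 - 10 * price)  # higher price, lower quantity demanded

print("Price (Rs./kg) | Demand (kg)")
for price in [2, 4, 6, 8]:
    print(f"{price:>14} | {quantity_demanded(price):>11}")
```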
Assumptions of Law of Demand
- No change in consumer income: – It assumes that there is no change in consumer income because if income increases consumers will demand more quantity even at a higher price.
- Size of population remains constant: – There should not be any change in the size of the population. Because a change in population will bring about a change in demand and vice versa.
- No change in the expectation of future price changes: – There should not be any change in expectations about the prices of goods in the future. If consumers expect that prices will rise or fall in the future, they will change their present demand even though the current price is constant.
- No change in the price of substitute and complementary goods: – The price of substitute and complementary goods should remain the same. For e.g. if the price of tea rises, its demand will decrease but the demand for coffee will increase.
- Government policy remains constant: – No change in direct and indirect taxes imposed by the government. A change in income tax may cause changes in demand and vice-versa.
- No change in weather conditions: – Changes in weather conditions would affect the demand for certain goods, such as raincoats, woollen clothes, etc. So it is assumed that weather conditions remain unchanged.
Exception of Law of demand:
- Giffen goods: – Inferior goods like bread, potatoes, etc. are those goods whose demand does not rise even if their price falls. Sir Robert Giffen observed in 19th-century England that when the price of inferior goods like bread fell, poor people purchased only a small quantity of bread. This is because when the price fell, the real income of consumers increased and they bought more of some other commodities instead of demanding more bread. This is known as Giffen's paradox.
- Speculation: – When people speculate about a rise in the price of goods in the future, they may buy more at the existing higher price. Likewise, when people speculate about a fall in the price of goods in the future, they will not buy more at the existing lower prices; they will wait for the price to fall further.
- Prestige goods: – Rich people buy costly things like diamonds, higher priced motor cars, and bungalows, etc. when their price is higher just to show off in society and vice-versa.
- Ignorance: – If the price of a product falls and if people are not aware of that, they do not buy more.
- Fashion: – if a commodity goes out of fashion, people do not buy more even if the price falls.
- Illusion: – With a fall in price, sometimes consumers feel that the quality of the product is low and they do not want to buy more.
- Habitual goods: – Due to habitual consumption certain goods like tobacco, cigarettes etc. are purchased even if prices are rising. Thus it is an exception.
5 editions of Half You Heard of Fractions? found in the catalog.
|Statement|by Thomas K. and Heather Adamson|
|Series|A+ Books. Fun with Numbers|
|Contributions|Adamson, Heather, 1974-|
|LC Classifications|QA117 .A246 2012|
|ISBN 10|9781429675567, 9781429678605|
|LC Control Number|2011043564|
Introduce Fractions. Elementary students are naturally concerned with fairness and getting the same size, or equal size, of a treat. This is an excellent place to start. It is usually best to show an answer using the simplest fraction (1/2 in this case); that is called simplifying, or reducing, the fraction. We call the top number the numerator: it is the number of parts we have. We call the bottom number the denominator: it is the number of parts the whole is divided into.
What is half of 6? 3. Therefore, 3/6 is equal to 1/2. If we have 4 out of 6, is it less than half, equal to half, or more than half? The students agree that 4/6 is more than half because 4 is bigger than 3. Next, we need to look at the fraction 3/9: nine is an odd number, so taking half of it leaves a remainder. Simplify improper fractions into mixed or whole numbers. Some fractions can simply be divided into a whole number, while others will not divide evenly; numbers that don't divide evenly must be rewritten as a mixed number. To simplify an improper fraction, first divide the numerator by the denominator. For example, for the fraction 10/3, divide 10 by 3: 3 goes into 10 three times (3 × 3 = 9) with remainder 1, giving the mixed number 3 1/3.
Whole fractions. So far, you've learned that a fraction is a part of a whole. For example, 3/4 means you have three parts out of four parts. What if you had a fraction like 8/8? In this example, we have eight parts out of eight parts. If the top number and the bottom number of a fraction are the same, then the fraction is equal to one, because you have every part of the whole. But "half" and "half of" are not the same. A fraction is a number: one half is a number, just as two and seventeen and ninety-eight point six are numbers. One half is more than zero and less than one (in fact, halfway between those other numbers). One and a half is another number, halfway between one and two.
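For readers who want to check these examples mechanically, Python's built-in fractions module mirrors the ideas above; the snippet is a small illustration and is not part of the book.

```python
from fractions import Fraction

# Fraction reduces to lowest terms automatically, so these mirror
# the examples discussed above.
print(Fraction(3, 6))                   # 1/2: simplifying/reducing a fraction
print(Fraction(8, 8))                   # 1: same top and bottom equals one whole
print(Fraction(4, 6) > Fraction(1, 2))  # True: 4/6 is more than half
print(Fraction(1, 2) * Fraction(3, 4))  # 3/8: half of a fraction
print(10 // 3, 10 % 3)                  # 3 1: the mixed number 3 1/3 for 10/3
```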
Half You Heard of Fractions? (Fun with Numbers), by Thomas K. Adamson and Heather Adamson, illustrated by Tamara Olson. "Uses simple text and photographs to explain fractions" (publisher's description).
The books in this series use familiar objects such as blocks, toy cars, and cupcakes to help kids practice number concepts and solve simple problems. The back of each book has an answer key, a glossary, a couple of extra problems, and a further-resources list. Want to split an apple? Half for each. That's a fraction. A fraction is a part of something. There are many other fractions, from big to small. Find out how we use fractions every day in fun and surprising ways. Students will see multiple representations of modeled fractions in this book, along with examples of how we encounter fractions in real-life settings.
Other fraction read-alouds pair well with it. The Hershey's Milk Chocolate Bar Fractions Book and Apple Fractions by Jerry Pallotta are great for extension activities, many of which are suggested at the back of the book. In Inchworm and a Half by Elinor J. Pinczes, an inchworm happily measures various vegetables in the garden until one day she discovers that she cannot measure a cucumber evenly.
Inchworm and a Half by Elinor J. Pinczes In this story an inchworm happily measures various vegetables in the garden until one day she discovers that she cannot measure a cucumber that is more than two, but. The Book of Fractions Reading or writing fractions in words 7 F Write the following fractions in words: 1.
You can use words to refer to a part of a whole. So one whole has: 2 halves 3 thirds 4 quarters 5 fifths 6 sixths 7 sevenths 8 eighths 9 ninths 10 tenths 11 elevenths 12 twelfths 13 thirteenths 20 twentieths 30 thirtieths 50 fiftieths File Size: KB.
When calculating half of a fraction, you are finding a fraction of a fraction. Fractions are composed of two integers, one stacked upon the other with a dash separating them. These two numbers, the top one termed the numerator and the bottom the denominator, make up the fraction. To take half of a fraction, multiply it by 1/2, which is the same as doubling the denominator.
Teacher Resources are online instructional tools created by teachers for teachers to help integrate trade books seamlessly into the classroom.
Among many great features, Teacher Resources include outside links to diverse media and provide information about text complexity. You can break numbers apart to make multiplying easier. "Talk with your neighbor about how you could apply this statement to the problem six times one-half." After a few moments, I called on Brendan. He said, "It works. You could break the six into twos, and then you do two times one-half three times. Two times one-half is one." Three ones make three, so six times one-half is three. Great intro to fractions. The book reiterates that if you were a fraction you would be part of a whole. I would take that concept and have students use it in an activity with hands-on manipulatives such as burger fractions, pizza fractions, fraction snap cubes and similar objects.
Using measuring cups and spoons when you cook or bake is another way to introduce fractions. Kids LOVE to measure things: you can show them how 1 cup of water is equal to three 1/3 cups of water. That concept of adding smaller fractions to create a whole item is fun practice for them. Most children will have heard words like "half", since they can be used in many contexts, so it's as much a vocabulary-building exercise as a mathematical one. Using fruit fractions, talk about the vocabulary of whole, halves and quarters as children look at the fraction pieces and place them on top of the "whole" fruit mat.
Fractions - Kirstin Sterling. Half You Heard Of Fractions? - Thomas K Adamson. Give Me Half - Stuart J Murphy. Breakfast Around The World - Joy Cowley. Inchworm and a Half - Elinor J Pinczes. The Watermelon Seed - Greg Pizzoli. Lorenzo, The Pizza Loving Lobster - Claire London.If you are serious about learning fractions, here is what you will learn in this fractions ebook: All the essentials of number theory that are crucial in understanding fractions Avoid common pitfalls and discover every trick and strategy that will help you master fractions.
|
In arithmetic, long division is a standard division algorithm, suitable for dividing multi-digit numbers written in positional (Hindu-Arabic) notation, that is simple enough to perform by hand. It breaks down a division problem into a series of easier steps.
As in all division problems, one number, called the dividend, is divided by another, called the divisor, producing a result called the quotient. It enables computations involving arbitrarily large numbers to be performed by following a series of simple steps. The abbreviated form of long division is called short division, which is almost always used instead of long division when the divisor has only one digit. Chunking (also known as the partial quotients method or the hangman method) is a less mechanical form of long division prominent in the UK which contributes to a more holistic understanding of the division process.
Inexpensive calculators and computers have become the most common way to solve division problems, eliminating a traditional mathematical exercise, and decreasing the educational opportunity to show how to do so by paper and pencil techniques. (Internally, those devices use one of a variety of division algorithms, the faster ones among which rely on approximations and multiplications to achieve the tasks). In the United States, long division has been especially targeted for de-emphasis, or even elimination from the school curriculum, by reform mathematics, though traditionally introduced in the 4th or 5th grades.
In English-speaking countries, long division does not use the division slash ⟨∕⟩ or division sign ⟨÷⟩ symbols but instead constructs a tableau. The divisor is separated from the dividend by a right parenthesis ⟨)⟩ or vertical bar ⟨|⟩; the dividend is separated from the quotient by a vinculum (i.e., an overbar). The combination of these two symbols is sometimes known as a long division symbol or division bracket. It developed in the 18th century from an earlier single-line notation separating the dividend from the quotient by a left parenthesis.
The process is begun by dividing the left-most digit of the dividend by the divisor. The quotient (rounded down to an integer) becomes the first digit of the result, and the remainder is calculated (this step is notated as a subtraction). This remainder carries forward when the process is repeated on the following digit of the dividend (notated as 'bringing down' the next digit to the remainder). When all digits have been processed and no remainder is left, the process is complete.
An example is shown below, representing the division of 500 by 4 (with a result of 125).
  125      (Explanations)
4)500
  4        (4 × 1 = 4)
  10       (5 - 4 = 1)
   8       (4 × 2 = 8)
   20      (10 - 8 = 2)
   20      (4 × 5 = 20)
    0      (20 - 20 = 0)
A more detailed breakdown of the steps goes as follows:
- Find the shortest sequence of digits starting from the left end of the dividend, 500, that the divisor 4 goes into at least once. In this case, this is simply the first digit, 5. The largest number that the divisor 4 can be multiplied by without exceeding 5 is 1, so the digit 1 is put above the 5 to start constructing the quotient.
- Next, the 1 is multiplied by the divisor 4, to obtain the largest whole number that is a multiple of the divisor 4 without exceeding the 5 (4 in this case). This 4 is then placed under and subtracted from the 5 to get the remainder, 1, which is placed under the 4 under the 5.
- Afterwards, the first as-yet unused digit in the dividend, in this case the first digit 0 after the 5, is copied directly underneath itself and next to the remainder 1, to form the number 10.
- At this point the process is repeated enough times to reach a stopping point: The largest number by which the divisor 4 can be multiplied without exceeding 10 is 2, so 2 is written above as the second leftmost quotient digit. This 2 is then multiplied by the divisor 4 to get 8, which is the largest multiple of 4 that does not exceed 10; so 8 is written below 10, and the subtraction 10 minus 8 is performed to get the remainder 2, which is placed below the 8.
- The next digit of the dividend (the last 0 in 500) is copied directly below itself and next to the remainder 2 to form 20. Then the largest number by which the divisor 4 can be multiplied without exceeding 20, which is 5, is placed above as the third leftmost quotient digit. This 5 is multiplied by the divisor 4 to get 20, which is written below and subtracted from the existing 20 to yield the remainder 0, which is then written below the second 20.
- At this point, since there are no more digits to bring down from the dividend and the last subtraction result was 0, we can be assured that the process finished.
If the last remainder when we ran out of dividend digits had been something other than 0, there would have been two possible courses of action:
- We could just stop there and say that the dividend divided by the divisor is the quotient written at the top with the remainder written at the bottom, and write the answer as the quotient followed by a fraction that is the remainder divided by the divisor.
- We could extend the dividend by writing it as, say, 500.000... and continue the process (using a decimal point in the quotient directly above the decimal point in the dividend), in order to get a decimal answer, as in the following example.
   31.75
4)127.00
  12         (12 ÷ 4 = 3)
   07        (0 remainder, bring down next figure)
    4        (7 ÷ 4 = 1 r 3)
    3.0      (bring down 0 and the decimal point)
    2.8      (7 × 4 = 28, 30 ÷ 4 = 7 r 2)
      20     (an additional zero is brought down)
      20     (5 × 4 = 20)
       0
In this example, the decimal part of the result is calculated by continuing the process beyond the units digit, "bringing down" zeros as being the decimal part of the dividend.
This example also illustrates that, at the beginning of the process, a step that produces a zero can be omitted. Since the first digit 1 is less than the divisor 4, the first step is instead performed on the first two digits 12. Similarly, if the divisor were 13, one would perform the first step on 127 rather than 12 or 1.
Basic procedure for long division of n ÷ m
- Find the location of all decimal points in the dividend n and divisor m.
- If necessary, simplify the long division problem by moving the decimals of the divisor and dividend by the same number of decimal places, to the right (or to the left), so that the decimal of the divisor is to the right of the last digit.
- When doing long division, keep the numbers lined up straight from top to bottom under the tableau.
- After each step, be sure the remainder for that step is less than the divisor. If it is not, there are three possible problems: the multiplication is wrong, the subtraction is wrong, or a greater quotient is needed.
- In the end, the remainder, r, is added to the growing quotient as a fraction, r/m.
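The procedure above translates directly into a short program. The following Python sketch works digit by digit in base 10, printing the partial quotient and remainder at each step; the final assert checks the invariant q × m + r = n discussed in the next subsection. The function name and output format are my own choices.

```python
def long_division(n, m):
    """Digit-by-digit long division of n by m, printing each step.

    A sketch of the procedure above for a non-negative integer dividend
    and a positive divisor; returns (quotient, remainder).
    """
    quotient, remainder = 0, 0
    for digit in str(n):
        remainder = remainder * 10 + int(digit)   # "bring down" the next digit
        q_digit = remainder // m                  # largest multiple of m that fits
        remainder -= q_digit * m                  # step remainder, always < m
        quotient = quotient * 10 + q_digit
        print(f"digit {digit}: quotient so far {quotient}, remainder {remainder}")
    assert quotient * m + remainder == n          # invariant: q * m + r = n
    return quotient, remainder

print(long_division(500, 4))       # (125, 0)
print(long_division(1260257, 37))  # (34061, 0)
```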
Invariant property and correctness
The basic presentation of the steps of the process (above) focuses on what steps are to be performed, rather than on the properties of those steps that ensure the result will be correct (specifically, that q × m + r = n, where q is the final quotient and r the final remainder). A slight variation of presentation requires more writing, and requires that we change, rather than just update, digits of the quotient, but can shed more light on why these steps actually produce the right answer by allowing evaluation of q × m + r at intermediate points in the process. This illustrates the key property used in the derivation of the algorithm (below).
Specifically, we amend the above basic procedure so that we fill the space after the digits of the quotient under construction with 0's, to at least the 1's place, and include those 0's in the numbers we write below the division bracket.
This lets us maintain an invariant relation at every step: q × m + r = n, where q is the partially-constructed quotient (above the division bracket) and r the partially-constructed remainder (bottom number below the division bracket). Note that, initially q=0 and r=n, so this property holds initially; the process reduces r and increases q with each step, eventually stopping when r<m if we seek the answer in quotient + integer remainder form.
Revisiting the 500 ÷ 4 example above, we find
  125      (q, changes from 000 to 100 to 120 to 125 as per notes below)
4)500
  400      (4 × 100 = 400)
  100      (500 - 400 = 100; now q=100, r=100; note q×4+r = 500.)
   80      (4 × 20 = 80)
   20      (100 - 80 = 20; now q=120, r= 20; note q×4+r = 500.)
   20      (4 × 5 = 20)
    0      (20 - 20 = 0; now q=125, r= 0; note q×4+r = 500.)
Example with multi-digit divisor
A divisor of any number of digits can be used. In this example, 1260257 is to be divided by 37. First the problem is set up as follows:

37)1260257
Digits of the number 1260257 are taken until a number greater than or equal to 37 occurs. So 1 and 12 are less than 37, but 126 is greater. Next, the greatest multiple of 37 less than or equal to 126 is computed. So 3 × 37 = 111 < 126, but 4 × 37 > 126. The multiple 111 is written underneath the 126 and the 3 is written on the top where the solution will appear:
    3
37)1260257
   111
Note carefully which place-value column these digits are written into. The 3 in the quotient goes in the same column (ten-thousands place) as the 6 in the dividend 1260257, which is the same column as the last digit of 111.
The 111 is then subtracted from the line above, ignoring all digits to the right:
    3
37)1260257
   111
    15
Now the digit from the next smaller place value of the dividend is copied down and appended to the result 15:
    3
37)1260257
   111
    150
The process repeats: the greatest multiple of 37 less than or equal to 150 is subtracted. This is 148 = 4 × 37, so a 4 is added to the top as the next quotient digit. Then the result of the subtraction is extended by another digit taken from the dividend:
    34
37)1260257
   111
    150
    148
     22
The greatest multiple of 37 less than or equal to 22 is 0 × 37 = 0. Subtracting 0 from 22 gives 22; we often don't write the subtraction step. Instead, we simply take another digit from the dividend:
    340
37)1260257
   111
    150
    148
     225
The process is repeated until 37 divides the last line exactly:
    34061
37)1260257
   111
    150
    148
     225
     222
      37
Mixed mode long division
        mi      yd    ft    in
         1     634     1     9   r. 15"
  37)   50  -  600  -  0  -  0
        37   22880    66   348
        13   23480    66   348
      1760     222    37   333
     22880     128    29    15
     =====     111   348    ==
               170   ===
               148
                22
                ==
Each of the four columns is worked in turn. Starting with the miles: 50/37 = 1 remainder 13. No further division is possible, so perform a long multiplication by 1,760 to convert miles to yards, the result is 22,880 yards. Carry this to the top of the yards column and add it to the 600 yards in the dividend giving 23,480. Long division of 23,480 / 37 now proceeds as normal yielding 634 with remainder 22. The remainder is multiplied by 3 to get feet and carried up to the feet column. Long division of the feet gives 1 remainder 29 which is then multiplied by twelve to get 348 inches. Long division continues with the final remainder of 15 inches being shown on the result line.
Interpretation of decimal results
When the quotient is not an integer and the division process is extended beyond the decimal point, one of two things can happen:
- The process can terminate, which means that a remainder of 0 is reached; or
- A remainder could be reached that is identical to a previous remainder that occurred after the decimal points were written. In the latter case, continuing the process would be pointless, because from that point onward the same sequence of digits would appear in the quotient over and over. So a bar is drawn over the repeating sequence to indicate that it repeats forever (i.e., every rational number is either a terminating or repeating decimal).
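Both outcomes can be detected mechanically: continue the division, remembering each remainder seen after the decimal point; reaching 0 means the decimal terminates, while meeting a previous remainder marks the start of the repetend. Here is a Python sketch under those assumptions (the function name and the digit cap are my own choices):

```python
def decimal_expansion(n, m, max_digits=50):
    """Continue long division past the decimal point.

    Returns (integer part, non-repeating digits, repeating digits).
    Stops when a remainder repeats (repetend found) or reaches 0
    (the decimal terminates).
    """
    integer_part, r = divmod(n, m)
    seen = {}          # remainder -> position where it first occurred
    digits = []
    while r != 0 and r not in seen and len(digits) < max_digits:
        seen[r] = len(digits)
        r *= 10                       # bring down a zero
        digits.append(r // m)
        r %= m
    if r == 0:
        return integer_part, digits, []                  # terminating decimal
    start = seen[r]
    return integer_part, digits[:start], digits[start:]  # repeating decimal

print(decimal_expansion(127, 4))   # (31, [7, 5], [])            -> 31.75
print(decimal_expansion(1, 7))     # (0, [], [1, 4, 2, 8, 5, 7]) -> 0.142857...
```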
Notation in non-English-speaking countries
China, Japan, Korea use the same notation as English-speaking nations including India. Elsewhere, the same general principles are used, but the figures are often arranged differently.
In Latin America (except Argentina, Bolivia, Mexico, Colombia, Paraguay, Venezuela, Uruguay and Brazil), the calculation is almost exactly the same, but is written down differently as shown below with the same two examples used above. Usually the quotient is written under a bar drawn under the divisor. A long vertical line is sometimes drawn to the right of the calculations.
500 ÷ 4 = 125    (Explanations)
  4              (4 × 1 = 4)
  10             (5 - 4 = 1)
   8             (4 × 2 = 8)
   20            (10 - 8 = 2)
   20            (4 × 5 = 20)
    0            (20 - 20 = 0)
127 ÷ 4 = 31.75
124
  30     (bring down 0; decimal to quotient)
  28     (7 × 4 = 28)
  20     (an additional zero is added)
  20     (5 × 4 = 20)
   0
In Mexico, the English-speaking world notation is used, except that only the result of the subtraction is annotated and the calculation is done mentally, as shown below:
  125     (Explanations)
4)500
  10      ( 5 - 4 = 1)
   20     (10 - 8 = 2)
    0     (20 - 20 = 0)
In Bolivia, Brazil, Paraguay, Venezuela, French-Speaking Canada, Colombia, and Peru, the European notation (see below) is used, except that the quotient is not separated by a vertical line, as shown below:
 127|4
−124 31,75
  30
 −28
  20
 −20
   0
In Spain, Italy, France, Portugal, Lithuania, Romania, Turkey, Greece, Belgium, Belarus, Ukraine, and Russia, the divisor is to the right of the dividend, and separated by a vertical bar. The division also occurs in the column, but the quotient (result) is written below the divider, and separated by the horizontal line. The same method is used in Iran, Vietnam, and Mongolia.
 127|4
−124|31,75
  30
 −28
  20
 −20
   0
In Cyprus, as well as in France, a long vertical bar separates the dividend and subsequent subtractions from the quotient and divisor, as in the example below of 6359 divided by 17, which is 374 with a remainder of 1.
6359|17
 −51 |374
 125 |
−119 |
  69|
 −68|
   1|
Decimal numbers are not divided directly; instead, the dividend and divisor are multiplied by a power of ten so that the division involves two whole numbers. Therefore, if one were dividing 12,7 by 0,4 (commas being used instead of decimal points), the dividend and divisor would first be changed to 127 and 4, and then the division would proceed as above.
In Austria, Germany and Switzerland, the notational form of a normal equation is used. <dividend> : <divisor> = <quotient>, with the colon ":" denoting a binary infix symbol for the division operator (analogous to "/" or "÷"). In these regions the decimal separator is written as a comma. (cf. first section of Latin American countries above, where it's done virtually the same way):
127 : 4 = 31,75
−12
 07
 −4
  30
 −28
   20
  −20
    0
In the Netherlands, the following notation is used:
12 / 135 \ 11,25
     12
      15
      12
       30
       24
        60
        60
         0
In Finland, the Italian method detailed above was replaced by the Anglo-American one in the 1970s. In the early 2000s, however, some textbooks have adopted the German method as it retains the order between the divisor and the dividend.
Algorithm for arbitrary base
Every natural number n can be uniquely represented in an arbitrary number base b > 1 as a sequence of digits n = α_0 α_1 … α_{k−1}, where 0 ≤ α_i < b for all 0 ≤ i < k and k is the number of digits in n. The value of n in terms of its digits and the base is

    n = α_0·b^(k−1) + α_1·b^(k−2) + … + α_{k−1}·b^0
Let n be the dividend and m be the divisor, where l is the number of digits in m. If k < l, then the quotient is q = 0 and the remainder is r = n. Otherwise, we iterate over 0 ≤ i ≤ k − l before stopping.
For each iteration i, let q_i be the quotient extracted so far, d_i be the intermediate dividend, r_i be the intermediate remainder, α_{i+l−1} be the next digit of the original dividend, and β_i be the next digit of the quotient. By definition of digits in base b, 0 ≤ β_i < b. By definition of remainder, 0 ≤ r_i < m. All values are natural numbers. We initiate

    q_{−1} = 0 and r_{−1} = α_0 α_1 … α_{l−2},

the first l − 1 digits of n.

With every iteration, the three equations

    d_i = b·r_{i−1} + α_{i+l−1}
    r_i = d_i − m·β_i
    q_i = b·q_{i−1} + β_i

are true.
There only exists one β_i such that 0 ≤ r_i < m.

According to the definition of the remainder r_i,

    0 ≤ r_i < m   is equivalent to   0 ≤ d_i − m·β_i < m.

For the left side of the inequality, we select the largest β_i such that

    m·β_i ≤ d_i.

There is always a largest such β_i, because 0 = m·0 ≤ d_i makes β_i = 0 a candidate, and the candidates are bounded above: since r_{i−1} ≤ m − 1 and α_{i+l−1} ≤ b − 1,

    d_i = b·r_{i−1} + α_{i+l−1} ≤ b·(m − 1) + (b − 1) = b·m − 1 < b·m,

so the largest candidate satisfies β_i < b, as required of a digit. For the right side of the inequality, the next candidate β_i + 1 fails the selection, meaning

    d_i − m·(β_i + 1) < 0,

and since β_i itself was selected,

    d_i − m·β_i ≥ 0,

which is exactly the left side of the inequality. Thus both sides hold, so 0 ≤ r_i < m. As such a β_i always exists, and any two digits satisfying both inequalities must be equal, we have proven the existence and uniqueness of β_i.
The final quotient is q = q_{k−l} and the final remainder is r = r_{k−l}.
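The iteration above translates almost line for line into code. The following Python sketch is illustrative (for simplicity it starts from r = 0 and runs one iteration per digit of the dividend rather than pre-loading the first l − 1 digits into r_{−1}; the result is the same, with leading zero quotient digits stripped at the end):

    def long_divide(digits, m, b):
        # digits: the base-b digits of the dividend n, most significant first.
        # Returns (quotient digits, remainder), following the recurrences above.
        q = []
        r = 0
        for a in digits:
            d = b * r + a      # d_i = b*r_{i-1} + next digit
            beta = d // m      # the unique digit with 0 <= d - m*beta < m
            r = d - m * beta   # r_i
            q.append(beta)     # q_i = b*q_{i-1} + beta_i, kept as a digit list
        while len(q) > 1 and q[0] == 0:
            q.pop(0)           # drop leading zero quotient digits
        return q, r

    print(long_divide([1, 2, 6, 0, 2, 5, 7], 37, 10))       # ([3, 4, 0, 6, 1], 0)
    print(long_divide([0xF, 4, 1, 2, 0xD, 0xF], 0x12, 16))  # ([13, 8, 15, 4, 5], 5)

The two calls mirror the base-10 and base-16 examples that follow.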
In base 10, using the example above with n = 1260257 and m = 37, the initial values are q_{−1} = 0 and r_{−1} = 1.

Thus, q_5 = 34061 and r_5 = 0.
In base 16, with n = f412df and m = 12, the initial values are q_{−1} = 0 and r_{−1} = f.

Thus, q_4 = d8f45 and r_4 = 5.
If one doesn't have the addition, subtraction, or multiplication tables for base b memorised, then this algorithm still works if the numbers are converted to decimal and at the end are converted back to base b. For example, with the above example,

    f412df (base 16) = 15995615 (base 10),  with m = 12 (base 16) = 18 (base 10).

The initial values are q_{−1} = 0 and r_{−1} = f (base 16) = 15 (base 10).

Thus, q_4 = 888645 (base 10) = d8f45 (base 16) and r_4 = 5.
This algorithm can be done using the same kind of pencil-and-paper notation as shown in the sections above.
      d8f45 r. 5
12 ) f412df
     ea
      a1
      90
      112
      10e
        4d
        48
         5f
         5a
          5
If the quotient is not constrained to be an integer, then the algorithm does not terminate for i > k − l. Instead, if i > k − l, then α_{i+l−1} = 0 by definition. If the remainder r_i is equal to zero at any iteration, then the quotient is a b-adic fraction, and is represented as a finite decimal expansion in base-b positional notation. Otherwise, it is still a rational number but not a b-adic rational, and is instead represented as an infinite repeating decimal expansion in base-b positional notation.
Calculation within the binary number system is simpler, because each digit in the quotient can only be 1 or 0; no multiplication is needed, as multiplying by either results in the same number or zero.
If this were on a computer, multiplication by 10 (binary two) can be represented by a bit shift of 1 to the left, and finding β_i reduces to the logical comparison β_i = (d_i ≥ m), where true = 1 and false = 0. With every iteration i, the following operations are done:

    d_i = (r_{i−1} << 1) | α_{i+l−1}
    β_i = (d_i ≥ m)
    r_i = d_i − m·β_i
    q_i = (q_{i−1} << 1) | β_i
For example, with n = 10111001 and m = 1101, the initial values are q_{−1} = 0 and r_{−1} = 101.
|i||α_{i+l−1}||d_i||β_i||r_i = d_i − m·β_i||q_i|
|0||1||1011||0||1011 − 0 = 1011||0|
|1||1||10111||1||10111 − 1101 = 1010||1|
|10||0||10100||1||10100 − 1101 = 111||11|
|11||0||1110||1||1110 − 1101 = 1||111|
|100||1||11||0||11 − 0 = 11||1110|
Thus, q_4 = 1110 and r_4 = 11.
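A sketch of the same computation in Python, using shifts and a single comparison per bit (the function name and the restoring-division framing are ours):

    def binary_long_divide(n, m, width):
        # Restoring binary division: one quotient bit per iteration.
        q = 0
        r = 0
        for i in range(width - 1, -1, -1):
            d = (r << 1) | ((n >> i) & 1)  # shift remainder left, bring down next bit
            beta = 1 if d >= m else 0      # beta_i is just a comparison in base 2
            r = d - m if beta else d       # subtract only when the bit is 1
            q = (q << 1) | beta
        return q, r

    print(binary_long_divide(0b10111001, 0b1101, 8))  # (14, 3), i.e. 1110 r 11 in binary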
On each iteration, the most time-consuming task is to select β_i. We know that there are b possible values, so we can find β_i using O(log b) comparisons. Each comparison will require evaluating d_i − m·β_i. Let k be the number of digits in the dividend n and l be the number of digits in the divisor m. The number of digits in d_i is at most l + 1. The multiplication m·β_i is therefore O(l), and likewise the subtraction d_i − m·β_i. Thus it takes O(l log b) to select β_i. The remainder of the algorithm consists of addition and the digit-shifting of q_i and r_i to the left by one digit, which takes O(k) and O(l) time in base b, so each iteration takes O(l log b + k). For all k − l + 1 iterations, the algorithm takes O((k − l + 1)(l log b + k)) time, which is O(k(l log b + k)) in base b.
Long division of integers can easily be extended to include non-integer dividends, as long as they are rational. This is because every rational number has a recurring decimal expansion. The procedure can also be extended to include divisors which have a finite or terminating decimal expansion (i.e. decimal fractions). In this case the procedure involves multiplying the divisor and dividend by the appropriate power of ten so that the new divisor is an integer – taking advantage of the fact that a ÷ b = (ca) ÷ (cb) – and then proceeding as above.
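A two-line check of that scaling identity (illustrative Python):

    # a / b = (c*a) / (c*b): scaling by c = 10 makes the divisor an integer.
    print(12.7 / 0.4)   # 31.75
    print(127 / 4.0)    # 31.75, the same quotient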
- Weisstein, Eric W. "Long Division". MathWorld.
- "Islamic Mathematics". new.math.uiuc.edu. Retrieved 2016-03-31.
- Henry Briggs - Oxford Reference.
- Klein, Milgram. "The Role of Long Division in the K-12 Curriculum" (PDF). CiteSeer. Retrieved June 21, 2019.
- Nicholson, W. Keith (2012), Introduction to Abstract Algebra, 4th ed., John Wiley & Sons, p. 206.
- "Long Division Symbol", Wolfram MathWorld, retrieved 11 February 2016.
- Miller, Jeff (2010), "Symbols of Operation", Earliest Uses of Various Mathematical Symbols.
- Hill, John (1772) [First published 1712], Arithmetick both in the theory and practice (11th ed.), London: Straben et al., p. 200, retrieved 12 February 2016
- Ikäheimo, Hannele: Jakolaskuun ymmärrystä (in Finnish)
|
For at least this semester, we are just going to introduce SSL in this course. The programming of SSL is a little bit harder than the other programming we have done; and, we have limited time. So there is a short online quiz covering this topic, but no programming assignment.
The keys are a matched pair
- Messages encrypted with the public key can only be decrypted with the private key.
- Having the public key will not help decrypt a message. Thus, as long as the client is certain that the public key came from the intended server, then it is safe to assume that only the intended server will be able to decrypt the message.
In many cases, especially with HTTP, the message is only encrypted when data is sent from the client to the server. This is one reason why the credit card number is 'X'ed out in the receipt: the server can't send the credit card number back to the client.
Keys may be self signed for private activities.
Public servers usually have purchased certificates from a certificate authority.
A certificate authority is an entity (company) that can verify the authenticity of a public key. For example, when you are about to make an on-line purchase from the likes of Amazon.com, your web browser will send a message to the certificate authority asking if the supplied public key truly belongs to Amazon.com. If the answer is yes, then it is safe to send the credit card number.
Here is an example of how a self-signed certificate can be generated. It is done from Linux simply because it was easy to find the program and instructions for doing it in Unix/Linux.
$ openssl req -new -out certfile.pem -keyout keyfile.pem
Generating a 1024 bit RSA private key
........++++++
................................................++++++
writing new private key to 'keyfile.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:Kansas
Locality Name (eg, city) [Newbury]:Salina
Organization Name (eg, company) [My Company Ltd]:Kansas State University
Organizational Unit Name (eg, section) []:Engineering Technology
Common Name (eg, your name or your server's hostname) []:timber.sal.ksu.edu
Email Address []:email@example.com
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:.
An optional company name []:.
What do the Public Certificate and Private Key look like?:
$ cat keyfile.pem
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,F7FFBD69A863B27B

oq2s6YBa+6XVk+sfhFwYQixYQnP1wDDPVFpf+gdFTQUZ7qkG+qeR23z9LqEiTm1H
E5ZB8TW3m1PC8Bhz8EansTV9/Q2AqpgWduytuKX9mo1nEwjTQPx7vZpnb+JrcGB2
Ew2qp4NfU1sYVpXV+KO66TunsTdhvNyV1fH8r6Dgk9xruNfvoUB0WRRKDGZ17iaP
1GeGPnQWWDC7WWE2LYugz/LW5BSoZtwdYf2U/48F/SvAgf1MyPUExwBqYRinzjdo
PP9MXMGPHJQJ9PLeGnIRqUAAU2p0NJB8tb8ZrwFWpK4Aa1B3I9cNiMa42L0mfcax
Y10+0MMq4UcAIHkfdIOBbRN8m9lpM3haeAs9ppAewyG3MKII2DC+FsEsdYIBWRhL
Mfi3WcUOwqrVHLL2Qf1d4QZS9MkYZahvKsz3iYGZkw2Le/BXy+0/esLFnCjDhEOA
NLLrVRcpo+82bKjjeQf4yTxL6w++HmfWsWSSGgD+BLWx6geVDZsUS65XaNsUsHQ7
PPi2taqaTu+rHKbYBoTdZUi3gUHhzH5NlWBvOe3tyWMVtid+GgmI418ib5uOikYL
c//IjhwrVzUL4+9raSVcHqFn+kOX/bGxbDzr5vJSJSDFfff2dwYAFvsPYK2ka5gZ
rYdq2tGjrEQycNXksOqsVGv4JEsuEacXeQRpVqh6AOVEWbC0eTUA1bjo9wM6aywi
FIqgr0lLIE7lvL9rW8mkPQ9Tl9lwrLZfqB3vcfmstDXfQqH/A9VEgjhbNHnJkZ3n
MihuBRizFEbK/kZRbk0yVMiFU6HltIJUgJ5b06bLEpcz6wlHSBBxhA==
-----END RSA PRIVATE KEY-----
$ cat certfile.pem
-----BEGIN CERTIFICATE REQUEST-----
MIIB7DCCAVUCAQAwgasxCzAJBgNVBAYTAlVTMQ8wDQYDVQQIEwZLYW5zYXMxDzAN
BgNVBAcTBlNhbGluYTEgMB4GA1UEChMXS2Fuc2FzIFN0YXRlIFVuaXZlcnNpdHkx
HzAdBgNVBAsTFkVuZ2luZWVyaW5nIFRlY2hub2xvZ3kxGzAZBgNVBAMTEnRpbWJl
ci5zYWwua3N1LmVkdTEaMBgGCSqGSIb3DQEJARYLdGltQGtzdS5lZHUwgZ8wDQYJ
KoZIhvcNAQEBBQADgY0AMIGJAoGBAOTRJmntlJy7cf3N3yW0/1jSUoWROlVkaZfg
Aojz59gKlEDMLtVn2DKYDexWe0AUV9gBEpHTguX5Vi322IpPjOvO/3n1kHrdgD5L
Nnc9tYYe5fF0RKzisRz7HKu6aXXY6dNFJMVRj7cTg4uSh7IS5lJvDCjohEnPJYzF
2g8mSoSBAgMBAAGgADANBgkqhkiG9w0BAQQFAAOBgQC1BjorEY98HkW7ceyH9s3d
EcFy6uFKXP2hFjCEesrW+N8lMdyrXYbyxffdE6ZpMEcNoYS9S0wxuwg1f7WjI/3S
y+fA2yviU+7c7blBd7r/r8uaviJB3uMWTgWKdnKBsnqBRvUQcytSrflzANV0MHIq
tVhFOv/lfqxQIha0m6BFQw==
-----END CERTIFICATE REQUEST-----
- Limited support in pre-Python 2.6 built-in socket module
- ssl = socket.ssl(socket)
- Two methods: read(), write()
- Create a wrapper to make it easier to use (see below).
The basic-wrap.py program on pages 328-329 is a good example of using a buffer to read from a socket. The buffer is implemented mainly to provide readline() functionality, which is already available for non-SSL socket connections. The implementation is not the easiest code to follow, but does give some interesting insights on how buffers are implemented.
The guiding principle of buffers is to read as much data as possible at one time while providing a framework to the application, which does not have performance penalties for frequent requests for small amounts of data. Thus, buffers attempt to improve performance by limiting the number of reads relative to the number of read requests from the application.
readline(): This is the primary public method of the class, and it returns one line from the SSL-encrypted socket connection. It calls self.read(1024) (described below) to get data in blocks. It uses the string find() function to look for the newline character. If it finds the newline character, it returns the data for one line and puts the remainder of the data back in the buffer.
read(): An argument could perhaps be made that this function is not needed and its code should be split between self._read() and self.readline(). Regardless, its function is to manage the buffer. If data remains in the buffer, then as much of the buffer as was requested or is available is returned. It calls self._read() when the buffer is empty.
_read(): This is where the data is actually read from the socket. The socket.ssl.read() function works differently than the socket.recv() function that we have used before. The read() function can raise an exception of socket.SSL_ERROR_WANT_READ or socket.SSL_ERROR_WANT_WRITE rather than blocking until it is able to complete the read operation. Thus, this function loops trying to read from the socket until it either gets the data or it receives an indicator that the full message has already been received.
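As a concrete illustration, here is a compact sketch of such a wrapper. This is not the book's code: the class name is ours, it targets the pre-2.6 interface described above, and it omits the handling of the "connection closed" indicators for brevity.

    import socket

    class BufferedSSL:
        def __init__(self, ssl_obj):
            self.ssl = ssl_obj      # the object returned by socket.ssl()
            self.buffer = ''

        def readline(self):
            # Accumulate 1024-byte blocks until the buffer holds a newline.
            while '\n' not in self.buffer:
                data = self._read(1024)
                if not data:
                    break           # nothing more to read; return what we have
                self.buffer += data
            line, sep, rest = self.buffer.partition('\n')
            self.buffer = rest      # put the remainder back in the buffer
            return line + sep

        def _read(self, size):
            # read() raises "want read/write" errors instead of blocking.
            while True:
                try:
                    return self.ssl.read(size)
                except socket.sslerror, e:   # old Python 2 except syntax
                    if e.args[0] not in (socket.SSL_ERROR_WANT_READ,
                                         socket.SSL_ERROR_WANT_WRITE):
                        raise
                    # not ready yet: loop and retry the read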
Explanation of less familiar Python statements from Topic 5 – SSL contains thoughts on the use of string slices in this program.
SSL was re-worked for Python 3 (available in Python 2.6). The improvements add a new module, ssl, that’s built atop the OpenSSL library. This new module provides more control over the protocol negotiated, the X.509 certificates used, and has better support for writing SSL servers (as opposed to just clients).
To use the new module, you must first create a TCP connection in the usual way and then pass it to the ssl.wrap_socket() function. It is possible to specify whether a certificate is required, and to obtain certificate info by calling the getpeercert() method.
See the following example:
while True:
    newsocket, fromaddr = listening_socket.accept()
    connstream = ssl.wrap_socket(newsocket,
                                 server_side = True,
                                 certfile = "mycertfile",
                                 keyfile = "mykeyfile",
                                 ssl_version = ssl.PROTOCOL_TLSv1)
    deal_with_client(connstream)
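For the client side, a minimal sketch along the same lines (the host name, port, and CA-bundle file name are placeholders, not values from the text):

    import socket
    import ssl

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(('timber.sal.ksu.edu', 443))      # placeholder host and port
    connstream = ssl.wrap_socket(sock,
                                 cert_reqs = ssl.CERT_REQUIRED,
                                 ca_certs = 'cacerts.pem')  # placeholder CA bundle
    print(connstream.getpeercert())                # certificate info as a dict
    connstream.close()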
|
Great Depression, worldwide economic downturn that began in 1929 and lasted until about 1939. It was the longest and most severe depression ever experienced by the industrialized Western world, sparking fundamental changes in economic institutions, macroeconomic policy, and economic theory. Although it originated in the United States, the Great Depression caused drastic declines in output, severe unemployment, and acute deflation in almost every country of the world. Its social and cultural effects were no less staggering, especially in the United States, where the Great Depression represented the harshest adversity faced by Americans since the Civil War.
The timing and severity of the Great Depression varied substantially across countries. The Depression was particularly long and severe in the United States and Europe; it was milder in Japan and much of Latin America. Perhaps not surprisingly, the worst depression ever experienced by the world economy stemmed from a multitude of causes. Declines in consumer demand, financial panics, and misguided government policies caused economic output to fall in the United States, while the gold standard, which linked nearly all the countries of the world in a network of fixed currency exchange rates, played a key role in transmitting the American downturn to other countries. The recovery from the Great Depression was spurred largely by the abandonment of the gold standard and the ensuing monetary expansion. The economic impact of the Great Depression was enormous, including both extreme human suffering and profound changes in economic policy.
Timing and severity
The Great Depression began in the United States as an ordinary recession in the summer of 1929. The downturn became markedly worse, however, in late 1929 and continued until early 1933. Real output and prices fell precipitously. Between the peak and the trough of the downturn, industrial production in the United States declined 47 percent and real gross domestic product (GDP) fell 30 percent. The wholesale price index declined 33 percent (such declines in the price level are referred to as deflation). Although there is some debate about the reliability of the statistics, it is widely agreed that the unemployment rate exceeded 20 percent at its highest point. The severity of the Great Depression in the United States becomes especially clear when it is compared with America’s next worst recession of the 20th century, that of 1981–82, when the country’s real GDP declined just 2 percent and the unemployment rate peaked at less than 10 percent. Moreover, during the 1981–82 recession prices continued to rise, although the rate of price increase slowed substantially (a phenomenon known as disinflation).
The Depression affected virtually every country of the world. However, the dates and magnitude of the downturn varied substantially across countries. Table 1 shows the dates of the downturn and upturn in economic activity in a number of countries. Table 2 shows the peak-to-trough percentage decline in annual industrial production for countries for which such data are available. Great Britain struggled with low growth and recession during most of the second half of the 1920s. Britain did not slip into severe depression, however, until early 1930, and its peak-to-trough decline in industrial production was roughly one-third that of the United States. France also experienced a relatively short downturn in the early 1930s. The French recovery in 1932 and 1933, however, was short-lived. French industrial production and prices both fell substantially between 1933 and 1936. Germany’s economy slipped into a downturn early in 1928 and then stabilized before turning down again in the third quarter of 1929. The decline in German industrial production was roughly equal to that in the United States. A number of countries in Latin America fell into depression in late 1928 and early 1929, slightly before the U.S. decline in output. While some less-developed countries experienced severe depressions, others, such as Argentina and Brazil, experienced comparatively mild downturns. Japan also experienced a mild depression, which began relatively late and ended relatively early.
Table 1 (country; depression began; recovery began)
The general price deflation evident in the United States was also present in other countries. Virtually every industrialized country endured declines in wholesale prices of 30 percent or more between 1929 and 1933. Because of the greater flexibility of the Japanese price structure, deflation in Japan was unusually rapid in 1930 and 1931. This rapid deflation may have helped to keep the decline in Japanese production relatively mild. The prices of primary commodities traded in world markets declined even more dramatically during this period. For example, the prices of coffee, cotton, silk, and rubber were reduced by roughly half just between September 1929 and December 1930. As a result, the terms of trade declined precipitously for producers of primary commodities.
The U.S. recovery began in the spring of 1933. Output grew rapidly in the mid-1930s: real GDP rose at an average rate of 9 percent per year between 1933 and 1937. Output had fallen so deeply in the early years of the 1930s, however, that it remained substantially below its long-run trend path throughout this period. In 1937–38 the United States suffered another severe downturn, but after mid-1938 the American economy grew even more rapidly than in the mid-1930s. The country’s output finally returned to its long-run trend path in 1942.
Recovery in the rest of the world varied greatly. The British economy stopped declining soon after Great Britain abandoned the gold standard in September 1931, although genuine recovery did not begin until the end of 1932. The economies of a number of Latin American countries began to strengthen in late 1931 and early 1932. Germany and Japan both began to recover in the fall of 1932. Canada and many smaller European countries started to revive at about the same time as the United States, early in 1933. On the other hand, France, which experienced severe depression later than most countries, did not firmly enter the recovery phase until 1938.
|
Timeline of neutron stars, pulsars, supernovae, and white dwarfs
Note that this list is mainly about the development of knowledge, though it also records some supernovae taking place. For a separate list of the latter, see the article List of supernovae. All dates refer to when the supernova was observed on Earth, or would have been observed on Earth had powerful enough telescopes existed at the time.
A neutron star is the collapsed core of a massive supergiant star, which had a total mass of between 10 and 25 solar masses, possibly more if the star was especially metal-rich. Except for black holes and some hypothetical objects, neutron stars are the smallest and densest currently known class of stellar objects. Neutron stars have a radius on the order of 10 kilometres (6 mi) and a mass of about 1.4 solar masses. They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf star density to that of atomic nuclei.
A supernova is a powerful and luminous explosion of a star. A supernova occurs during the last evolutionary stages of a massive star or when a white dwarf is triggered into runaway nuclear fusion. The original object, called the progenitor, either collapses to a neutron star or black hole, or is completely destroyed to form a diffuse nebula. The peak optical luminosity of a supernova can be comparable to that of an entire galaxy before fading over several weeks or months.
The Crab Nebula is a supernova remnant and pulsar wind nebula in the constellation of Taurus. The common name comes from William Parsons, 3rd Earl of Rosse, who observed the object in 1842 using a 36-inch (91 cm) telescope and produced a drawing that looked somewhat like a crab. The nebula was discovered by English astronomer John Bevis in 1731, and it corresponds with a bright supernova recorded by Chinese astronomers in 1054. The nebula was the first astronomical object identified that corresponds with a historical supernova explosion.
A pulsar is a highly magnetized rotating neutron star that emits beams of electromagnetic radiation out of its magnetic poles. This radiation can be observed only when a beam of emission is pointing toward Earth, and is responsible for the pulsed appearance of emission. Neutron stars are very dense and have short, regular rotational periods. This produces a very precise interval between pulses that ranges from milliseconds to seconds for an individual pulsar. Pulsars are one of the candidates for the source of ultra-high-energy cosmic rays.
The Crab Pulsar is a relatively young neutron star. The star is the central star in the Crab Nebula, a remnant of the supernova SN 1054, which was widely observed on Earth in the year 1054. Discovered in 1968, the pulsar was the first to be connected with a supernova remnant.
First observed between August 4 and August 6, 1181, Chinese and Japanese astronomers recorded the supernova now known as SN 1181 in eight separate texts. One of only nine supernovae in the Milky Way observable with the naked eye in recorded history, it appeared in the constellation Cassiopeia and was visible in the night sky for about 185 days.
PSR J0737−3039 is the first known double pulsar. It consists of two neutron stars emitting electromagnetic waves in the radio wavelength in a relativistic binary system. The two pulsars are known as PSR J0737−3039A and PSR J0737−3039B. It was discovered in 2003 at Australia's Parkes Observatory by an international team led by the Italian radio astronomer Marta Burgay during a high-latitude pulsar survey.
A Type Ia supernova is a type of supernova that occurs in binary systems in which one of the stars is a white dwarf. The other star can be anything from a giant star to an even smaller white dwarf.
A binary pulsar is a pulsar with a binary companion, often a white dwarf or neutron star. Binary pulsars are one of the few objects which allow physicists to test general relativity because of the strong gravitational fields in their vicinities. Although the binary companion to the pulsar is usually difficult or impossible to observe directly, its presence can be deduced from the timing of the pulses from the pulsar itself, which can be measured with extraordinary accuracy by radio telescopes.
The Vela Pulsar is a radio, optical, X-ray and gamma-ray emitting pulsar associated with the Vela Supernova Remnant in the constellation of Vela. Its parent Type II supernova exploded approximately 11,000–12,300 years ago.
The known history of supernova observation goes back to 1006 CE. All earlier proposals for supernova observations are speculations with many alternatives.
Gamma-ray burst progenitors are the types of celestial objects that can emit gamma-ray bursts (GRBs). GRBs show an extraordinary degree of diversity. They can last anywhere from a fraction of a second to many minutes. Bursts could have a single profile or oscillate wildly up and down in intensity, and their spectra are highly variable unlike other objects in space. The near complete lack of observational constraint led to a profusion of theories, including evaporating black holes, magnetic flares on white dwarfs, accretion of matter onto neutron stars, antimatter accretion, supernovae, hypernovae, and rapid extraction of rotational energy from supermassive black holes, among others.
SN 1979C was a supernova about 50 million light-years away in Messier 100, a spiral galaxy in the constellation Coma Berenices. The Type II supernova was discovered April 19, 1979 by Gus Johnson, a school teacher and amateur astronomer. This type of supernova is known as a core collapse and is the result of the internal collapse and violent explosion of a large star. A star must have at least 9 times the mass of the Sun in order to undergo this type of collapse. The star that resulted in this supernova was estimated to be in the range of 20 solar masses.
Astrophysical X-ray sources are astronomical objects with physical properties which result in the emission of X-rays.
PSR J1614–2230 is a pulsar in a binary system with a white dwarf in the constellation Scorpius. It was discovered in 2006 with the Parkes telescope in a survey of unidentified gamma ray sources in the Energetic Gamma Ray Experiment Telescope catalog. PSR J1614–2230 is a millisecond pulsar, a type of neutron star, that spins on its axis roughly 317 times per second, corresponding to a period of 3.15 milliseconds. Like all pulsars, it emits radiation in a beam, similar to a lighthouse. Emission from PSR J1614–2230 is observed as pulses at the spin period of PSR J1614–2230. The pulsed nature of its emission allows for the arrival of individual pulses to be timed. By measuring the arrival time of pulses, astronomers observed the delay of pulse arrivals from PSR J1614–2230 when it was passing behind its companion from the vantage point of Earth. By measuring this delay, known as the Shapiro delay, astronomers determined the mass of PSR J1614–2230 and its companion. The team performing the observations found that the mass of PSR J1614–2230 is 1.97 ± 0.04 M☉. This mass made PSR J1614–2230 the most massive known neutron star at the time of discovery, and rules out many neutron star equations of state that include exotic matter such as hyperons and kaon condensates.
A stellar collision is the coming together of two stars caused by stellar dynamics within a star cluster, or by the orbital decay of a binary star due to stellar mass loss or gravitational radiation, or by other mechanisms not yet well understood.
A neutron star merger is a type of stellar collision.
PSR J0348+0432 is a pulsar–white dwarf binary system in the constellation Taurus. It was discovered in 2007 with the National Radio Astronomy Observatory's Robert C. Byrd Green Bank Telescope in a drift-scan survey.
A kilonova is a transient astronomical event that occurs in a compact binary system when two neutron stars or a neutron star and a black hole merge. These mergers are thought to produce gamma-ray bursts and emit bright electromagnetic radiation, called "kilonovae", due to the radioactive decay of heavy r-process nuclei that are produced and ejected fairly isotropically during the merger process. The high sphericity of the kilonova AT2017gfo at early epochs was deduced from the blackbody nature of its spectrum.
|
A subsidy in agriculture is a form of financial assistance granted by the government to help farmers increase their agricultural productivity and incomes. It is an economic policy instrument that governments use to improve the balance of supply and demand and to maintain stable prices for agricultural goods. Subsidies can come in the form of direct payments to farmers, or in the form of tax breaks or trade support. For example, some governments may provide direct payments to farmers in the form of subsidies, while others may subsidize the purchase of certain agricultural inputs such as fertilizers or equipment.
Subsidies are typically designed to target different areas of agricultural production. They can be used to help farmers manage risk, increase productivity, encourage the adoption of new technologies, and reduce farm costs. Subsidies are also used to encourage farmers to improve their environmental stewardship, by providing incentives for practices such as reduced tillage, diverted crop residues, and other methods of reducing soil erosion.
Subsidies can have a range of consequences. On one hand, they can encourage the adoption of more efficient and sustainable practices, reduce the risk of farm failure, and boost the incomes of environmentally responsible farmers. On the other hand, the stability of agricultural markets may be reduced, since subsidized farmers can inflate production to increase profits. Additionally, subsidies can create winners and losers within agricultural sectors and among countries, creating negative consequences for free trade policies.
Despite this, subsidies remain a major component of many countries’ agricultural policies. Typically, they are used by governments to achieve their development goals and combat agricultural distress. Subsidies can also be used to address a wide range of public objectives, including public health, environmental protection, food security, poverty reduction, and rural development.
In conclusion, subsidies in agriculture can be a useful economic policy instrument for governments to combat agricultural distress, improve the balance of supply-demand, and maintain stable prices of agricultural goods. They can also be used to encourage efficient and sustainable practices, reduce farm costs, increase productivity, and boost the incomes of environmentally responsible farmers.
How Can Subsidies Help Farmers?
Subsidies are a direct incentive to farmers that can help to maintain productivity and reduce the risk of farm failure. They can be used to offer incentives and rewards to farmers who adopt new, efficient and sustainable farming practices, while also providing support in the form of tax breaks and direct payments to help maintain existing levels of production. Subsidies can also help to reduce farmers’ production costs by reducing the cost of inputs, such as fertilizers, tools, and equipment. This can help farmers increase profitability, while allowing them to remain competitive in global markets.
Subsidies can also be used to encourage farmers to innovate in the use of new technologies, such as precision farming and data analytics to boost agricultural production. By offering financial assistance to experiment with novel methods, farmers can keep pace with evolving global markets, reducing the risk of becoming obsolete or unprofitable. Governments can also use subsidies to set production targets, providing a safety net for farmers in the event of a potential yield failure.
In conclusion, subsidies can be a powerful tool to help farmers remain productive, competitive and financially viable. They can help reduce farm costs and provide incentives for farmers to adopt more efficient and sustainable practices. Additionally, subsidies can enable farmers to innovate and experiment with new technologies and methods to keep up with changing markets.
What Are the Disadvantages of Subsidies?
The use of agricultural subsidies can have negative consequences on agricultural markets, leading to distortions in production and a loss of competition. As subsidies tend to incentivize excess production, farmers can increase their supply and create a surplus of agricultural goods, leading to market instability, increased food waste, and a decrease in prices. This can create unfair situations for traders or producers in other countries, as subsidies may be offered to domestic farmers that are not accessible to foreign ones. This can lead to an unbalanced and unfair trading environment.
Subsidies can also cause negative environmental consequences. In some cases, they can be misaligned with ecological objectives, as they may encourage the excessive use of technologies and inputs that can cause harm to the environment. Additionally, subsidies can reward farmers for inefficient practices that are detrimental to the environment.
In conclusion, subsidies can be a double-edged sword, as they can lead to unfair market conditions and damage to the environment. Subsidies can be misused to create market distortions, a decline in market competition, excessive food waste, and environmental damage.
What Are the Different Types of Subsidies?
The most common forms of agricultural subsidies are direct payments, such as farm income support and disaster aid. These are typically provided to farmers in times of need, such as during periods of low crop revenue or extreme weather or market conditions. Tax breaks and other financial assistance are other forms of support the government may provide to help farmers deal with the costs of production or market fluctuations.
Subsidies can also be used to incentivize farmers to adopt certain practices, such as encouraging the use of more efficient machinery and equipment, or encouraging more sustainable farming methods. Other forms of subsidies may also be used to stimulate innovation and the adoption of new technologies in agriculture, such as precision farming, biotechnology, and data analytics.
In conclusion, there are various types of subsidies available to farmers, including direct payments, tax breaks, and other forms of financial assistance. Subsidies can also be used to incentivize farmers to adopt certain practices and to stimulate the use of new technologies.
Why Do Governments Provide Subsidies?
Governments use subsidies to manage the balance of supply and demand in agricultural markets, as well as to maintain stable prices of agricultural goods. They can also be used to stimulate innovation and help farmers remain productive and competitive in global markets. Subsidies can therefore be seen as a form of public investment in the agricultural sector to promote economic growth, reduce poverty, and ensure food security.
Subsidies are also used by governments to meet a range of desired public objectives, from public health and environmental protection, to poverty reduction and rural development. Governments may also choose to subsidize certain sectors of agriculture to increase production, such as the horticultural sector, to make up for shortcomings in domestic production.
In conclusion, governments provide subsidies to manage supply-demand balance, maintain stable prices, and promote innovation in the agricultural sector. They may also use subsidies to meet certain public objectives, such as public health, environmental protection, food security, poverty reduction, and rural development.
What Are the Alternatives to Subsidies?
As subsidies can lead to a range of negative consequences, some governments may decide to opt for alternative policies to fulfill their desired objectives. One alternative policy measure is for governments to implement tax incentives for farmers to adopt sustainable agricultural practices. This can help to promote environmentally responsible farming, without the need for subsidies.
Governments may also consider offering subsidies in the form of public investments, such as in the development of irrigation and drainage systems, or in infrastructure projects like roads and bridges, to stimulate economic growth in the agricultural sector. Additionally, governments may encourage farmers to cooperate in supply chain management and price negotiations to reduce farm costs and improve profitability.
In conclusion, there are numerous alternative policy measures available to governments to help farmers increase their agricultural productivity and incomes. These include tax incentives, public investments, and cooperative supply chain management initiatives.
|
In September an interstellar visitor was discovered in our solar system — the comet 2I/Borisov, which was imaged by astronomers as it approached the sun. Now, a new study has looked in depth at what the comet is made of.
The small body is called a “planetesimal” by astronomers because it has the potential to become a planet under the right gravitational conditions. Most planetesimals in our solar system are icy, like comets, but a small fraction of them are rocky. But there should be planetesimals observable from outside the solar system too, the authors said in the paper: “Assuming that similar [planet formation] processes have taken place elsewhere in the galaxy, a large number of planetesimals are wandering through interstellar space, some eventually crossing the solar system.”
Scientists aren’t sure whether other solar systems are the same as ours in terms of the ways which planets are configured. Studying interstellar objects like 2I/Borisov gives us the chance to see how planets are formed in other areas of the galaxy.
The first chance to study this issue came with the discovery of the ‘Oumuamua interstellar object, which captured the public’s imagination when it was spotted in our solar system last year. Its color and brightness suggested it was made of rock and metals and had no water or ice, but scientists were never able to determine what exactly it was made of.
With the new object 2I/Borisov, scientists were able to perform a spectroscopy analysis and find out which gases were present in the comet’s coma, or the ball of particles and gases around the center of the comet. This was difficult as the object is close to the sun, so the glare from sunlight makes it tricky to gather enough light from the object to perform an analysis. It took two tries, but the scientists were eventually able to gather data from the interstellar visitor.
They saw a distinct spike in the ultraviolet spectrum which corresponds to cyanogen gas, a combination of carbon and nitrogen. This gas is found in comets in our solar system too, so it’s not unexpected. However, there is an interesting note about the gas: it is given off as the object approaches the sun and is heated, causing gases to evaporate.
That means that as the object approaches the sun, it may give off more gases which could give clues to its makeup. It will make its closest pass to the sun in December, so watch for more information about our interstellar visitor then.
The research paper is available to view on pre-publication archive arXiv and will be published in the journal Astrophysical Journal Letters.
|
The longest homogeneous instrument-based temperature series in the world is the Central England Temperature record dating back to 1659. It was first constructed in the 1970s from an accumulation of measurements made by amateur meteorologists in central lowland England. The construction of a homogeneous record requires knowledge of how, and at what time of day, the measurements were made and how local conditions may account for regional variations. Other similar temperature records dating back to the mid 18th century are available for Munich, Vienna, Berlin and Paris among other sites in Europe and at least one in the Eastern United States. These datasets have been extensively analysed for periodic variations in temperature. Generally they show clear indications of variations on timescales of about 2.2 to 2.4 years and 2.9 to 3.9 years but on longer timescales individual records show peaks at different frequencies with little statistical significance.
The most complete record of rainfall comes from eastern China where careful observations of floods and droughts date back to the fifteenth century. Again spectral analysis results in periodicities which vary from place to place.
Figure 1 shows instrumental measurements of surface temperature compiled to produce a global average dating back to 1860. Much of the current concern with regard to global warming stems from the obvious rise over the twentieth century and a key concern of contemporary climate science is to attribute cause(s) to this warming. This is discussed further in Section 3.
Other climate records suggesting that the climate has been changing over the past century include the retreat of mountain glaciers, sea level rise, thinner Arctic ice sheets and an increased frequency of extreme precipitation events (IPCC, 2007).
Proxy data provide information about weather conditions at a particular location through records of a physical, biological or chemical response to these conditions. Some proxy datasets provide information dating back hundreds of thousands of years which make them particularly suitable for analysing long term climate variations and their correlation with solar activity.
One well established technique for providing proxy climate data is dendrochronology, or the study of climate changes by comparing the successive annual growth rings of trees (living or dead). It has been found that trees from any particular area show the same pattern of broad and narrow rings corresponding to the weather conditions under which they grew each year. Thus samples from old trees can be used to give a time series of these conditions. Felled logs can similarly be used to provide information back to ancient times, providing it is possible to date them. This is usually accomplished by matching overlapping patterns of rings from other trees. Another problem that arises with the interpretation of tree rings is that the annual growth of rings depends on a number of meteorological variables integrated over more than a year so that the dominant factor determining growth varies with location and type of tree. At high latitudes the major controlling factor is likely to be summer temperature but at lower latitudes humidity may play a greater role. Figure 2 shows a 1000-year surface temperature record reconstructed from proxy data, including tree rings. It shows that current temperatures are higher than they have been for at least the past millennium.
Much longer records of temperature have been derived from analysis of oxygen isotopes in ice cores obtained from Greenland and Antarctica. The ratio of the concentration of 18O to that of 16O, or 2H to 1H, in the water molecules is determined by the rate of evaporation of water from tropical oceans and also the rate of precipitation of snow over the polar ice caps. Both these factors are dependent on temperature such that greater proportions of the heavy isotopes are deposited during periods of higher global temperatures. As each year’s accumulation of snow settles the layers below become compacted so that at depths corresponding to an age of more than 800 years it becomes difficult to precisely date the layers. Nevertheless, variations on timescales of more than a decade have been extracted dating back over hundreds of thousands of years.
Figure 3 shows the temperature record deduced from the deuterium ratio in an ice core retrieved from Vostok in East Antarctica. The roughly 100,000 year periodicity of the transitions from glacial to warm epochs is clear and suggests a relationship with the variations in eccentricity of the Earth’s orbit around the Sun (see Section 4.1) although this does not explain the apparently sharp transitions from cold to warm periods.
Evidence of very long term temperature variations can also be obtained from ocean sediments. The skeletons of calciferous plankton make up a large proportion of the sediments at the bottom of the deep oceans and the 18O component is determined by the temperature of the upper ocean at the date when the living plankton absorbed carbon dioxide. The sediment accumulates slowly, at a rate of perhaps 1 m every 40,000 years, so that changes over periods of less than about 1,000 years are not detectable but ice age cycles every 100,000 years are clearly portrayed.
Ocean sediments have also been used to reveal a history of temperature in the North Atlantic by analysis of the minerals believed to have been deposited by drift ice (Bond et al., 2001). In colder climates the rafted ice propagates further south where it melts, depositing the minerals. An example of such an analysis is presented in Figure 4 and discussed in Section 2.2.
This work is licensed under a Creative Commons License.
|
Density and Specific Gravity - Sample Problems
You can download the questions (Acrobat (PDF) 25kB Jul24 09) if you would like to work them on a separate sheet of paper.
Calculating densities of rocks and minerals
Problem 1: You have a rock with a volume of 15 cm3 and a mass of 45 g. What is its density?

Density is mass divided by volume, so the density is 45 g divided by 15 cm3, which is 3.0 g/cm3.
Problem 2: You have a different rock with a volume of 30 cm3 and a mass of 60 g. What is its density?

Density is mass divided by volume, so the density is 60 g divided by 30 cm3, which is 2.0 g/cm3.
Problem 3: In the above two examples which rock is heavier? Which is lighter?
The question is asking about heavier and lighter, which refers to mass or weight. Therefore, all you care about is the mass in grams, and so the 60 g rock in the second problem is heavier and the 45 g rock (in the first question) is lighter.
Problem 4: In the above two examples which rock is more dense? which is less dense?
The question is asking about density, and that is the ratio of mass to volume. Therefore, the first rock is denser (density = 3.0 g/cm3) and the second rock is less dense even though it weighs more, because its density is only 2.0 g/cm3. This example shows why it is important to be careful not to use the words heavier/lighter when you mean more or less dense.
Problem 5: You decide you want to carry a boulder home from the beach. It is 30 centimeters on each side, and so has a volume of 27,000 cm3. It is made of granite, which has a typical density of 2.8 g/cm3. How much will this boulder weigh?
In this case, you are asked for a mass, not the density. You will need to rearrange the density equation so that you get mass. Since density = mass ÷ volume, multiplying both sides by volume leaves mass alone:

mass = density × volume

Substituting in the values from the problem,

mass = 2.8 g/cm3 × 27,000 cm3 = 75,600 g

The result is that the mass is 75,600 grams. That is over 165 pounds!
Problem 6: Rocks are sometimes used along coasts to prevent erosion. If a rock needs to weigh 2,000 kilograms (about 2 tons) in order not to be shifted by waves, how big (what volume) does it need to be? You are using basalt, which has a typical density of 3200 kg/m3.

In this problem you need a volume, so you will need to rearrange the density equation to get volume. Since density = mass ÷ volume, multiplying both sides by volume gets volume out of the denominator (the bottom):

density × volume = mass

You can then divide both sides by density to get volume alone:

volume = mass ÷ density

By substituting in the values listed above,

volume = 2,000 kg ÷ 3200 kg/m3 = 0.625 m3

So the volume will be 0.625 m3.
Note that the above problem shows that densities can be in units other than grams and cubic centimeters. To avoid the potential problems of different units, many geologists use specific gravity (SG), explored in problems 8 and 9, below.
Problem 7: A golden-colored cube is handed to you. The person wants you to buy it for $100, saying that it is a gold nugget. You pull out your old geology text and look up gold in the mineral table, and read that its density is 19.3 g/cm3. You measure the cube and find that it is 2 cm on each side, and weighs 40 g. What is its density? Is it gold? Should you buy it?

To determine the density you need the volume and the mass, since density = mass ÷ volume. You know the mass (40 g), but the volume is not given. To find the volume, use the formula for the volume of a box:

- volume = length × width × height.

The volume of the cube is

- 2 cm × 2 cm × 2 cm = 8 cm3.
The density then is the mass divided by the volume:

density = 40 g ÷ 8 cm3 = 5.0 g/cm3

Thus the cube is NOT gold, since its density (5.0 g/cm3) is not the same as gold's (19.3 g/cm3). You tell the seller to take a hike. You might even notice that the density of pyrite (a.k.a. fool's gold) is 5.0 g/cm3. Luckily you are no fool and know about density!
Calculating Specific Gravity of Rocks and Minerals
Problem 8: You have a sample of granite with density 2.8 g/cm3. The density of water is 1.0 g/cm3. What is the specific gravity of your granite?

Specific gravity is the density of the substance divided by the density of water, so

specific gravity = 2.8 g/cm3 ÷ 1.0 g/cm3 = 2.8

Note that the units cancel, so this answer has no units. We say "the number is unitless."
Problem 9: You have a sample of granite with density 174.8 lbs/ft3. The density of water is 62.4 lbs/ft3. What is the specific gravity of the granite now?

Again, the specific gravity is the density of the substance divided by the density of water, so

specific gravity = 174.8 lbs/ft3 ÷ 62.4 lbs/ft3 = 2.8

This shows that the specific gravity does not change when measurements are made in different units, so long as the density of the object and the density of water are in the same units.
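If you want to verify this unit independence yourself, here is a throwaway Python check (purely illustrative):

    # Specific gravity = density of substance / density of water, in matching units.
    print(2.8 / 1.0)       # metric values (g/cm3)    -> 2.8
    print(174.8 / 62.4)    # imperial values (lbs/ft3) -> 2.801..., i.e. 2.8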
|
Whole genome sequencing
Whole genome sequencing (also known as full genome sequencing, complete genome sequencing, or entire genome sequencing) is a laboratory process that determines the complete DNA sequence of an organism's genome at a single time. This entails sequencing all of an organism's chromosomal DNA as well as DNA contained in the mitochondria and, for plants, in the chloroplast.
Whole genome sequencing should not be confused with DNA profiling, which only determines the likelihood that genetic material came from a particular individual or group, and does not contain additional information on genetic relationships, origin or susceptibility to specific diseases. Also unlike full genome sequencing, SNP genotyping covers less than 0.1% of the genome. Almost all truly complete genomes are of microbes; the term "full genome" is thus sometimes used loosely to mean "greater than 95%". The remainder of this article focuses on nearly complete human genomes.
High-throughput genome sequencing technologies have largely been used as a research tool and are currently being introduced to the clinic. In the future of personalized medicine, whole genome sequence data will be an important tool to guide therapeutic intervention. Sequencing at the SNP level is also used to pinpoint functional variants from association studies and to improve the knowledge available to researchers interested in evolutionary biology, and hence may lay the foundation for predicting disease susceptibility and drug response.
Also, by aligning sequenced genomes, somatic mutations arising as base substitutions can be identified.
- 1 Cells used for sequencing
- 2 Mutation frequencies in cancers
- 3 Early techniques
- 4 Current techniques
- 5 Commercialization
- 6 Disruption to DNA array market
- 7 Sequencing versus analysis
- 8 Diagnostic use and societal impact
- 9 Ethical concerns
- 10 People with public genome sequences
- 11 See also
- 12 References
- 13 External links
Cells used for sequencing
Almost any biological sample containing a full copy of the DNA—even a very small amount of DNA or ancient DNA—can provide the genetic material necessary for full genome sequencing. Such samples may include saliva, epithelial cells, bone marrow, hair (as long as the hair contains a hair follicle), seeds, plant leaves, or anything else that has DNA-containing cells.
The genome sequence of a single cell selected from a mixed population of cells can be determined using techniques of single cell genome sequencing. This has important advantages in environmental microbiology in cases where a single cell of a particular microorganism species can be isolated from a mixed population by microscopy on the basis of its morphological or other distinguishing characteristics. In such cases the normally necessary steps of isolation and growth of the organism in culture may be omitted, thus allowing the sequencing of a much greater spectrum of organism genomes.
Single cell genome sequencing is being tested as a method of preimplantation genetic diagnosis, wherein a cell from the embryo created by in vitro fertilization is taken and analyzed before embryo transfer into the uterus. After implantation, cell-free fetal DNA can be taken by simple venipuncture from the mother and used for whole genome sequencing of the fetus.
Mutation frequencies in cancers
Whole genome sequencing has established the mutation frequency for whole human genomes. The mutation frequency in the whole genome between generations for humans (parent to child) is about 70 new mutations per generation. An even lower level of variation was found by comparing whole genome sequences in blood cells for a pair of 100-year-old monozygotic (identical) twins. Only 8 somatic differences were found, though somatic variation occurring in less than 20% of blood cells would be undetected.
In the specifically protein coding regions of the human genome, it is estimated that there are about 0.35 mutations that would change the protein sequence between parent/child generations (less than one mutated protein per generation).
Cancers, however, have much higher mutation frequencies. The particular frequency depends on tissue type, whether there is a mis-match DNA repair deficiency, and exposure to DNA damaging agents such as UV-irradiation or components of tobacco smoke. Tuna and Amos have summarized the mutation frequencies per megabase (Mb), as shown in the table (along with the indicated frequencies of mutations per genome).
The high mutation frequencies in cancers reflect the genome instability characteristic of cancers.
|Cell type||Mutation frequency/Mb||Mutation frequency per diploid genome|
|Acute lymphocytic leukemia||0.3||1,800|
|Chronic lymphocytic leukemia||<1||<6,000|
|Microsatellite stable (MSS) colon cancer||2.8||16,800|
|Microsatellite instable (MSI) colon cancer (mismatch repair deficient)||47||282,000|
|Small cell lung cancer||7.4||44,400|
|Non-small cell lung cancer (smokers)||10.5||63,000|
|Non-small cell lung cancer (never-smokers)||0.6||3,600|
|Lung adenocarcinoma (smokers)||9.8||58,500|
|Lung adenocarcinoma (never-smokers)||1.7||10,200|
|Chronic UV-irradiation induced melanoma||111||666,000|
|Non-UV-induced melanoma of hairless skin of extremities||3-14||18,000-84,000|
|Non-UV-induced melanoma of hair-bearing skin||5-55||30,000-330,000|
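The right-hand column is simply the per-megabase rate scaled by genome size. A quick illustrative Python check, assuming the roughly 6,000 Mb (six billion base pair) diploid genome used elsewhere in this article:

    DIPLOID_GENOME_MB = 6000   # ~ six billion base pairs

    def mutations_per_genome(rate_per_mb):
        return rate_per_mb * DIPLOID_GENOME_MB

    print(mutations_per_genome(0.3))  # 1800.0  -> acute lymphocytic leukemia row
    print(mutations_per_genome(2.8))  # 16800.0 -> MSS colon cancer row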
Early techniques

Sequencing of nearly an entire human genome was first accomplished in 2000 partly through the use of shotgun sequencing technology. While full genome shotgun sequencing for small (4000–7000 base pair) genomes was already in use in 1979, broader application benefited from pairwise end sequencing, known colloquially as double-barrel shotgun sequencing. As sequencing projects began to take on longer and more complicated genomes, multiple groups began to realize that useful information could be obtained by sequencing both ends of a fragment of DNA. Although sequencing both ends of the same fragment and keeping track of the paired data was more cumbersome than sequencing a single end of two distinct fragments, the knowledge that the two sequences were oriented in opposite directions and were about the length of a fragment apart from each other was valuable in reconstructing the sequence of the original target fragment.
The first published description of the use of paired ends was in 1990 as part of the sequencing of the human HPRT locus, although the use of paired ends was limited to closing gaps after the application of a traditional shotgun sequencing approach. The first theoretical description of a pure pairwise end sequencing strategy, assuming fragments of constant length, was in 1991. In 1995 Roach et al. introduced the innovation of using fragments of varying sizes, and demonstrated that a pure pairwise end-sequencing strategy would be possible on large targets. The strategy was subsequently adopted by The Institute for Genomic Research (TIGR) to sequence the entire genome of the bacterium Haemophilus influenzae in 1995, and then by Celera Genomics to sequence the entire fruit fly genome in 2000, and subsequently the entire human genome. Applied Biosystems, now called Life Technologies, manufactured the automated capillary sequencers utilized by both Celera Genomics and The Human Genome Project.
While capillary sequencing was the first approach to successfully sequence a nearly full human genome, it is still too expensive and takes too long for commercial purposes. Because of this, since 2005 capillary sequencing has been progressively displaced by newer technologies such as pyrosequencing, SMRT sequencing, and nanopore technology; all of these new technologies nevertheless continue to employ the basic shotgun strategy, namely, parallelization and template generation via genome fragmentation.
Because the sequence data that is produced can be quite large (for example, there are approximately six billion base pairs in each human diploid genome), genomic data is stored electronically and requires a large amount of computing power and storage capacity. Full genome sequencing would have been nearly impossible before the advent of the microprocessor, computers, and the Information Age.
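As a rough illustration of the scale involved, the raw storage for one genome can be estimated as follows. This is a back-of-the-envelope sketch only; real file sizes depend heavily on format and compression, and the coverage and bytes-per-base figures below are assumptions:

```python
# Back-of-the-envelope storage estimate for one human genome.
BASES_DIPLOID = 6_000_000_000  # ~6 billion base pairs in a diploid genome

# A finished sequence needs only 2 bits per base (A, C, G, T).
finished_gb = BASES_DIPLOID * 2 / 8 / 1e9

# Raw reads at 30x coverage, assuming ~2 bytes per base
# (one byte for the base call, one for its quality score).
raw_30x_tb = BASES_DIPLOID * 30 * 2 / 1e12

print(f"Finished sequence, 2-bit packed: ~{finished_gb:.1f} GB")   # ~1.5 GB
print(f"Uncompressed 30x reads (rough):  ~{raw_30x_tb:.2f} TB")    # ~0.36 TB
```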
One possible way to accomplish the cost-effective high-throughput sequencing necessary to accomplish full genome sequencing is by using nanopore technology, which is a patented technology held by Harvard University and Oxford Nanopore Technologies and licensed to biotechnology companies. To facilitate their full genome sequencing initiatives, Illumina licensed nanopore sequencing technology from Oxford Nanopore Technologies and Sequenom licensed the technology from Harvard University.
Another possible way to accomplish cost-effective high-throughput sequencing is by utilizing fluorophore technology. Pacific Biosciences is currently using this approach in their SMRT (single molecule real time) DNA sequencing technology.
Pyrosequencing is a method of DNA sequencing based on the sequencing by synthesis principle. The technique was developed by Pål Nyrén and his student Mostafa Ronaghi at the Royal Institute of Technology in Stockholm in 1996, and is currently being used by 454 Life Sciences as a basis for a full genome sequencing platform.
A number of public and private companies are competing to develop a full genome sequencing platform that is commercially robust for both research and clinical use, including Illumina, Knome, Sequenom, 454 Life Sciences, Pacific Biosciences, Complete Genomics, Helicos Biosciences, GE Global Research (General Electric), Affymetrix, IBM, Intelligent Bio-Systems, Life Technologies and Oxford Nanopore Technologies. These companies are heavily financed and backed by venture capitalists, hedge funds, and investment banks.
In October 2006, the X Prize Foundation, working in collaboration with the J. Craig Venter Science Foundation, established the Archon X Prize for Genomics, intending to award US$10 million to "the first Team that can build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 1,000,000 bases sequenced, with sequences accurately covering at least 98% of the genome, and at a recurring cost of no more than $1,000 per genome". An error rate of 1 in 1,000,000 bases, out of a total of approximately six billion bases in the human diploid genome, would mean about 6,000 errors per genome. The error rates required for widespread clinical use, such as predictive medicine, are currently set by over 1,400 clinical single-gene sequencing tests (for example, tests for errors in the BRCA1 gene in breast cancer risk analysis). As of August 2013, the Archon X Prize for Genomics has been cancelled.
In March 2009, it was announced that Complete Genomics had signed a deal with the Broad Institute to sequence cancer patients' genomes, starting with five full genomes. In April 2009, Complete Genomics announced that it planned to sequence 1,000 full genomes between June 2009 and the end of the year, and that it planned to be able to sequence one million full genomes per year by 2013.
In June 2009, Illumina announced that they were launching their own Personal Full Genome Sequencing Service at a depth of 30× for $48,000 per genome. Jay Flatley, Illumina's President and CEO, stated that "during the next five years, perhaps markedly sooner," the price point for full genome sequencing will fall from $48,000 to under $1,000.
In August 2009, the founder of Helicos Biosciences, Stephen Quake, stated that using the company's Single Molecule Sequencer he sequenced his own full genome for less than $50,000. He stated that he expects the cost to decrease to the $1,000 range within the next two to three years.
In August 2009, Pacific Biosciences secured an additional $68 million in new financing, bringing its total capitalization to $188 million. Pacific Biosciences said it would use this additional investment to prepare for the launch of its full genome sequencing service in 2010. Complete Genomics followed by securing another $45 million in fourth-round venture funding during the same month. Complete Genomics also claimed that it would sequence 10,000 full genomes by the end of 2010.
In October 2009, IBM announced that they were also in the heated race to provide full genome sequencing for under $1,000, with their ultimate goal being able to provide their service for US$100 per genome. IBM's full genome sequencing technology, which uses nanopores, is known as the "DNA Transistor".
In November 2009, Complete Genomics published a peer-reviewed paper in Science demonstrating its ability to sequence a complete human genome for $1,700. If accurate, this would mean the cost of full genome sequencing had fallen, within a single year, from around $100,000 to $50,000 and then to $1,700. This consumables cost was clearly detailed in the Science paper. However, Complete Genomics had previously released statements that it was unable to follow through on. For example, the company stated it would officially launch and release its service during the "summer of 2009", provide a "$5,000" full genome sequencing service by the "summer of 2009", and "sequence 1,000 genomes between June 2009 and the end of 2009" – none of which, as of November 2009, had occurred. Complete Genomics launched its R&D human genome sequencing service in October 2008 and its commercial service in May 2010. The company sequenced 50 genomes in 2009. Since then, it has significantly increased the throughput of its genome sequencing factory and was able to sequence and analyze 300 genomes in Q3 2010.
Also in November 2009, Complete Genomics announced that it was beginning a large-scale human genome sequencing study of Huntington's disease (up to 100 genomes) with the Institute for Systems Biology.
In March 2010, researchers from the Medical College of Wisconsin announced the first successful use of genome-wide sequencing to change the treatment of a patient. This story was later retold in a Pulitzer Prize-winning article and touted as a significant accomplishment in the journal Nature and by the director of the NIH in presentations to Congress.
In June 2010, Illumina lowered the cost of its individual sequencing service to $19,500 from $48,000.
In May 2011, Illumina lowered the price of its Full Genome Sequencing service to $5,000 per human genome, or $4,000 if ordering 50 or more. Helicos Biosciences, Pacific Biosciences, Complete Genomics, Illumina, Sequenom, ION Torrent Systems, Halcyon Molecular, NABsys, IBM, and GE Global all appear to be going head to head in the race to commercialize full genome sequencing.
In January 2012, Life Technologies introduced a sequencer claimed to decode a human genome in one day for $1,000 although these claims have yet to be validated by customers on commercial devices. A UK firm spun out from Oxford University has come up with a DNA sequencing machine (the MinION) the size of a USB memory stick which costs $900 and can sequence smaller genomes (but not full human genomes in the first version). (While Oxford Nanopore stated in February that they would target having a sequencer in commercial early access by the end of 2012, this did not occur.)
In November 2012, Gene by Gene, Ltd started offering whole genome sequencing at an introductory price of $5,495 (with a minimum requirement of 3 samples per order). Currently the price is $6,995 and the minimum requirement has been removed.
A series of publications in 2012 showed the utility of SMRT sequencing from Pacific Biosciences in generating full genome sequences with de novo assembly. Some of these papers reported automated pipelines that could be used for generating these whole-genome assemblies. Other papers demonstrated how PacBio sequence data could be used to upgrade draft genomes to complete genomes.
Disruption to DNA array market
Full genome sequencing provides information on a genome that is orders of magnitude larger than that provided by the previous leader in genotyping technology, DNA arrays. For humans, DNA arrays currently provide genotypic information on up to one million genetic variants, while full genome sequencing will provide information on all six billion bases in the human genome, or several thousand times more data. Because of this, full genome sequencing is considered a disruptive innovation to the DNA array market: the accuracy of both ranges from 99.98% to 99.999% (in non-repetitive DNA regions), and sequencing's consumables cost of $5,000 per six billion base pairs is competitive (for some applications) with DNA arrays ($500 per one million base pairs). Agilent, an established DNA array manufacturer, is working on targeted (selective-region) genome sequencing technologies. It is thought that Affymetrix, the pioneer of array technology in the 1990s, has fallen behind due to significant corporate and stock turbulence and is currently not working on any known full genome sequencing approach. It is unknown what will happen to the DNA array market once full genome sequencing becomes commercially widespread, especially as companies and laboratories providing this disruptive technology start to realize economies of scale. It is postulated, however, that this new technology may significantly diminish the total market size for arrays and any other sequencing technology once it becomes commonplace for individuals and newborns to have their full genomes sequenced.
Sequencing versus analysis
In principle, full genome sequencing can provide raw data on all six billion nucleotides in an individual's DNA. However, it does not provide an analysis of what that information means or how it might be utilized in various clinical applications, such as in medicine to help prevent disease. As of 2010, the companies working on full genome sequencing offer clinically certified (CLIA) data (Illumina) and analytical services for interpreting full genome data (Knome), with only one institution offering both sequencing and analysis in a clinical setting. Nevertheless, there is plenty of room for researchers or companies to improve such analyses and make them useful to physicians and patients.
Diagnostic use and societal impact
Inexpensive, time-efficient full genome sequencing will be a major accomplishment not only for the field of genomics but for society at large, because, for the first time, individuals will be able to have their entire genomes sequenced. Utilizing this information, it is speculated that health care professionals, such as physicians and genetic counselors, will eventually be able to use genomic information to predict what diseases a person may develop and to attempt to either minimize the impact of that disease or avoid it altogether through personalized, preventive medicine. Full genome sequencing will allow health care professionals to analyze an individual's entire genome and therefore detect all disease-related genetic variants, regardless of a variant's prevalence or frequency. This will enable the rapidly emerging fields of predictive medicine and personalized medicine and will mark a significant step forward for clinical genetics. Full genome sequencing is clearly of great importance for research into the basis of genetic disease and has shown significant benefit to a subset of individuals with rare diseases in the clinical setting. Illumina's CEO, Jay Flatley, stated in February 2009 that "A complete DNA read-out for every newborn will be technically feasible and affordable in less than five years, promising a revolution in healthcare" and that "by 2019 it will have become routine to map infants' genes when they are born". This potential use of genome sequencing is highly controversial, as it runs counter to ethical norms for predictive genetic testing of asymptomatic minors that are well established in the fields of medical genetics and genetic counseling. The traditional guidelines for genetic testing have been developed over several decades, since it first became possible to test for genetic markers associated with disease, and before the advent of cost-effective, comprehensive genetic screening. Norms in the sciences, including the field of genetics, are subject to change and evolve over time; it is unknown whether the traditional norms practiced in medical genetics today will be altered by new technological advances such as full genome sequencing.
Currently available newborn screening for childhood diseases allows detection of rare disorders that can be prevented or better treated by early detection and intervention. Specific genetic tests are also available to determine an etiology when a child's symptoms appear to have a genetic basis. Full genome sequencing, in addition, has the potential to reveal a large amount of information (such as carrier status for autosomal recessive disorders, genetic risk factors for complex adult-onset diseases, and other predictive medical and non-medical information) that is currently not completely understood, may not be clinically useful to the child during childhood, and may not necessarily be wanted by the individual upon reaching adulthood. In addition to predicting disease risk in childhood, genetic testing may have other benefits (such as the discovery of non-paternity) but may also have potential downsides (genetic discrimination, loss of anonymity, and psychological impacts). Many publications on ethical guidelines for predictive genetic testing of asymptomatic minors may therefore have more to do with protecting minors and preserving the individual's privacy and autonomy to know or not know their genetic information than with the technology that makes the tests possible.
Due to recent cost reductions (see above), whole genome sequencing has become a realistic application in DNA diagnostics. In 2013, the 3Gb-TEST consortium obtained funding from the European Union to prepare the health care system for these innovations in DNA diagnostics. Quality assessment schemes, health technology assessments, and guidelines have to be in place. The 3Gb-TEST consortium has identified the analysis and interpretation of sequence data as the most complicated step in the diagnostic process. At the consortium meeting in Athens in September 2014, the consortium coined the word "genotranslation" for this crucial step, which leads to a so-called genoreport. Guidelines are needed to determine the required content of these reports.
The majority of ethicists insist that the privacy of individuals undergoing genetic testing must be protected under all circumstances. Data obtained from whole genome sequencing can reveal much information not only about the individual who is the source of the DNA but also, probabilistically, about the DNA sequences of close genetic relatives, including useful predictive information about relatives' present and future health risks. This raises important questions about what obligations, if any, are owed to the family members of individuals undergoing genetic testing. In Western/European society, tested individuals are usually encouraged to share important information on a genetic diagnosis with their close relatives, since the importance of the diagnosis for offspring and other close relatives is usually one of the reasons for seeking genetic testing in the first place. Nevertheless, Sijmons et al. (2011) also note that a major ethical dilemma can develop when a patient refuses to share information on a diagnosis of a serious, highly preventable genetic disorder for which relatives carrying the same disease mutation are at high risk. Under such circumstances, the clinician may suspect that the relatives would rather know of the diagnosis, and hence the clinician can face a conflict of interest with respect to patient–doctor confidentiality.
Another major privacy concern is the scientific need to put information on patients' genotypes and phenotypes into public scientific databases, such as locus-specific databases. Although only anonymized patient data are submitted to locus-specific databases, patients might still be identifiable by their relatives in the case of a rare disease or a rare missense mutation.
People with public genome sequences
The first nearly complete human genomes sequenced were J. Craig Venter's (American, at 7.5-fold average coverage) in 2007, followed by James Watson's (American, at 7.4-fold), a Han Chinese individual's (YH, at 36-fold), a Yoruban's from Nigeria (at 30-fold), a female leukemia patient's (at 33- and 14-fold coverage for tumor and normal tissues, respectively), and Seong-Jin Kim's (Korean, at 29-fold). The first two people to have their full genomes sequenced, James Watson and Craig Venter, both American scientists of European ancestry, were each found to share more alleles with the Korean scientist Seong-Jin Kim (1,824,482 and 1,736,340, respectively) than with each other (1,715,851). As of June 2012, there were 69 nearly complete human genomes publicly available. Steve Jobs also had his genome sequenced, for $100,000. Commercialization of full genome sequencing is at an early stage and growing rapidly.
- Kijk magazine, 01 January 2009
- Gilissen (Jul 2014). "Genome sequencing identifies major causes of severe intellectual disability". Nature 511 (7509): 344–7. doi:10.1038/nature13394. PMID 24896178.
- Nones K; Waddell N; Wayte N; Patch AM; Bailey P; Newell F; et al. (29 October 2014). "Genomic catastrophes frequently arise in esophageal adenocarcinoma and drive tumorigenesis". Nature Communications 5: 5224. doi:10.1038/ncomms6224. PMID 25351503.
- van El, CG; Cornel, MC; Borry, P; Hastings, RJ; Fellmann, F; Hodgson, SV; Howard, HC; Cambon-Thomsen, A; Knoppers, BM; Meijers-Heijboer, H; Scheffer, H; Tranebjaerg, L; Dondorp, W; de Wert, GM (June 2013). "Whole-genome sequencing in health care. Recommendations of the European Society of Human Genetics". European journal of human genetics : EJHG. 21 Suppl 1: S1–5. PMID 23819146.
- Mooney, Sean (Sep 2014). "Progress towards the integration of pharmacogenomics in practice". Human Genetics. doi:10.1007/s00439-014-1484-7. PMID 25238897.
- Fareed, M., Afzal, M (2013) "Single nucleotide polymorphism in genome-wide association of human population: A tool for broad spectrum service". Egyptian Journal of Medical Human Genetics 14: 123–134. http://dx.doi.org/10.1016/j.ejmhg.2012.08.001.
- Braslavsky, Ido et al. (2003). "Sequence information can be obtained from single DNA molecules". Proc Natl Acad Sci USA 100 (7): 3960–3964. doi:10.1073/pnas.0230489100. PMC 153030. PMID 12651960.
- Single-cell Sequencing Makes Strides in the Clinic with Cancer and PGD First Applications from Clinical Sequencing News. By Monica Heger. October 02, 2013
- Yurkiewicz, I. R.; Korf, B. R.; Lehmann, L. S. (2014). "Prenatal whole-genome sequencing--is the quest to know a fetus's future ethical?". New England Journal of Medicine 370 (3): 195–7. doi:10.1056/NEJMp1215536. PMID 24428465.
- Roach JC; Glusman G; Smit AF et al. (April 2010). "Analysis of genetic inheritance in a family quartet by whole-genome sequencing". Science 328 (5978): 636–9. doi:10.1126/science.1186802. PMC 3037280. PMID 20220176.
- Campbell CD; Chong JX; Malig M et al. (November 2012). "Estimating the human mutation rate using autozygosity in a founder population". Nat. Genet. 44 (11): 1277–81. doi:10.1038/ng.2418. PMC 3483378. PMID 23001126.
- Ye K; Beekman M; Lameijer EW; Zhang Y; Moed MH; van den Akker EB; Deelen J; Houwing-Duistermaat JJ; Kremer D; Anvar SY; Laros JF; Jones D; Raine K; Blackburne B; Potluri S; Long Q; Guryev V; van der Breggen R; Westendorp RG; 't Hoen PA; den Dunnen J; van Ommen GJ; Willemsen G; Pitts SJ; Cox DR; Ning Z; Boomsma DI; Slagboom PE (December 2013). "Aging as accelerated accumulation of somatic variants: whole-genome sequencing of centenarian and middle-aged monozygotic twin pairs". Twin Res Hum Genet 16 (6): 1026–32. doi:10.1017/thg.2013.73. PMID 24182360.
- Keightley PD (February 2012). "Rates and fitness consequences of new mutations in humans". Genetics 190 (2): 295–304. doi:10.1534/genetics.111.134668. PMC 3276617. PMID 22345605.
- Tuna M; Amos CI (November 2013). "Genomic sequencing in cancer". Cancer Lett. 340 (2): 161–70. doi:10.1016/j.canlet.2012.11.004. PMID 23178448.
- Staden R (June 1979). "A strategy of DNA sequencing employing computer programs". Nucleic Acids Res. 6 (7): 2601–10. doi:10.1093/nar/6.7.2601. PMC 327874. PMID 461197.
- Edwards, A; Caskey, T (1991). "Closure strategies for random DNA sequencing". Methods: A Companion to Methods in Enzymology 3 (1): 41–47. doi:10.1016/S1046-2023(05)80162-8.
- Edwards A; Voss H; Rice P; Civitello A; Stegemann J; Schwager C; Zimmermann J; Erfle H; Caskey CT; Ansorge W (April 1990). "Automated DNA sequencing of the human HPRT locus". Genomics 6 (4): 593–608. doi:10.1016/0888-7543(90)90493-E. PMID 2341149.
- Roach JC; Boysen C; Wang K; Hood L (March 1995). "Pairwise end sequencing: a unified approach to genomic mapping and sequencing". Genomics 26 (2): 345–53. doi:10.1016/0888-7543(95)80219-C. PMID 7601461.
- Fleischmann RD; Adams MD; White O; Clayton RA; Kirkness EF; Kerlavage AR; Bult CJ; Tomb JF; Dougherty BA; Merrick JM; et al. (July 1995). "Whole-genome random sequencing and assembly of Haemophilus influenzae Rd". Science 269 (5223): 496–512. Bibcode:1995Sci...269..496F. doi:10.1126/science.7542800. PMID 7542800.
- Adams, MD et al. (2000). "The genome sequence of Drosophila melanogaster". Science 287 (5461): 2185–95. Bibcode:2000Sci...287.2185A. doi:10.1126/science.287.5461.2185. PMID 10731132.
- Mukhopadhyay R (February 2009). "DNA sequencers: the next generation". Anal. Chem. 81 (5): 1736–40. doi:10.1021/ac802712u. PMID 19193124.
- "Harvard University and Oxford Nanopore Technologies Announce Licence Agreement to Advance Nanopore DNA Sequencing and other Applications". Nanotechwire. August 5, 2008. Retrieved 2009-02-23.
- "Illumina and Oxford Nanopore Enter into Broad Commercialization Agreement". Reuters. January 12, 2009. Retrieved 2009-02-23.
- "Single Molecule Real Time (SMRT) DNA Sequencing". Pacific Biosciences. Retrieved 2009-02-23.[dead link]
- "Complete Human Genome Sequencing Technology Overview" (PDF). Complete Genomics. 2009. Retrieved 2009-02-23.[dead link]
- "Definition of pyrosequencing from the Nature Reviews Genetics Glossary". Retrieved 2008-10-28.
- Ronaghi M; Uhlén M; Nyrén P (July 1998). "A sequencing method based on real-time pyrophosphate". Science 281 (5375): 363, 365. doi:10.1126/science.281.5375.363. PMID 9705713.
- Ronaghi M; Karamohamed S; Pettersson B; Uhlén M; Nyrén P (November 1996). "Real-time DNA sequencing using detection of pyrophosphate release". Anal. Biochem. 242 (1): 84–9. doi:10.1006/abio.1996.0432. PMID 8923969.
- Nyrén P (2007). "The history of pyrosequencing". Methods Mol. Biol. 373: 1–14. doi:10.1385/1-59745-377-3:1. ISBN 1-59745-377-3. PMID 17185753.
- "Article : Race to Cut Whole Genome Sequencing Costs Genetic Engineering & Biotechnology News — Biotechnology from Bench to Business". Genengnews.com. Retrieved 2009-02-23.
- "Whole Genome Sequencing Costs Continue to Drop". Eyeondna.com. Retrieved 2009-02-23.
- Harmon, Katherine (2010-06-28). "Genome Sequencing for the Rest of Us". Scientific American. Retrieved 2010-08-13.
- San Diego/Orange County Technology News. "Sequenom to Develop Third-Generation Nanopore-Based Single Molecule Sequencing Technology". Freshnews.com. Retrieved 2009-02-24.
- "Article : Whole Genome Sequencing in 24 Hours Genetic Engineering & Biotechnology News — Biotechnology from Bench to Business". Genengnews.com. Retrieved 2009-02-23.
- "Pacific Bio lifts the veil on its high-speed genome-sequencing effort". VentureBeat. Retrieved 2009-02-23.
- "Bio-IT World". Bio-IT World. 2008-10-06. Retrieved 2009-02-23.
- "With New Machine, Helicos Brings Personal Genome Sequencing A Step Closer". Xconomy. 2008-04-22. Retrieved 2011-01-28.
- "Whole genome sequencing costs continue to fall: $300 million in 2003, $1 million 2007, $60,000 now, $5000 by year end". Nextbigfuture.com. 2008-03-25. Retrieved 2011-01-28.
- "Han Cao's nanofluidic chip could cut DNA sequencing costs dramatically". Technology Review.
- John Carroll (2008-07-14). "Pacific Biosciences gains $100M for sequencing tech". FierceBiotech. Retrieved 2009-02-23.
- Sibley, Lisa (2009-02-08). "Complete Genomics brings radical reduction in cost". Silicon Valley / San Jose Business Journal (Sanjose.bizjournals.com). Retrieved 2009-02-23.
- Carlson, Rob (2007-01-02). "A Few Thoughts on Rapid Genome Sequencing and The Archon Prize — synthesis". Synthesis.cc. Retrieved 2009-02-23.
- "PRIZE Overview: Archon X PRIZE for Genomics".
- Bentley DR (December 2006). "Whole-genome re-sequencing". Curr. Opin. Genet. Dev. 16 (6): 545–552. doi:10.1016/j.gde.2006.10.009. PMID 17055251.
- Diamandis, Peter. "Outpaced by Innovation: Canceling an XPRIZE". Huffington Post.
- "SOLiD System — a next-gen DNA sequencing platform announced". Gizmag.com. 2007-10-27. Retrieved 2009-02-24.
- "The $1000 Genome: Coming Soon?". Dddmag.com. 2010-04-01. Retrieved 2011-01-28.
- "Complete Genomics, Broad Institute Forge Cancer Sequencing Collaboration". Bio-IT World. Retrieved 2011-01-28.
- Walsh, Fergus (2009-04-08). "Era of personalised medicine awaits". BBC News. Retrieved 2010-05-03.
- "Individual genome sequencing — Illumina, Inc.". Everygenome.com. Retrieved 2011-01-28.
- "Illumina launches personal genome sequencing service for $48,000 : Genetic Future". Scienceblogs.com. Retrieved 2011-01-28.
- "Illumina demos concept iPhone app for genetic data sharing". mobihealthnews. 2009-06-10. Retrieved 2011-01-28.
- Wade, Nicholas (2009-08-11). "Cost of Decoding a Genome Is Lowered". The New York Times. Retrieved 2010-05-03.
- Camille Ricketts (2009-08-13). "Pacific Biosciences takes $68M as genome sequencing becomes more competitive". VentureBeat. Retrieved 2011-01-28.
- "Pacific Biosciences Raises Additional $68 Million in Financing". FierceBiotech. 2009-08-12. Retrieved 2011-01-28.
- "Silicon Valley startup Complete Genomics promises low-cost DNA sequencing". San Jose Mercury News. Mercurynews.com. Retrieved 2011-01-28.
- "Silicon Valley Startup Complete Genomics Promises Low-Cost DNA Sequencing". Istockanalyst.com. 2009-08-24. Retrieved 2011-01-28.
- Jacquin Niles. "Explaining Sequencing | The Daily Scan". GenomeWeb. Retrieved 2011-01-28.
- "NHGRI Awards More than $50M for Low-Cost DNA Sequencing Tech Development". Genome Web. 2009.
- Markoff, John (October 5, 2009). "I.B.M. Joins Pursuit of $1,000 Personal Genome". The New York Times. Retrieved May 15, 2013.
- Shankland, Stephen (2009-10-06). "IBM Research jumps into genetic sequencing | Deep Tech". CNET News. News.cnet.com. Retrieved 2011-01-28.
- Drmanac R; Sparks AB; Callow MJ; et al. (2010). "Human genome sequencing using unchained base reads on self-assembling DNA nanoarrays". Science 327 (5961): 78–81.
- "Broad Institute to use Complete Genomics to sequence genomes of cancer patients : Genetic Future". Scienceblogs.com. Retrieved 2011-01-28.
- "Five Thousand Bucks for Your Genome". Technology Review. 2008-10-20. Retrieved 2009-02-23.
- "One In A Billion: A boy's life, a medical mystery".
- "US clinics quietly embrace whole-genome sequencing".
- Herper, Matthew (2010-06-03). "Your Genome is Coming". Forbes. Retrieved 2010-08-13.
- Lauerman, John (2009-02-05). "Complete Genomics Drives Down Cost of Genome Sequence to $5,000". Bloomberg.com. Retrieved 2011-01-28.
- "Illumina Announces $5,000 Genome Pricing".
- "Products". dnadtc.com. Retrieved 28 November 2012.
- "Gene By Gene Launches DNA DTC". The Wall Street Journal. 29 November 2012. Retrieved 29 November 2012.
- Vorhaus, Dan (29 November 2012). "DNA DTC: The Return of Direct to Consumer Whole Genome Sequencing". genomicslawreport.com. Retrieved 30 November 2012.
- "Finished bacterial genomes from shotgun sequence data" (PDF).
- Koren, Sergey (July 2012). "Hybrid error correction and de novo assembly of single-molecule sequencing reads". NatureBiotechnology 30 (7): 693–700. doi:10.1038/nbt.2280. PMID 22750884.
- "Mind the Gap:Upgrading Genomes with Pacific Biosciences RS Long-Read Sequencing Technology".
- "Illumina Sequencer Enables $1,000 Genome". News: Genomics & Proteomics. Gen. Eng. Biotechnol. News (paper) 34 (4). 15 February 2014. p. 18.
- "Genomics Core". Gladstone.ucsf.edu. Retrieved 2009-02-23.
- Nishida N; Koike A; Tajima A; Ogasawara Y; Ishibashi Y; Uehara Y; Inoue I; Tokunaga K (2008). "Evaluating the performance of Affymetrix SNP Array 6.0 platform with 400 Japanese individuals". BMC Genomics 9 (1): 431. doi:10.1186/1471-2164-9-431. PMC 2566316. PMID 18803882.
- Petrone, Justin. "Illumina, DeCode Build 1M SNP Chip; Q2 Launch to Coincide with Release of Affy's 6.0 SNP Array | BioArray News | Arrays". GenomeWeb. Retrieved 2009-02-23.
- "Agilent Technologies Announces Licensing Agreement with Broad Institute to Develop Genome-Partitioning Kits to Streamline Next-Generation Sequencing".[dead link]
- "Affymetrix stock slumps 30% on forecast". Sacramento Business Journal (Sacramento.bizjournals.com). 2008-07-25. Retrieved 2009-02-23.
- Bluis, John (2006-04-24). "Affymetrix Gets Chipped Again". Fool.com. Retrieved 2009-02-23.
- "The chips are down". Nature 444 (7117): 256–7. November 2006. Bibcode:2006Natur.444..256.. doi:10.1038/444256a. PMID 17108930.
- Coombs A (October 2008). "The sequencing shakeup". Nat. Biotechnol. 26 (10): 1109–12. doi:10.1038/nbt1008-1109. PMID 18846083.
- "Following Diagnostic Sequencing Success, MCW Creates Comprehensive Framework to Guide Future Cases".
- "The Wall Street Journal—Video". The Wall Street Journal.
- Ashley EA; Butte AJ; Wheeler MT; Chen R; Klein TE; Dewey FE; Dudley JT; Ormond KE; Pavlovic A; Morgan AA; Pushkarev D; Neff NF; Hudgins L; Gong L; Hodges LM; Berlin DS; Thorn CF; Sangkuhl K; Hebert JM; Woon M; Sagreiya H; Whaley R; Knowles JW; Chou MF; Thakuria JV; Rosenbaum AM; Zaranek AW; Church GM; Greely HT; Quake SR; Altman RB (May 2010). "Clinical assessment incorporating a personal genome". Lancet 375 (9725): 1525–35. doi:10.1016/S0140-6736(10)60452-7. PMC 2937184. PMID 20435227.
- "Genomes, Environments, Traits (GET) Evidence".
- Ng SB; Buckingham KJ; Lee C et al. (January 2010). "Exome sequencing identifies the cause of a mendelian disorder". Nat. Genet. 42 (1): 30–5. doi:10.1038/ng.499. PMC 2847889. PMID 19915526.
- Hannibal MC; Buckingham KJ; Ng SB et al. (July 2011). "Spectrum of MLL2 (ALR) mutations in 110 cases of Kabuki syndrome". Am. J. Med. Genet. A 155A (7): 1511–6. doi:10.1002/ajmg.a.34074. PMC 3121928. PMID 21671394.
- Worthey EA; Mayer AN; Syverson GD et al. (March 2011). "Making a definitive diagnosis: successful clinical application of whole exome sequencing in a child with intractable inflammatory bowel disease". Genet. Med. 13 (3): 255–62. doi:10.1097/GIM.0b013e3182088158. PMID 21173700.
- Goh V; Helbling D; Biank V; Jarzembowski J; Dimmock D (June 2011). "Next Generation Sequencing Facilitates The Diagnosis In A Child With Twinkle Mutations Causing Cholestatic Liver Failure". J Pediatr Gastroenterol Nutr 54 (2): 291–4. doi:10.1097/MPG.0b013e318227e53c. PMID 21681116.
- Henderson, Mark (2009-02-09). "Genetic mapping of babies by 2019 will transform preventive medicine". London: Times Online. Retrieved 2009-02-23.
- McCabe LL; McCabe ER (June 2001). "Postgenomic medicine. Presymptomatic testing for prediction and prevention". Clin Perinatol 28 (2): 425–34. doi:10.1016/S0095-5108(05)70094-4. PMID 11499063.
- Nelson RM; Botkjin JR; Kodish ED et al. (June 2001). "Ethical issues with genetic testing in pediatrics". Pediatrics 107 (6): 1451–5. doi:10.1542/peds.107.6.1451. PMID 11389275.
- Borry P; Fryns JP; Schotsmans P; Dierickx K (February 2006). "Carrier testing in minors: a systematic review of guidelines and position papers". Eur. J. Hum. Genet. 14 (2): 133–8. doi:10.1038/sj.ejhg.5201509. PMID 16267502.
- Borry P; Stultiens L; Nys H; Cassiman JJ; Dierickx K (November 2006). "Presymptomatic and predictive genetic testing in minors: a systematic review of guidelines and position papers". Clin. Genet. 70 (5): 374–81. doi:10.1111/j.1399-0004.2006.00692.x. PMID 17026616.
- Mesoudi A; Danielson P (August 2008). "Ethics, evolution and culture". Theory Biosci. 127 (3): 229–40. doi:10.1007/s12064-008-0027-y. PMID 18357481.
- Ehrlich PR; Levin SA (June 2005). "The evolution of norms". PLoS Biol. 3 (6): e194. doi:10.1371/journal.pbio.0030194. PMC 1149491. PMID 15941355.
- Mayer AN; Dimmock DP; Arca MJ et al. (March 2011). "A timely arrival for genomic medicine". Genet. Med. 13 (3): 195–6. doi:10.1097/GIM.0b013e3182095089. PMID 21169843.
- Ayday E; De Cristofaro E; Hubaux JP; Tsudik G (2015). "The Chills and Thrills of Whole Genome Sequencing". arXiv:1306.1264.
- Borry, P.; Evers-Kiebooms, G.; Cornel, MC; Clarke, A; Dierickx, K; Public Professional Policy Committee (PPPC) of the European Society of Human Genetics (ESHG) (2009). "Genetic testing in asymptomatic minors Background considerations towards ESHG Recommendations". Eur J Hum Genet 17 (6): 711–9. doi:10.1038/ejhg.2009.25. PMC 2947094. PMID 19277061.
- "Introducing diagnostic applications of ‘3Gb-testing’ in human genetics".
- "Beyond public health genomics: proposals from an international working group". Eur J Public Health 24: 877–879. Aug 2014. doi:10.1093/eurpub/cku142. PMID 25168910.
- "RD-Connect News: 18 July 2014, Issue 7".
- Sijmons, R.H.; Van Langen, I.M (2011). "A clinical perspective on ethical issues in genetic testing". Accountability in Research: Policies and Quality Assurance 18 (3): 148–162. doi:10.1080/08989621.2011.575033.
- McGuire, Amy, L; Caulfield, Timothy (2008). "Science and Society: Research ethics and the challenge of whole-genome sequencing". Nature Reviews: Genetics 9 (2): 152–156. doi:10.1038/nrg2302.
- Wade, Nicholas (September 4, 2007). "In the Genome Race, the Sequel Is Personal". New York Times. Retrieved February 22, 2009.
- Nature. "Access : All about Craig: the first 'full' genome sequence". Nature. Retrieved 2009-02-24.
- Levy S; Sutton G; Ng PC; Feuk L; Halpern AL; Walenz BP; Axelrod N; Huang J; Kirkness EF; Denisov G; Lin Y; MacDonald JR; Pang AW; Shago M; Stockwell TB; Tsiamouri A; Bafna V; Bansal V; Kravitz SA; Busam DA; Beeson KY; McIntosh TC; Remington KA; Abril JF; Gill J; Borman J; Rogers YH; Frazier ME; Scherer SW; Strausberg RL; Venter JC (September 2007). "The diploid genome sequence of an individual human". PLoS Biol. 5 (10): e254. doi:10.1371/journal.pbio.0050254. PMC 1964779. PMID 17803354.
- Wade, Nicholas (June 1, 2007). "DNA pioneer Watson gets own genome map". International Herald Tribune. Retrieved February 22, 2009.
- Wade, Nicholas (May 31, 2007). "Genome of DNA Pioneer Is Deciphered". New York Times. Retrieved February 21, 2009.
- Wheeler DA; Srinivasan M; Egholm M; Shen Y; Chen L; McGuire A; He W; Chen YJ; Makhijani V; Roth GT; Gomes X; Tartaro K; Niazi F; Turcotte CL; Irzyk GP; Lupski JR; Chinault C; Song XZ; Liu Y; Yuan Y; Nazareth L; Qin X; Muzny DM; Margulies M; Weinstock GM; Gibbs RA; Rothberg JM (2008). "The complete genome of an individual by massively parallel DNA sequencing". Nature 452 (7189): 872–6. Bibcode:2008Natur.452..872W. doi:10.1038/nature06884. PMID 18421352.
- Wang J; Wang W; Li R; Li Y; Tian G; Goodman L; et al. (2008). "The diploid genome sequence of an Asian individual". Nature 456 (7218): 60–65. Bibcode:2008Natur.456...60W. doi:10.1038/nature07484. PMC 2716080. PMID 18987735.
- Bentley DR; Balasubramanian S et al. (2008). "Accurate whole human genome sequencing using reversible terminator chemistry". Nature 456 (7218): 53–9. Bibcode:2008Natur.456...53B. doi:10.1038/nature07517. PMC 2581791. PMID 18987734.
- Ley TJ; Mardis ER; Ding L; Fulton B; McLellan MD; Chen K; Dooling D; Dunford-Shore BH; McGrath S; Hickenbotham M; Cook L; Abbott R; Larson DE; Koboldt DC; Pohl C; Smith S; Hawkins A; Abbott S; Locke D; Hillier LW; Miner T; Fulton L; Magrini V; Wylie T; Glasscock J; Conyers J; Sander N; Shi X; Osborne JR et al. (2008). "DNA sequencing of a cytogenetically normal acute myeloid leukaemia genome". Nature 456 (7218): 66–72. Bibcode:2008Natur.456...66L. doi:10.1038/nature07485. PMC 2603574. PMID 18987736.
- Ahn SM; Kim TH; Lee S; Kim D; Ghang H; Kim D; Kim BC; Kim SY; Kim WY; Kim C; Park D; Lee YS; Kim S; Reja R; Jho S; Kim CG; Cha JY; Kim KH; Lee B; Bhak J; Kim SJ (2009). "The first Korean genome sequence and analysis: Full genome sequencing for a socio-ethnic group". Genome Research 19 (9): 1622–9. doi:10.1101/gr.092197.109. PMC 2752128. PMID 19470904.
- Barbujani, Guido; Pigliucci, Massimo (2013). "Human races" (PDF). Current Biology 23 (5): R185–R187. doi:10.1016/j.cub.2013.01.024. ISSN 0960-9822. PMID 23473555. Retrieved 2 December 2013.
  Quote: "What does this imply for the existence of human races? Basically, that people with similar genetic features can be found in distant places, and that each local population contains a vast array of genotypes. Among the first genomes completely typed were those of James Watson and Craig Venter, two U.S. geneticists of European origin; they share more alleles with Seong-Jin Kim, a Korean scientist (1,824,482 and 1,736,340, respectively) than with each other (1,715,851)."
- "Complete Human Genome Sequencing Datasets to its Public Genomic Repository".
- Lohr, Steve (2011-10-20). "New Book Details Jobs's Fight Against Cancer". The New York Times.
- Archon X Prize for Genomics
- James Watson's Personal Genome Sequence
- AAAS/Science: Genome Sequencing Poster
- Outsmart Your Genes: Book that discusses full genome sequencing and its impact upon health care and society
- Whole genome linkage analysis
|
Implicit differentiation (sometimes called implicit variation) is a powerful technique for finding derivatives of equations that have not been solved for y. In this review article, we'll see how to use the method of implicit differentiation on AP Calculus problems.
What is Implicit Differentiation?
The usual differentiation rules, such as power rule, chain rule, and the others, apply only to functions of the form y = f(x). In other words, you have to start with a function f that is written only in terms of the variable x.
But what if you want to know the slope at a point on a circle whose equation is x² + y² = 16, for example?
Here, it would be possible to solve the equation for y and then proceed to take a derivative. However, that’s not really the best way!
For one thing, that square root will make finding the derivative more challenging.
For another, you really have two separate functions: one using plus (+), and the other using minus (−) in front of the radical! Which one should you use for finding the derivative? Well, that depends on whether you want the top or bottom semicircle.
An Easier Way
It would be so much simpler to just work with the original equation (x² + y² = 16 in this discussion) rather than to solve it out for y.
Well, we're in luck! The method of implicit differentiation does exactly that!
The Method of Implicit Differentiation
Given: an equation involving both x and y.
Goal: To find an expression for the derivative, dy/dx.
- Apply the derivative operation to both sides. This means that you should write d/dx before both sides of your equation. This is like an instruction to indicate that you’ll be doing derivatives in the next step.
- When taking derivatives, treat expressions of x alone as usual. However, if there are any expressions of y, then you must treat y as an unknown function of x. In particular, follow these additional rules:
Basically, whenever you take a derivative of a term involving y, then you must tack on dy/dx.
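For example, treating y as an (unknown) function of x, two standard consequences of the Chain Rule are:

$$\frac{d}{dx}\left(y^{n}\right) = n\,y^{\,n-1}\,\frac{dy}{dx}, \qquad \frac{d}{dx}\left(\sin y\right) = (\cos y)\,\frac{dy}{dx}.$$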
- Solve (algebraically) for the unknown derivative, dy/dx. At this point, every problem may be different, but there are a few common themes that I’ve found over the years that seem to apply fairly often.
- Group terms that have a factor of dy/dx on the left side of the equation. Those terms without the derivative should end up on the right side.
- Factor out by the common dy/dx.
- Divide by the expression in front of dy/dx.
Keep in mind, your final answer may involve both x and y.
Where Do those dy/dx Factors Come From?
This is something that had bugged me for a long time after first learning the method myself. Why do we have to tack on an “extra” dy/dx when taking derivatives involving y?
The big idea here is that y is actually a function. We just have no idea what that function is!
In a typical (or explicit) function, such as y = x³ – 3x + 2, y has already been isolated. In this example, we know that the function is f(x) = x³ – 3x + 2.
However, an implicit equation has not been solved for y. In fact, it may be impossible to do so!
So we do the next best thing, which is simply to use our rules of calculus, including the Chain Rule, whenever we encounter the unknown function y in our equation.
That’s where the extra dy/dx comes from. There is a hidden Chain Rule lurking in the background!
Example — Free Response
Consider the equation x² – 2xy + 4y² = 52.
(a) Write an expression for the slope of the curve at any point (x, y).
(b) Find the equations of the tangent lines to the curve at the points where x = 2.
(c) Find d²y/dx² at (0, √13).
(a) Slope and the Derivative
The keyword slope indicates that we must find a derivative. It would be way too difficult to solve the equation explicitly for y. So this is a job for implicit differentiation!
First, apply d/dx to both sides:

d/dx(x² – 2xy + 4y²) = d/dx(52)

The next few steps are just working out the derivatives. Perhaps the trickiest part is the term involving 2xy. Think of that as the product of 2x with an unknown function y = f(x). That way, it may make more sense why we must use the Product Rule for that term:

2x – (2y + 2x(dy/dx)) + 8y(dy/dx) = 0

Finally, solve for the unknown derivative algebraically. Don't forget to group, factor, and divide! Grouping and factoring gives (8y – 2x)(dy/dx) = 2y – 2x, so that dy/dx = (2y – 2x)/(8y – 2x).

We can factor out a common factor of 2 on top and bottom to get the final answer:

dy/dx = (y – x)/(4y – x)
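If you'd like to double-check this kind of computation, here is a minimal SymPy sketch. The symbols and equation simply mirror this example, and the printed form may differ cosmetically from ours:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")(x)              # treat y as an unknown function of x

curve = x**2 - 2*x*y + 4*y**2 - 52   # the curve, written as (expression) = 0

# Differentiate both sides with respect to x, then solve for dy/dx.
dydx = sp.solve(sp.diff(curve, x), sp.Derivative(y, x))[0]

print(sp.simplify(dydx))  # (x - y(x))/(x - 4*y(x)), equivalent to (y - x)/(4y - x)
```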
(b) Finding the Tangent Lines
There’s a clue in the word lines. You should expect there to be more than one answer.
First find the y-coordinate(s) that correspond to x = 2. We do this by plugging x = 2 into the original equation: (2)² – 2(2)y + 4y² = 52, which simplifies to 4y² – 4y – 48 = 0, or y² – y – 12 = (y – 4)(y + 3) = 0.

So there are two solutions: y = 4 and y = -3. This means that there are two different points at which we must find a tangent line. At each point, plug the (x, y) pair into dy/dx from part (a) to find the slope.
Point 1: (2, 4). Slope = (4 – 2)/(4(4) – 2) = 2/14 = 1/7.
Therefore, using point-slope form for the line, we get y = (1/7)(x – 2) + 4.
Point 2: (2, -3). Slope = (-3 – 2)/(4(-3) – 2) = -5/(-14) = 5/14.
Again using point-slope form, we find y = (5/14)(x – 2) – 3.
(c) Implicit Second Derivatives
To find the second derivative of an implicit function, you must take a derivative of the first derivative (of course!).
However, all of the same peculiar rules about expressions of y still apply.
Note that we are using the Quotient Rule to start things off:

d²y/dx² = [((dy/dx) – 1)(4y – x) – (y – x)(4(dy/dx) – 1)] / (4y – x)²

Now, the good news is that we don't have to simplify the expression any further. This is because they are looking for a numerical final answer. So we just have to plug in the given (x, y) coordinates.

But what about the two spots where dy/dx shows up?

Well, we already have an expression for dy/dx from part (a). Simply plug in your (x, y) coordinates to find dy/dx. At (0, √13), dy/dx = (√13 – 0)/(4√13 – 0) = 1/4…

…and now you can plug that into the second derivative expression as well:

d²y/dx² = [(1/4 – 1)(4√13) – (√13)(4(1/4) – 1)] / (4√13)² = –3√13/208
On the AP Calculus AB or BC exam, you will need to know the following.
- How to find the derivative of an implicitly defined function using the method of implicit differentiation (sometimes called implicit variation).
- What the derivative means in terms of slope and how to find tangent lines to a curve defined implicitly.
- How to compute second derivatives of implicitly-defined functions.
About Shaun Ault
Shaun earned his Ph. D. in mathematics from The Ohio State University in 2008 (Go Bucks!!). He received his BA in Mathematics with a minor in computer science from Oberlin College in 2002. In addition, Shaun earned a B. Mus. from the Oberlin Conservatory in the same year, with a major in music composition. Shaun still loves music -- almost as much as math! -- and he (thinks he) can play piano, guitar, and bass. Shaun has taught and tutored students in mathematics for about a decade, and hopes his experience can help you to succeed!
|
Formula of Algebra
Formula of Algebra: Algebra is a broad branch of mathematics with connections to number theory, geometry, and analysis. It is often defined as the study of mathematical symbols and the rules for manipulating those symbols. Algebra covers almost everything from solving elementary equations to the study of abstractions. It is also very important for competitive exams, where the basic operations of addition, subtraction, multiplication, and division are applied across a large number of topics.
In this article, we are providing you the detailed information on the formula of Algebra, algebra expressions, algebraic identities, algebra calculator, and algebra in maths. Read the detailed article for more information related to the formula of Algebra.
Types of Algebra
Algebra is divided into the following branches, based on the complexity of the expressions involved. The branches of algebra are:
- Pre-algebra: It helps in transforming real-life problems into an algebraic expression in mathematics.
- Elementary Algebra: Elementary algebra deals with solving algebraic expressions for a viable answer. In elementary algebra, simple variables like x and y are represented in the form of an equation.
- Abstract Algebra: Abstract algebra deals with the use of abstract concepts like groups, rings, and vectors rather than simple mathematical number systems.
- Universal Algebra: All the other mathematical forms involving trigonometry, calculus, and coordinate geometry involving algebraic expressions can be called universal algebra.
An algebraic expression is an expression made up of variables and constants, along with algebraic operations (addition, subtraction, etc.). There are three main types of algebraic expressions:
An algebraic expression having only one term is known as a monomial.
Examples of monomials include 2x⁴, 4xy, 6x, 8y, etc.
A binomial is an algebraic expression having two unlike terms.
Examples of binomials include 4xy + 9, xyz + y³, etc.
An expression with more than one term, with non-negative integer exponents on the variables, is known as a polynomial.
Examples of polynomial expressions include ax + by + ca, 2x³ + 3x + 6, etc.
Algebraic identities are helpful in simplifying and solving many expressions. These formulae involve squares and cubes of algebraic expressions and reduce the work of expanding them to a few easy steps:
- (a + b)² = a² + 2ab + b²
- (a – b)² = a² – 2ab + b²
- (a + b)(a – b) = a² – b²
- (a + b + c)² = a² + b² + c² + 2ab + 2bc + 2ca
- (a + b)³ = a³ + 3a²b + 3ab² + b³
- (a – b)³ = a³ – 3a²b + 3ab² – b³
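For instance, the first identity gives a quick way to square a number near a round value:

$$103^2 = (100 + 3)^2 = 100^2 + 2(100)(3) + 3^2 = 10000 + 600 + 9 = 10609$$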
There are four basic algebraic operations used in solving algebraic expressions. An algebra calculator mainly handles the following operations.
- Addition: In addition operation in algebra, two or more expressions are separated by a plus (+) sign between them.
- Subtraction: In subtraction operation in algebra, two or more expressions are separated by a minus (-) sign between them.
- Multiplication: In the multiplication operation in algebra, two or more expressions are separated by a multiplication (×) sign between them.
- Division: In division operation in algebra, two or more expressions are separated by a “/” sign between them.
Formula of Algebra: FAQ
Q. What are the 4 main operations of algebra?
Ans: The four main operations of algebra are addition, subtraction, multiplication, and division.
Q. What are the Different Branches or Types of Algebra?
Ans: The main branches of algebra are pre-algebra, elementary algebra, abstract algebra, and universal algebra (described above); other commonly named areas include linear algebra and commutative algebra.
|
DNA is the genetic information or blueprint of who and what we are, and how we operate. This genetic information is devoted to the synthesis of proteins, which are essential to our body. Proteins are created from templates of information called genes in our DNA.
– DNA molecules are long polymers of nucleotides
– the sequence of the nucleotides is used to select specific amino acids
– the sequence of nucleotides will determine the types and sequence of amino acids in a protein
– DNA is found in the nucleus; because of their large size, DNA molecules are unable to leave the nucleus
– protein synthesis occurs in the cytoplasm, so if a protein is to be synthesized, the genetic information in the nucleus must be able to leave the nucleus
– a copy of a single gene called mRNA (messenger ribonucleic acid) is made
– mRNA is much smaller than a DNA molecule because it is only one gene in length
– because of its small size, mRNA is able to take the genetic information out of the nucleus and into the cytoplasm
The genes on a DNA molecule are transcribed (copied) into molecules of mRNA.
If the DNA code looks like this: G-G-C-A-T-T
Then the mRNA would look like this: C-C-G-U-A-A (uracil replaces thymine)
With the genetic information responsible for creating substances now available on the mRNA strand, the mRNA moves out of the nucleus and away from the DNA towards the ribosomes.
mRNA and tRNA:
mRNA takes the genetic information to the ribosomes, the sites of protein synthesis. At the ribosome, the mRNA strand is translated by molecules of tRNA. tRNA molecules "read" (translate) the mRNA sequence, 3 nucleotides at a time. These three-nucleotide units of the mRNA molecule are referred to as codons.
Each tRNA molecule contains 3 nucleotides, called an anticodon, which complements a codon on the mRNA strand. Each codon in turn codes for a particular amino acid.
Attached to each tRNA molecule are specific amino acids. As they translate the mRNA sequence, the proper amino acids for a particular protein are being brought to the ribosome.
Amino acids are attached together at the ribosome to form a polypeptide chain which will go on to form a protein molecule.
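To make the flow concrete, here is a minimal Python sketch of transcription and translation. The tiny codon table covers only the few codons needed for this example (a real table has 64 entries), and the function names are illustrative, not a standard library API:

```python
# Minimal sketch of transcription (DNA -> mRNA) and translation (mRNA -> amino acids).
CODON_TABLE = {          # a tiny subset of the real 64-codon table
    "AUG": "Met",        # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "CCG": "Pro",
    "UAA": "STOP",       # stop codon
}

def transcribe(dna: str) -> str:
    """Build the complementary mRNA strand (U pairs with A in place of T)."""
    complement = {"G": "C", "C": "G", "A": "U", "T": "A"}
    return "".join(complement[base] for base in dna)

def translate(mrna: str) -> list:
    """Read the mRNA three bases (one codon) at a time, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino == "STOP":
            break
        peptide.append(amino)
    return peptide

mrna = transcribe("GGCATT")      # -> "CCGUAA", matching the example above
print(translate(mrna))           # -> ['Pro']; UAA is a stop codon, so translation ends
```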
Ribosomes and Rough Endoplasmic Reticulum (RER):
Ribosomes are the site of protein synthesis. They are found floating freely in the cytoplasm and on the outer surface of the rough endoplasmic reticulum (RER).
Proteins to be used by the cell are synthesized on the free floating ribosomes.
Proteins to be released by the cell (e.g., hormones, enzymes) are synthesized at the RER because the RER will transport proteins to the Golgi apparatus by way of a vesicle.
The Golgi Apparatus:
The Golgi apparatus processes polypeptide molecules to form proteins. The finished products are 'pinched off' the Golgi apparatus and are transported by a vesicle to the cell membrane. When this vesicle reaches the cell membrane, it binds to a receptor on the surface and excretes the protein, which can then carry out its function.
|
How Can Gravity Be Simulated In An Orbiting Space Station?
In orbit, there is none of the apparent gravity we are used to on Earth: an orbiting station is in continuous free fall, so people and objects inside it float. This weightlessness can cause disorientation and may adversely affect our bodies over long periods of time. To combat this, a space station can, in principle, create artificial gravity by spinning.
As the station spins, the floor at the rim pushes inward on objects and people (a centripetal force); in the rotating frame, this is felt as an outward pull toward the rim. This produces a feeling of gravity like what we feel on Earth. The artificial gravity's strength depends on the rotation speed and the distance from the station's axis.
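The underlying relationship is the centripetal-acceleration formula a = ω²r. Here is a small Python sketch; the 100-meter radius is just an assumed example value:

```python
import math

g = 9.81          # desired artificial gravity at the rim, m/s^2 (Earth-normal)
radius = 100.0    # assumed distance from the spin axis to the rim, in metres

# Centripetal acceleration is a = omega**2 * r, so the angular speed
# needed for a given gravity level is omega = sqrt(a / r).
omega = math.sqrt(g / radius)           # radians per second
rpm = omega * 60 / (2 * math.pi)        # revolutions per minute

print(f"omega = {omega:.3f} rad/s, about {rpm:.2f} rpm")
# For a 100 m radius, roughly 3 rpm produces 1 g at the rim.
```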
Overview of Current Space Stations
Space stations are human-made structures that orbit the Earth in outer space. They are research labs for studying the effects of working and living in space, as well as platforms for space exploration and research.
International Space Station (ISS)
The International Space Station (ISS) is a collaborative project of five space organizations: NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (Europe), and CSA (Canada). It is the largest artificial object in orbit, with a length of about 109 meters and a mass of approximately 420,000 kg.
The ISS is a microgravity laboratory as well as a space-based research lab where astronauts from various countries perform experiments in various areas like biology and physics, astronomy, and human physiological research. It also functions as a place to launch space exploration, as it is a base for launching and maintaining spacecraft.
Tiangong Space Station

The Tiangong Space Station can accommodate at least three space travelers simultaneously and serves as a laboratory for research in the life sciences, material sciences, and other areas. It is also intended as a base for China's upcoming crewed space exploration missions, including missions to the Moon.
Mir Space Station
Even though it is no longer operational, the Mir Space Station was a significant step forward in human space exploration. Launched by the Soviet Union in 1986 and operated until 2001, it was at the time the longest-running space station in history.
Mir served as an experimental laboratory to study space travel's effects on humans and as a launching point for space exploration. It was also the first Russian station to host an American astronaut: Norman Thagard of the United States visited the station in 1995.
The Chinese Space Station (CSS), as Tiangong is also known, will serve as a research platform for astronomy, space medicine, and biotechnology. It will also function as a staging point for China's future crewed missions to the Moon and other destinations in space.
Russian Orbital Segment (ROS)
The Russian Orbital Segment (ROS) is part of the ISS, which is run by Russia. It comprises several modules launched by Russia and serves diverse purposes, like living areas, lab space, and storage.
The ROS is an operational base for Russian astronauts to conduct research and perform routine ISS maintenance. It also supports Russia's exploration endeavors, including its plans for lunar bases.
Methods Of Simulating Gravity In Space
One of the most significant problems facing human space exploration is the lack of gravity. Its absence affects our bodies in various ways, causing bone and muscle loss, cardiovascular issues, and spatial disorientation. Simulating gravity in space is thus essential for long-duration missions such as an expedition to Mars.
Centrifugation simulates gravity by rotating a spacecraft or habitat around a central axis. The centrifugal effect produced by the rotation creates the impression of gravity for those inside.
One example of a research centrifuge is the European Space Agency's (ESA) Centrifuge Accommodating Mass Payloads for Terrestrial and Space Test (CAMPO) facility, a centrifuge unit that simulates gravity levels ranging from 0.1 to 20 times that of Earth. It is used to study the effects of gravity on different materials and on the human body.
Tethering simulates gravity by connecting two spacecraft or habitats with a cable. In a gravity-gradient tether, the craft at one end orbits at a different altitude than its counterpart; the difference in gravitational pull and orbital motion between the two altitudes keeps the cable taut, and the resulting tension is felt as a slight pull by the occupants.
One demonstration of tethering in space was the Tethered Satellite System (TSS), a joint venture of NASA and the Italian Space Agency, in which a satellite was deployed from the Space Shuttle on a long tether.
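A gravity-gradient tether of this kind produces only a very slight apparent gravity; tether concepts aimed at appreciable artificial gravity usually spin the connected pair about its common centre of mass instead, with the cable tension supplying the centripetal force. A sketch of that spinning case, using entirely hypothetical masses and tether length:

```python
import math

def tethered_gravity(m1_kg: float, m2_kg: float, tether_m: float, rpm: float):
    """Apparent gravity (in g) at each end of a spinning tethered pair.
    Each craft circles the common centre of mass; the tether tension
    supplies the centripetal force that occupants feel as weight."""
    omega = rpm * 2 * math.pi / 60           # rad/s
    r1 = tether_m * m2_kg / (m1_kg + m2_kg)  # distance of craft 1 from barycentre
    r2 = tether_m - r1
    return omega**2 * r1 / 9.81, omega**2 * r2 / 9.81

# Hypothetical: a 20 t habitat tethered to a 20 t counterweight, 900 m apart
g1, g2 = tethered_gravity(20_000, 20_000, 900, rpm=1.4)
print(f"{g1:.2f} g and {g2:.2f} g")  # ~0.99 g at each end
```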
Magnetic fields can mimic gravity by exerting a magnetic force on susceptible materials. This force can be manipulated to create various levels of apparent gravity and directed to give orientation cues to a spacecraft's occupants or environment.
One instance of magnetic fields used to mimic gravity would be the Maglev centrifuge. The Maglev centrifuge utilizes magnetic levitation to turn an object or habitat around a central axis, creating a gravity simulation.
Water immersion is a technique for mimicking not gravity but its absence: people are immersed in a large pool, and the buoyancy of the water offsets their weight, producing the floating sensation experienced in orbit.
One place where water immersion is used this way is the Neutral Buoyancy Laboratory (NBL) at NASA's Johnson Space Center. The NBL is a huge pool of water used to simulate the microgravity conditions of space, and astronauts train there for spacewalks in preparation for their missions to the International Space Station.
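The physics of neutral buoyancy is a simple force balance: a body floats in equilibrium when its weight equals the buoyant force, i.e. when its total mass equals the mass of water it displaces. A toy calculation, with made-up figures for a suited astronaut:

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water

def ballast_for_neutral_buoyancy(mass_kg: float, volume_m3: float) -> float:
    """Extra mass (kg) needed so a submerged body neither sinks nor floats.
    Neutral buoyancy requires total mass = water density * displaced volume.
    A negative result means weight must be removed or floats added instead."""
    return RHO_WATER * volume_m3 - mass_kg

# Hypothetical suited astronaut: 200 kg total mass displacing 0.25 m^3 of water
print(ballast_for_neutral_buoyancy(200, 0.25))  # 50.0 kg of ballast
```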
The Coriolis Effect
The Coriolis effect is a phenomenon that arises from the Earth's rotation. It influences the movement of ocean currents, air masses, and other objects that travel over large distances across the Earth's surface.
What Is The Coriolis Effect?
The Coriolis effect is an apparent deflection of moving objects, such as water or air, caused by the rotation of the Earth. The phenomenon is named for the French mathematician Gaspard-Gustave Coriolis, who described it in 1835.
Because the Coriolis effect is due to the rotation of the Earth, moving objects appear to deflect to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. The effect is strongest at the poles and vanishes at the equator.
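The size of the deflection follows from the Coriolis acceleration a = 2ωv·sin(latitude), where ω is Earth's rotation rate and v the object's speed. A rough Python estimate (ignoring air resistance and all other forces) for a long-range projectile:

```python
import math

OMEGA_EARTH = 7.292e-5  # Earth's rotation rate, rad/s

def coriolis_deflection(speed_ms: float, latitude_deg: float, seconds: float) -> float:
    """Approximate sideways drift (m) of an object moving horizontally at
    constant speed: a = 2*omega*v*sin(latitude), drift ~ 0.5*a*t^2.
    The drift is rightward in the Northern Hemisphere."""
    a = 2 * OMEGA_EARTH * speed_ms * math.sin(math.radians(latitude_deg))
    return 0.5 * a * seconds**2

# A shell fired at 800 m/s over a 40 s flight at 45 deg N drifts roughly:
print(f"{coriolis_deflection(800, 45, 40):.1f} m")  # ~66 m
```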
The Coriolis effect influences the movement of air masses, deflecting winds to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. Combined with the mid-latitude circulation known as the "Ferrel cell", this deflection produces the prevailing westerly winds of the mid-latitudes.
The Coriolis effect can also trigger the development of cyclones and anticyclones. In the Northern Hemisphere, cyclones rotate counterclockwise, whereas anticyclones rotate clockwise; in the Southern Hemisphere, the directions are reversed.
What Does the Coriolis Effect Do to Ocean Currents?
The Coriolis effect also influences ocean circulation, deflecting surface currents to the right of the wind in the Northern Hemisphere and to the left in the Southern Hemisphere. This depth-dependent deflection is known as an Ekman spiral, and it helps drive the large-scale movement of ocean waters.
The Coriolis effect also leads to the development of ocean gyres: huge circular current systems in the oceans' subtropical regions. In the Northern Hemisphere, gyres rotate clockwise; in the Southern Hemisphere, they rotate counterclockwise.
The Coriolis effect also influences the trajectory of projectiles such as missiles or bullets. In the Northern Hemisphere, projectiles appear to deflect toward the right, whereas in the Southern Hemisphere they appear to deflect toward the left. This must be taken into account by missile operators and marksmen when shooting at distant targets.
Designing A Space Station With Artificial Gravity
The design of a space station that incorporates artificial gravity is among the most difficult tasks in human space exploration. Artificial gravity is vital for the wellbeing of astronauts on long-term missions and for conducting experiments that require gravity-like conditions.
The most popular proposed method of simulating gravity in space is rotation: spinning a spacecraft around a central axis produces a force that the people aboard perceive as gravity.
To design a space station with artificial gravity, the structure's dimensions and rotation speed must be carefully chosen. The rotation speed should be high enough to produce a useful level of apparent gravity for the inhabitants, but not so high that it triggers nausea or other health problems.
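These two constraints can be turned into a sizing rule: for a chosen maximum spin rate, the relation a = ω²r fixes the minimum radius. The sketch below uses a commonly cited comfort limit of about 2 rpm, which should be treated as an assumption rather than a hard threshold:

```python
import math

def min_radius_for_gravity(max_rpm: float, target_g: float = 1.0) -> float:
    """Smallest rotation radius (m) giving target_g at or below max_rpm.
    From a = omega^2 * r  =>  r = a / omega^2."""
    omega = max_rpm * 2 * math.pi / 60  # rad/s
    return target_g * 9.81 / omega**2

# If crews tolerate at most ~2 rpm (an assumed comfort figure):
print(f"{min_radius_for_gravity(2.0):.0f} m radius needed for 1 g")  # ~224 m
```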
The living space layout is an important factor to consider for the space station with artificial gravity. The environment must be planned to accommodate the centrifugal force and create a comfortable living space for the inhabitants.
One way to design such a habitat is to use a rotating cylinder or ring. The inhabitants would live on the inside surface of the cylinder or ring, where the rotation provides the apparent gravity. The outside of the cylinder or ring could house solar panels, radiators, or other equipment.
Life Support Systems
The life support system is essential to ensuring the wellbeing and health of inhabitants of space stations with artificial gravity. Therefore, the life support systems must be designed to function in a centrifugal setting and offer occupants a safe and comfortable living space.
Life support systems need to include water and air recycling, food production and storage, and waste disposal. These mechanisms should be able to function for long periods without resupply, because resupply missions to a rotating station with artificial gravity would be much more difficult than those to a station such as the International Space Station.
Crew training is essential for a space station with artificial gravity. Astronauts need to be taught how to work in a rotating environment and how to adapt to the effects of artificial gravity on their bodies.
Crew training should also cover emergency procedures for both the habitat and the life support systems. Astronauts should be able to react quickly and effectively to emergencies that could arise in a spacecraft using artificial gravity.
Centrifugal Force Simulation
Centrifugal force simulation is a technique for simulating gravity in space by rotating a spacecraft or habitat around a central axis. The force produced by the rotation gives the occupants the impression of gravity.
The Science Of Centrifugal Force
Centrifugal force is experienced by objects moving in a circular path. It is directed away from the center of rotation, and its magnitude grows with the rotation rate and with the distance from that center.
In centrifugal force simulation, the spacecraft or habitat is rotated around a central axis, creating the impression of gravity for its inhabitants. The apparent gravity is proportional to the square of the rotation rate and to the radius of rotation.
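One consequence of a = ω²r is that apparent gravity varies with distance from the axis, so in a small centrifuge an occupant's head and feet feel noticeably different "gravity". A quick sketch of that gradient, assuming a 1.8 m tall occupant standing on the rim:

```python
def head_to_foot_gravity_ratio(radius_m: float, height_m: float = 1.8) -> float:
    """In a rotating habitat, apparent gravity a = omega^2 * r falls off
    linearly toward the spin axis, so a standing occupant's head (at
    radius R - h) feels less 'gravity' than their feet (at radius R)."""
    return (radius_m - height_m) / radius_m

for R in (5, 50, 250):
    print(f"R = {R:>3} m: head feels {head_to_foot_gravity_ratio(R):.0%} of foot gravity")
# R=5 m -> 64%, R=50 m -> 96%, R=250 m -> 99%
```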
Applications of Centrifugal Force Simulation
Centrifugal force simulation has many potential uses in the space industry. One of the biggest is the creation of space habitats suitable for long-duration missions, where it could provide a comfortable artificial-gravity environment for the occupants.
It can also be employed in research on how gravity affects the human body. The effects of long-duration spaceflight on the body are a major concern in space exploration, and centrifugal force simulation offers a way to investigate them in a controlled setting.
The Challenges Of Centrifugal Force Simulation
Simulating centrifugal force is a complicated and difficult undertaking. The rotation rate and radius must be chosen carefully to create an environment that is comfortable for those aboard, and the layout of the spacecraft or habitat must be designed around the direction of the apparent gravity.
Future Of Centrifugal Force Simulation
Centrifugal force simulation is a vital technology for space exploration, and its development is tied to the future of crewed spaceflight. As we explore further and extend our presence in the solar system, it will likely play a growing role in creating safe and comfortable living spaces for astronauts.
Innovative technologies, such as modern materials and propulsion systems, may allow us to create more efficient and effective centrifugal force simulation devices in the near future. These developments would enable us to explore space more effectively and expand our understanding of the universe.
Linear Acceleration Simulation
A linear acceleration simulator models gravity in space by accelerating a spacecraft along a straight line. The acceleration creates a force that the passengers perceive as gravity.
Linear acceleration is a change in velocity along a straight line. In a linear acceleration simulation, a habitat or spacecraft accelerates along a straight path, and the resulting inertial force is perceived as gravity by those on board; the force on each occupant is proportional to the craft's acceleration and the occupant's mass.
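This same proportionality shows why linear acceleration is impractical for long missions: sustaining a constant 1 g requires a continuously growing velocity change. A back-of-the-envelope Python estimate:

```python
def delta_v_for_sustained_thrust(accel_g: float, hours: float) -> float:
    """Velocity change (km/s) a spacecraft must supply to hold a constant
    linear acceleration, which is what occupants would feel as gravity."""
    return accel_g * 9.81 * hours * 3600 / 1000

# Holding just 1 g for a single day already demands an enormous delta-v:
print(f"{delta_v_for_sustained_thrust(1.0, 24):.0f} km/s")  # ~848 km/s
```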
Applications Of Linear Acceleration Simulation
Linear acceleration simulation is a candidate method for long-term missions: a steadily accelerating habitat would give its occupants a uniform apparent gravity directed opposite to the thrust. A quite different technique, gravity-gradient stabilization, uses gravity not to simulate weight but to hold a spacecraft's orientation: the craft is aligned so that its long axis points along the local vertical, and the gravity gradient then keeps it in that attitude.
Applications Of Gravity-Gradient Stabilization
Gravity-gradient stabilization has a broad field of application in space research. One of the most significant applications is the stabilization of Earth-observation satellites. These satellites examine the Earth's surface and atmosphere, and it is vital that they remain correctly oriented to collect accurate data.
Gravity-gradient stabilization also plays a role in stabilizing interplanetary spacecraft, which must maintain a stable orientation to communicate with Earth and carry out scientific experiments.
The Challenges Of Gravity-Gradient Stabilization
Gravity-gradient stabilization is a simple and passive method for stabilizing the orientation of a satellite or spacecraft; however, it has challenges. One is that the technique is only effective when the spacecraft is in a low enough orbit that the gravitational gradient is strong enough to ensure stability.
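How weak that gradient is can be estimated from the Clohessy-Wiltshire (orbital relative motion) equations, in which the differential acceleration at a radial offset d from the centre of mass in a circular orbit is 3n²d, with n² = μ/r³. A quick estimate for low Earth orbit:

```python
MU_EARTH = 3.986e14  # Earth's gravitational parameter, m^3/s^2

def gravity_gradient_accel(orbit_radius_m: float, offset_m: float) -> float:
    """Differential (tidal) acceleration, m/s^2, between a spacecraft's centre
    of mass and a point offset radially by offset_m, in a circular orbit.
    From the Clohessy-Wiltshire equations the radial term is 3*n^2*d,
    where n^2 = mu / r^3 is the squared mean motion."""
    n_squared = MU_EARTH / orbit_radius_m**3
    return 3 * n_squared * offset_m

# 10 m offset in a ~400 km LEO (r ~ 6,771 km): a few millionths of a g,
# enough to orient a long spacecraft, far too weak for artificial gravity.
print(f"{gravity_gradient_accel(6.771e6, 10):.2e} m/s^2")  # ~3.9e-05
```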
Another problem is that the technique isn't suitable for all satellites or spacecraft. For instance, satellites with unusual shapes or very uniform mass distributions might not be able to take advantage of gravity-gradient stabilization.
Future Of Gravity-Gradient Stabilization
Gravity-gradient stabilization is an essential technology for space exploration, and its future depends on the future of crewed and uncrewed spaceflight. As we expand our exploration and increase our footprint in the solar system, gravity-gradient stabilization is expected to play a greater role in stabilizing spacecraft and satellites.
Innovative technologies, such as sophisticated sensors and control systems, may increase the precision and effectiveness of gravity-gradient stabilization in the near future. These developments would enable us to explore space more effectively and increase our understanding of the universe.
Human Adaptation To Artificial Gravity
Artificial gravity is an essential technology that can be used for long-duration spaceflight. Therefore, knowing how our bodies adapt to artificial gravity is crucial when designing space structures and spacecraft.
The Science Of Human Adaptation To Artificial Gravity
Artificial gravity is generated by reproducing the gravitational loading felt on Earth, using methods such as rotation or linear acceleration.
The human body is highly adaptable and can adjust to artificial gravity over time. This adaptation involves changes in the body's musculoskeletal, cardiovascular, and sensory systems.
Cardiovascular adaptation is one of the major adjustments the human body makes under artificial gravity. In microgravity, the cardiovascular system isn't subject to the same degree of strain it experiences on Earth, which can result in a decline in cardiovascular fitness.
In artificial gravity, the cardiovascular system experiences Earth-like stress levels, which helps maintain cardiovascular fitness.
In microgravity, the body’s muscular system isn’t exposed to the same strain it experiences on Earth. This could lead to a decline in muscle and bone mass.
In artificial gravity-based environments, the muscles of the body are under a greater amount of stress. This helps to maintain muscle and bone mass.
Human bodies depend on several sensory systems to maintain balance and orientation on Earth. In microgravity, these sensory systems aren't subject to the same stimulation they receive on Earth, which can cause disorientation or motion sickness.
In artificial-gravity environments, the body's sensory systems receive more Earth-like stimulation, which may reduce the risk of motion sickness and disorientation.
Challenges Of Human Adaptation To Artificial Gravity
One of the major issues with human adaptation to artificial gravity is the possibility of motion sickness and other health problems. Adjusting to artificial gravity can be uncomfortable and disorienting, and occupants of a spacecraft could experience motion sickness if the rotation is not properly managed.
Another concern is the possibility of longer-term health effects. Unfortunately, the long-term consequences of exposure to artificial gravity aren’t yet completely understood, and further research is required in this field.
The Future of Human Adaptation to Artificial Gravity
Human adaptation to artificial gravity is an important area of study for space exploration, and its future is tightly tied to the development of crewed spaceflight. As we explore further and increase our presence in the solar system, artificial gravity will likely play a more significant role in establishing a safe and comfortable environment for astronauts.
The development of new technologies, including modern sensors and control systems, could enhance the accuracy and effectiveness of artificial gravity simulations in the near future. These advancements would allow us to explore space more efficiently and increase our knowledge of the universe.
Challenges And Limitations Of Simulating Gravity In Space
Simulated gravity is an important element of human space exploration, because gravity plays a crucial role in the functioning of the human body, and an extended absence of gravity can be detrimental to astronauts' health. In recent years, a variety of techniques have been devised to mimic gravitational forces in space, but each comes with issues and limitations.
Inadequacy Of Understanding Of Gravity
The primary obstacle is our incomplete understanding of gravity itself. Gravity is a fundamental force of nature, but it is not yet fully understood. Our current knowledge is built on the theory of general relativity developed by Albert Einstein, which has not been reconciled with quantum mechanics, the theory describing how particles behave at the subatomic level. A comprehensive theory that explains the behavior of gravity at both the microscopic and macroscopic scales is still lacking.
The second challenge is cost. Building a device that can simulate gravity in space is expensive: constructing a spacecraft that rotates to simulate gravity would consume enormous resources. The International Space Station (ISS), although it does not rotate, illustrates the scale of the problem: the cost of building and maintaining the ISS, along with its regular resupply missions, is very high.
The third obstacle is technical limitations. Simulating gravity in space requires sophisticated technology that can withstand the extreme conditions of space. For example, building a spacecraft with a rotating section calls for advanced engineering and materials that can stand up to the rotational stresses. Furthermore, the technology used for simulating gravity is limited by the available resources, including power and room.
A fourth issue is the health risk posed by prolonged exposure to weightlessness. The human body is adapted to life on Earth, where gravity plays an important role in a myriad of bodily processes. Prolonged exposure to weightlessness can have serious consequences for astronauts' wellbeing, such as bone and muscle loss, cardiovascular issues, and vision impairment. It is therefore crucial to devise effective ways of simulating gravity in space to reduce these health risks.
The fifth issue is the limited duration of the simulation. The most practical method of simulating gravity in space is a spinning spacecraft, but the time span of the simulation is limited by the craft's available resources: a station can sustain its operations only for a certain period, after which it needs to replenish its supplies.
How does a rotating space station simulate gravity?
Astronauts lie down on a short-radius centrifuge for a brief spin; the system uses centripetal acceleration to simulate a gravitational field of 1 g, the same as that on Earth.
Is there simulated gravity on the space station?
In science fiction, space stations rotate to simulate gravity. The International Space Station does not spin, because low-gravity research is conducted on board; microgravity is precisely what makes it a unique laboratory.
Could a spinning space station simulate gravity?
The station may theoretically be set up to imitate Earth’s gravitational acceleration (9.81 m/s2), enabling extended human stays in space without the negative effects of microgravity.
How artificial gravity can be produced on board an orbiting space station?
A centripetal force can be used to produce artificial gravity. Any object moving in a circular path needs a centripetal force directed toward the centre of the turn. In a spinning space station, the normal force exerted by the hull on the occupants supplies that centripetal force.
How big would a space station have to be to simulate gravity?
Astronauts may experience the same level of gravity as they would on Earth if a chamber on the space station rotated quickly enough. The space needed would only be about 2.6 metres (8.5 feet) across.
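That figure can be sanity-checked with a = ω²r: a chamber about 2.6 m across places its rim roughly 1.3 m from the axis, and the required spin rate comes out very fast, which is why short-radius centrifuges are typically used only for brief sessions:

```python
import math

def rpm_for_one_g(radius_m: float) -> float:
    """Spin rate needed for 1 g of apparent gravity at the given radius."""
    return math.sqrt(9.81 / radius_m) * 60 / (2 * math.pi)

# A chamber ~2.6 m across puts the rim ~1.3 m from the axis:
print(f"{rpm_for_one_g(1.3):.0f} rpm")  # ~26 rpm
```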
How do they simulate zero gravity for astronauts?
To replicate zero gravity, lunar gravity, and Martian gravity, gravity offload devices support part of the weight of a person or piece of equipment using an overhead, crane-like mechanism. One large apparatus of this kind, the Active Response Gravity Offload System (ARGOS), is housed at NASA's Johnson Space Center.
Nonlinear Systems of Equations and Problem-Solving
As with linear systems, a nonlinear system of equations (and conics) can be solved graphically and algebraically for all of its variables.
Solve nonlinear systems of equations graphically and algebraically
Subtracting one equation from another is an effective means for solving linear systems, but it often is difficult to use in nonlinear systems, in which the terms of two equations may be very different.
Substitution of a variable into another equation is usually the best method for solving nonlinear systems of equations.
Nonlinear systems of equations may have one or multiple solutions.
A conic section (or just conic) is a curve obtained as the intersection of a cone (more precisely, a right circular conical surface) with a plane. In analytic geometry, a conic may be defined as a plane algebraic curve of degree 2. There are a number of other geometric definitions possible. The four types of conic section are the hyperbola, the parabola, the ellipse, and the circle, though the circle can be considered to be a special case of the ellipse.
The type of a conic corresponds to its eccentricity. Conics with eccentricity less than $1$ are ellipses, conics with eccentricity equal to $1$ are parabolas, and conics with eccentricity greater than $1$ are hyperbolas. In the focus-directrix definition of a conic, the circle is a limiting case of the ellipse with an eccentricity of $0$. In modern geometry, certain degenerate cases, such as the union of two lines, are included as conics as well.
System of Equations
In a system of equations, two or more relationships are stated among variables. A system is generally solvable when there are at least as many independent equations as variables. If each equation is graphed, the solution for the system can be found at the point(s) where all the curves meet. The solution can be found either by inspection of a graph, typically with graphing or plotting software, or algebraically.
Nonlinear systems of equations, such as those involving conic sections, include at least one equation that is nonlinear. A nonlinear equation here is one possessing at least one term raised to a power of 2 or more; when graphed, such equations produce curves.
Since at least one function has curvature, it is possible for nonlinear systems of equations to contain multiple solutions. As with linear systems of equations, substitution can be used to solve nonlinear systems for one variable and then the other.
Solving nonlinear systems of equations algebraically is similar to doing the same for linear systems of equations. However, subtraction of one equation from another can become impractical if the two equations have different terms, which is more commonly the case in nonlinear systems.
Consider, for example, the following system of equations:
$y = x^2 \qquad (1)$
$y = x + 6 \qquad (2)$
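Working the example through by substitution, the method recommended above: substituting equation (1) into equation (2) gives $x^2 = x + 6$, i.e. $x^2 - x - 6 = 0$, which factors as $(x - 3)(x + 2) = 0$. So $x = 3$ or $x = -2$, and equation (1) then gives $y = 9$ or $y = 4$ respectively. The system therefore has two solutions, $(3, 9)$ and $(-2, 4)$: the two points where the parabola and the line intersect.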
Evolutionary history of plants
The evolution of plants has resulted in widely varying levels of complexity, from the earliest algal mats, through bryophytes, lycopods, ferns to the complex gymnosperms and angiosperms of today. While many of the groups which appeared earlier continue to thrive, as exemplified by algal dominance in marine environments, more recently derived groups have also displaced previously ecologically dominant ones, e.g. the ascendance of flowering plants over gymnosperms in terrestrial environments.
In the Ordovician, around 470 million years ago, the first land plants appeared. These began to diversify in the Late Silurian, and the results of their diversification are displayed in remarkable detail in an early Devonian fossil assemblage from the Rhynie chert. This chert preserved early plants in cellular detail, petrified in volcanic springs.
By the middle of the Devonian, most of the features recognised in plants today are present, including roots and leaves. Late Devonian free-sporing plants such as Archaeopteris had secondary vascular tissue that produced wood and had formed forests of tall trees. Also by the late Devonian, Elkinsia, an early seed fern, had evolved seeds. Evolutionary innovation continued into the Carboniferous and is still ongoing today. Most plant groups were relatively unscathed by the Permo-Triassic extinction event, although the structures of communities changed. This may have set the scene for the appearance of the flowering plants in the Triassic, and their later diversification in the Cretaceous and Paleogene. The latest major group of plants to evolve were the grasses, which became important in the mid-Paleogene. The grasses, as well as many other groups, evolved new mechanisms of metabolism to survive the low CO2 and warm, dry conditions of the tropics in the geologically recent past.
- 1 Colonization of land
- 2 Evolution of life cycles
- 3 Evolution of morphology
- 3.1 Xylem
- 3.2 Leaves
- 3.3 Tree form
- 3.4 Roots
- 3.5 Seeds
- 3.6 Flowers
- 3.7 Factors influencing floral diversity
- 4 Evolution of photosynthetic pathways
- 5 Evolution of secondary metabolism
- 6 Mechanisms and players in evolution of plant form
- 7 Coevolution of Plants and Fungal Parasites
- 8 See also
- 9 References
Colonization of land
Land plants evolved from a group of green algae, perhaps as early as the Ordovician; some molecular estimates place their origin considerably earlier still. Their closest living relatives are the charophytes, specifically Charales; assuming that the Charales' habit has changed little since the divergence of lineages, this means that the land plants evolved from a branched, filamentous alga dwelling in shallow fresh water, perhaps at the edge of seasonally desiccating pools. The alga would have had a haplontic life cycle: it would only very briefly have had paired chromosomes (the diploid condition) when the egg and sperm first fused to form a zygote; this would have immediately divided by meiosis to produce cells with half the number of unpaired chromosomes (the haploid condition). Co-operative interactions with fungi may have helped early plants adapt to the stresses of the terrestrial realm.
Plants were not the first photosynthesisers on land; weathering rates suggest that photosynthetic organisms were already living on the land, and microbial fossils have been found in ancient freshwater lake deposits, but the carbon isotope record suggests that they were too scarce to impact the atmospheric composition until much later. These organisms, although phylogenetically diverse, were probably small and simple, forming little more than an "algal scum".
The first evidence of plants on land comes from spores of mid-Ordovician age (early Llanvirn). These spores, known as cryptospores, were produced either singly (monads), in pairs (dyads) or groups of four (tetrads), and their microstructure resembles that of modern liverwort spores, suggesting they share an equivalent grade of organisation. Their walls contain sporopollenin, further evidence of an embryophytic affinity. It could be that atmospheric 'poisoning' prevented eukaryotes from colonising the land prior to this, or it could simply have taken a great deal of time for the necessary complexity to evolve.
Trilete spores similar to those of vascular plants appear soon afterwards, in Upper Ordovician rocks. Depending on exactly when the tetrad splits, each of the four spores may bear a "trilete mark", a Y-shape, reflecting the points at which each cell squashed up against its neighbours. However, this requires that the spore walls be sturdy and resistant at an early stage. This resistance is closely associated with having a desiccation-resistant outer wall, a trait only of use when spores must survive out of water. Indeed, even those embryophytes that have returned to the water lack a resistant wall and thus don't bear trilete marks. A close examination of algal spores shows that none bear trilete marks, either because their walls are not resistant enough, or, in those rare cases where they are, because the spores disperse before being squashed enough to develop the mark, or because they don't fit into a tetrahedral tetrad.
The earliest megafossils of land plants were thalloid organisms, which dwelt in fluvial wetlands and are found to have covered most of an early Silurian flood plain. They could only survive when the land was waterlogged. There were also microbial mats.
Once plants had reached the land, there were two approaches to dealing with desiccation. The bryophytes avoid it or give in to it, restricting their ranges to moist settings, or drying out and putting their metabolism "on hold" until more water arrives. Tracheophytes resist desiccation: they all bear a waterproof outer cuticle layer wherever they are exposed to air (as do some bryophytes), to reduce water loss, but, since a total covering would cut them off from CO2 in the atmosphere, they rapidly evolved stomata: small openings to allow, and control the rate of, gas exchange. Tracheophytes also developed vascular tissue to aid in the movement of water within the organisms (see below), and moved away from a gametophyte dominated life cycle (see below). Vascular tissue also facilitated upright growth without the support of water and paved the way for the evolution of larger plants on land.
The establishment of a land-based flora caused increased accumulation of oxygen in the atmosphere, as the plants produced oxygen as a waste product. When this concentration rose above 13%, wildfires became possible. This is first recorded in the early Silurian fossil record by charcoalified plant fossils. Apart from a controversial gap in the Late Devonian, charcoal is present ever since.
Charcoalification is an important taphonomic mode. Wildfire drives off the volatile compounds, leaving only a shell of pure carbon. This is not a viable food source for herbivores or detritivores, so is prone to preservation; it is also robust, so can withstand pressure and display exquisite, sometimes sub-cellular, detail.
Evolution of life cycles
All multicellular plants have a life cycle comprising two generations or phases. One is termed the gametophyte, has a single set of chromosomes (denoted 1N), and produces gametes (sperm and eggs). The other is termed the sporophyte, has paired chromosomes (denoted 2N), and produces spores. The gametophyte and sporophyte may appear identical – homomorphy – or may be very different – heteromorphy.
The pattern in plant evolution has been a shift from homomorphy to heteromorphy. The algal ancestors of land plants were almost certainly haplobiontic, being haploid for all their life cycles, with a unicellular zygote providing the 2N stage. All land plants (i.e. embryophytes) are diplobiontic – that is, both the haploid and diploid stages are multicellular. Two trends are apparent: bryophytes (liverworts, mosses and hornworts) have developed the gametophyte, with the sporophyte becoming almost entirely dependent on it; vascular plants have developed the sporophyte, with the gametophyte being particularly reduced in the seed plants.
It has been proposed that the basis for the emergence of the diploid phase of the life cycle as the dominant phase, is that diploidy allows masking of the expression of deleterious mutations through genetic complementation. Thus if one of the parental genomes in the diploid cells contains mutations leading to defects in one or more gene products, these deficiencies could be compensated for by the other parental genome (which nevertheless may have its own defects in other genes). As the diploid phase was becoming predominant, the masking effect likely allowed genome size, and hence information content, to increase without the constraint of having to improve accuracy of replication. The opportunity to increase information content at low cost is advantageous because it permits new adaptations to be encoded. This view has been challenged, with evidence showing that selection is no more effective in the haploid than in the diploid phases of the lifecycle of mosses and angiosperms.
There are two competing theories to explain the appearance of a diplobiontic lifecycle.
The interpolation theory (also known as the antithetic or intercalary theory) holds that the sporophyte phase was a fundamentally new invention, caused by the mitotic division of a freshly germinated zygote, continuing until meiosis produces spores. This theory implies that the first sporophytes bore a very different morphology to the gametophyte they depended on. This seems to fit well with what is known of the bryophytes, in which a vegetative thalloid gametophyte is parasitised by simple sporophytes, which often comprise no more than a sporangium on a stalk. Increasing complexity of the ancestrally simple sporophyte, including the eventual acquisition of photosynthetic cells, would free it from its dependence on a gametophyte, as seen in some hornworts (Anthoceros), and eventually result in the sporophyte developing organs and vascular tissue, and becoming the dominant phase, as in the tracheophytes (vascular plants). This theory may be supported by observations that smaller Cooksonia individuals must have been supported by a gametophyte generation. The observed appearance of larger axial sizes, with room for photosynthetic tissue and thus self-sustainability, provides a possible route for the development of a self-sufficient sporophyte phase.
The alternative hypothesis is termed the transformation theory (or homologous theory). This posits that the sporophyte arose through a delay in the occurrence of meiosis after the zygote germinated. Since the same genetic material would be employed, the haploid and diploid phases would look the same. This explains the behaviour of some algae, which produce alternating phases of identical sporophytes and gametophytes. Subsequent adaptation to the desiccating land environment, which makes sexual reproduction difficult, would result in the simplification of the sexually active gametophyte, and elaboration of the sporophyte phase to better disperse the waterproof spores. The tissue of sporophytes and gametophytes preserved in the Rhynie chert is of similar complexity, which is taken to support this hypothesis.
Evolution of morphology
To photosynthesise, plants must absorb CO2 from the atmosphere. However, this comes at a price: while stomata are open to allow CO2 to enter, water can evaporate. Water is lost much faster than CO2 is absorbed, so plants need to replace it, and have developed systems to transport water from the moist soil to the site of photosynthesis. Early plants sucked water between the walls of their cells, then evolved the ability to control water loss (and CO2 acquisition) through the use of a waterproof cuticle perforated by stomata. Specialised water transport tissues soon evolved in the form of hydroids, tracheids, then secondary xylem, followed by an endodermis and ultimately vessels.
The high CO2 levels of Silurian-Devonian times, when plants were first colonising land, meant that the need for water was relatively low. As CO2 was withdrawn from the atmosphere by plants, more water was lost in its capture, and more elegant transport mechanisms evolved. As water transport mechanisms, and waterproof cuticles, evolved, plants could survive without being continually covered by a film of water. This transition from poikilohydry to homoiohydry opened up new potential for colonisation. Plants then needed a robust internal structure that contained long narrow channels for transporting water from the soil to all the different parts of the above-soil plant, especially to the parts where photosynthesis occurred.
During the Silurian, CO2 was readily available, so little water needed to be expended to acquire it. By the end of the Carboniferous, when CO2 levels had lowered to something approaching today's, around 17 times more water was lost per unit of CO2 uptake. However, even in these "easy" early days, water was at a premium, and had to be transported to parts of the plant from the wet soil to avoid desiccation. This early water transport took advantage of the cohesion-tension mechanism inherent in water. Water has a tendency to diffuse to areas that are drier, and this process is accelerated when water can be wicked along a fabric with small spaces. In small passages, such as those between the plant cell walls (or in tracheids), a column of water behaves like rubber: when molecules evaporate from one end, they literally pull the molecules behind them along the channels. Therefore, transpiration alone provided the driving force for water transport in early plants. However, without dedicated transport vessels, the cohesion-tension mechanism cannot transport water more than about 2 cm, severely limiting the size of the earliest plants. This process demands a steady supply of water from one end to maintain the chains; to avoid exhausting it, plants developed a waterproof cuticle. Early cuticle may not have had pores but did not cover the entire plant surface, so that gas exchange could continue. However, dehydration at times was inevitable; early plants coped with this by storing plenty of water between their cell walls and, when necessary, riding out the tough times by putting life "on hold" until more water was supplied.
To be free from the constraints of small size and constant moisture that the parenchymatic transport system inflicted, plants needed a more efficient water transport system. During the early Silurian, they developed specialized cells, which were lignified (or bore similar chemical compounds) to avoid implosion; this process coincided with cell death, allowing their innards to be emptied and water to be passed through them. These wider, dead, empty cells were a million times more conductive than the inter-cell method, giving the potential for transport over longer distances, and higher CO2 diffusion rates.
The earliest macrofossils to bear water-transport tubes are Silurian plants placed in the genus Cooksonia. The early Devonian pretracheophytes Aglaophyton and Horneophyton have structures very similar to the hydroids of modern mosses.
Plants continued to innovate new ways of reducing the resistance to flow within their cells, thereby increasing the efficiency of their water transport. Thickened bands on the walls of tubes, apparent from the early Silurian onwards, are adaptations to ease the flow of water. Banded tubes, as well as tubes with pitted ornamentation on their walls, were lignified and, when they form single celled conduits, are referred to as tracheids. These, the "next generation" of transport cell design, have a more rigid structure than hydroids, preventing their collapse at higher levels of water tension. Tracheids may have a single evolutionary origin, possibly within the hornworts, uniting all tracheophytes (but they may have evolved more than once).
Water transport requires regulation, and dynamic control is provided by stomata. By adjusting the amount of gas exchange, they can restrict the amount of water lost through transpiration. This is an important role where water supply is not constant, and indeed stomata appear to have evolved before tracheids, being present in the non-vascular hornworts.
An endodermis probably evolved during the Silu-Devonian, but the first fossil evidence for such a structure is Carboniferous. This structure in the roots covers the water transport tissue and regulates ion exchange (and prevents unwanted pathogens etc. from entering the water transport system). The endodermis can also provide an upwards pressure, forcing water out of the roots when transpiration is not enough of a driver.
Once plants had evolved this level of controlled water transport, they were truly homoiohydric, able to extract water from their environment through root-like organs rather than relying on a film of surface moisture, enabling them to grow to much greater size. As a result of their independence from their surroundings, they lost their ability to survive desiccation – a costly trait to retain.
During the Devonian, maximum xylem diameter increased with time, with the minimum diameter remaining pretty constant. By the Middle Devonian, the tracheid diameter of some plant lineages had plateaued. Wider tracheids allow water to be transported faster, but the overall transport rate depends also on the overall cross-sectional area of the xylem bundle itself. The increase in vascular bundle thickness further seems to correlate with the width of plant axes, and plant height; it is also closely related to the appearance of leaves and increased stomatal density, both of which would increase the demand for water.
While wider tracheids with robust walls make it possible to achieve higher water transport pressures, this increases the problem of cavitation. Cavitation occurs when a bubble of air forms within a vessel, breaking the bonds between chains of water molecules and preventing them from pulling more water up with their cohesive tension. A tracheid, once cavitated, cannot have its embolism removed and return to service (except in a few advanced angiosperms that have developed a mechanism of doing so). Therefore, it is well worth plants' while to avoid cavitation occurring. For this reason, pits in tracheid walls have very small diameters, to prevent air entering and allowing bubbles to nucleate. Freeze-thaw cycles are a major cause of cavitation. Damage to a tracheid's wall almost inevitably leads to air leaking in and cavitation, hence the importance of many tracheids working in parallel.
Ultimately, however, some cavitation incidents will occur, so plants have evolved a range of mechanisms to contain the damage. Small pits link adjacent conduits to allow fluid to flow between them, but not air – although ironically these pits, which prevent the spread of embolisms, are also a major cause of them. These pitted surfaces further reduce the flow of water through the xylem by as much as 30%. Conifers, by the Jurassic, developed an ingenious improvement, using valve-like structures to isolate cavitated elements. These torus-margo structures have a blob floating in the middle of a donut; when one side depressurises the blob is sucked into the torus and blocks further flow. Other plants simply accept cavitation; for instance, oaks grow a ring of wide vessels at the start of each spring, none of which survive the winter frosts. Maples use root pressure each spring to force sap upwards from the roots, squeezing out any air bubbles.
Growing to height also employed another trait of tracheids – the support offered by their lignified walls. Defunct tracheids were retained to form a strong, woody stem, produced in most instances by a secondary xylem. However, in early plants, tracheids were too mechanically vulnerable, and retained a central position, with a layer of tough sclerenchyma on the outer rim of the stems. Even when tracheids do take a structural role, they are supported by sclerenchymatic tissue.
Tracheids end with walls, which impose a great deal of resistance on flow; vessel members have perforated end walls, and are arranged in series to operate as if they were one continuous vessel. The function of end walls, which were the default state in the Devonian, was probably to avoid embolisms. An embolism is where an air bubble is created in a tracheid. This may happen as a result of freezing, or by gases dissolving out of solution. Once an embolism is formed, it usually cannot be removed (but see later); the affected cell cannot pull water up, and is rendered useless.
End walls excluded, the tracheids of prevascular plants were able to operate under the same hydraulic conductivity as those of the first vascular plant, Cooksonia.
The size of tracheids is limited as they comprise a single cell; this limits their length, which in turn limits their maximum useful diameter to 80 μm. Conductivity grows with the fourth power of diameter, so increased diameter has huge rewards; vessel elements, consisting of a number of cells, joined at their ends, overcame this limit and allowed larger tubes to form, reaching diameters of up to 500 μm, and lengths of up to 10 m.
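The fourth-power scaling is the Hagen-Poiseuille law for laminar flow through a tube, under which per-conduit conductivity goes as d⁴; but for a fixed area of wood, fewer wide conduits fit, so the net gain goes as d². A small Python check using the 80 μm and 500 μm figures above. Note that the idealized per-area gain (~39×) is lower than the roughly hundredfold figure quoted below, plausibly because the formula ignores the extra resistance of tracheid end walls:

```python
# Hagen-Poiseuille: per-conduit conductivity scales as diameter^4.
per_tube = (500 / 80) ** 4   # one 500 um vessel vs one 80 um tracheid
# For the SAME total cross-sectional area of wood, (500/80)^2 fewer
# wide tubes fit, so the net throughput gain scales as diameter^2:
per_area = (500 / 80) ** 2
print(f"per conduit: ~{per_tube:.0f}x, per unit wood area: ~{per_area:.0f}x")
# per conduit: ~1526x, per unit wood area: ~39x
```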
Vessels first evolved during the dry, low CO2 periods of the Late Permian, in the horsetails, ferns and Selaginellales independently, and later appeared in the mid Cretaceous in angiosperms and gnetophytes. Vessels allow the same cross-sectional area of wood to transport around a hundred times more water than tracheids! This allowed plants to fill more of their stems with structural fibres, and also opened a new niche to vines, which could transport water without being as thick as the tree they grew on. Despite these advantages, tracheid-based wood is a lot lighter, thus cheaper to make, as vessels need to be much more reinforced to avoid cavitation.
Leaves today are, in almost all instances, an adaptation to increase the amount of sunlight that can be captured for photosynthesis. Leaves certainly evolved more than once, and probably originated as spiny outgrowths to protect early plants from herbivory.
Leaves are the primary photosynthetic organs of a plant. Based on their structure, they are classified into two types: microphylls, which lack complex venation patterns, and megaphylls, which are large and have complex venation. It has been proposed that these structures arose independently. Megaphylls, according to Walter Zimmerman's telome theory, have evolved from plants that showed a three-dimensional branching architecture, through three transformations: overtopping, which led to the lateral position typical of leaves; planation, which involved the formation of a planar architecture; and webbing or fusion, which united the planar branches, leading to the formation of a proper leaf lamina. All three steps happened multiple times in the evolution of today's leaves.
It is widely believed that the telome theory is well supported by fossil evidence. However, Wolfgang Hagemann questioned it for morphological and ecological reasons and proposed an alternative theory. Whereas according to the telome theory the most primitive land plants have a three-dimensional branching system of radially symmetrical axes (telomes), according to Hagemann's alternative the opposite is proposed: the most primitive land plants that gave rise to vascular plants were flat, thalloid, leaf-like, without axes, somewhat like a liverwort or fern prothallus. Axes such as stems and roots evolved later as new organs. Rolf Sattler proposed an overarching process-oriented view that leaves some limited room for both the telome theory and Hagemann's alternative and in addition takes into consideration the whole continuum between dorsiventral (flat) and radial (cylindrical) structures that can be found in fossil and living land plants. This view is supported by research in molecular genetics. Thus, James (2009) concluded that "it is now widely accepted that... radiality [characteristic of axes such as stems] and dorsiventrality [characteristic of leaves] are but extremes of a continuous spectrum. In fact, it is simply the timing of the KNOX gene expression!"
From the point of view of the telome theory, it has been proposed that before the evolution of leaves, plants had the photosynthetic apparatus on the stems. Today's megaphyll leaves probably became commonplace some 360 million years ago, about 40 million years after the simple leafless plants had colonized the land in the Early Devonian. This spread has been linked to the fall in atmospheric carbon dioxide concentrations in the Late Paleozoic era, associated with a rise in the density of stomata on the leaf surface, which allowed better transpiration rates and gas exchange. Large leaves with few stomata would have overheated in the sun, but an increased stomatal density allowed for a better-cooled leaf, making its spread feasible.
The rhyniophytes of the Rhynie chert comprised nothing more than slender, unornamented axes. The early to middle Devonian trimerophytes may be considered leafy. This group of vascular plants are recognisable by their masses of terminal sporangia, which adorn the ends of axes which may bifurcate or trifurcate. Some organisms, such as Psilophyton, bore enations. These are small, spiny outgrowths of the stem, lacking their own vascular supply.
Around the same time, the zosterophyllophytes were becoming important. This group is recognisable by their kidney-shaped sporangia, which grew on short lateral branches close to the main axes. They sometimes branched in a distinctive H-shape. The majority of this group bore pronounced spines on their axes. However, none of these had a vascular trace, and the first evidence of vascularised enations occurs in the Rhynie genus Asteroxylon. The spines of Asteroxylon had a primitive vascular supply – at the very least, leaf traces could be seen departing from the central protostele towards each individual "leaf". A fossil known as Baragwanathia appears in the fossil record slightly earlier, in the Late Silurian. In this organism, these leaf traces continue into the leaf to form their mid-vein. One theory, the "enation theory", holds that the leaves developed by outgrowths of the protostele connecting with existing enations, but it is also possible that microphylls evolved by a branching axis forming "webbing".
Asteroxylon and Baragwanathia are widely regarded as primitive lycopods. The lycopods are still extant today, familiar as the quillwort Isoetes and the club mosses. Lycopods bear distinctive microphylls – leaves with a single vascular trace. Microphylls could grow to some size – the Lepidodendrales boasted microphylls over a meter in length – but almost all just bear the one vascular bundle. (An exception is the branching Selaginella).
The more familiar leaves, megaphylls, are thought to have separate origins; indeed, they appeared four times independently, in the ferns, horsetails, progymnosperms, and seed plants. They appear to have originated from dichotomising branches, which first overlapped (or "overtopped") one another, and eventually developed "webbing" and evolved into gradually more leaf-like structures. So megaphylls, by this "telome theory", are composed of a group of webbed branches; hence the "leaf gap" left where the leaf's vascular bundle leaves that of the main branch resembles two axes splitting. In each of the four groups to evolve megaphylls, their leaves first evolved during the Late Devonian to Early Carboniferous, diversifying rapidly until the designs settled down in the mid Carboniferous.
The cessation of further diversification can be attributed to developmental constraints, but why did it take so long for leaves to evolve in the first place? Plants had been on the land for at least 50 million years before megaphylls became significant. However, small, rare mesophylls are known from the early Devonian genus Eophyllophyton – so development could not have been a barrier to their appearance. The best explanation so far incorporates observations that atmospheric CO2 was declining rapidly during this time – falling by around 90% during the Devonian. This corresponded with an increase in stomatal density by 100 times. Stomata allow water to evaporate from leaves, which causes them to curve. It appears that the low stomatal density in the early Devonian meant that evaporation was limited, and leaves would overheat if they grew to any size. The stomatal density could not increase, as the primitive steles and limited root systems would not be able to supply water quickly enough to match the rate of transpiration.
Secondary evolution can also disguise the true evolutionary origin of some leaves. Some genera of ferns display complex leaves which are attached to the pseudostele by an outgrowth of the vascular bundle, leaving no leaf gap. Further, horsetail (Equisetum) leaves bear only a single vein, and appear for all the world to be microphyllous; however, both the fossil record and molecular evidence indicate that their forebears bore leaves with complex venation, and the current state is a result of secondary simplification.
Deciduous trees deal with another disadvantage to having leaves. The popular belief that plants shed their leaves when the days get too short is misguided; evergreens prospered in the Arctic circle during the most recent greenhouse earth. The generally accepted reason for shedding leaves during winter is to cope with the weather – the force of wind and weight of snow are much more comfortably weathered without leaves to increase surface area. Seasonal leaf loss has evolved independently several times and is exhibited in the ginkgoales, pinophyta and angiosperms. Leaf loss may also have arisen as a response to pressure from insects; it may have been less costly to lose leaves entirely during the winter or dry season than to continue investing resources in their repair.
Factors influencing leaf architectures
Various physical and physiological factors, such as light intensity, humidity, temperature, and wind speed, are thought to have influenced the evolution of leaf shape and size. Tall trees rarely have large leaves, because large leaves obstruct the wind and can eventually be torn by it. Similarly, trees that grow in temperate or taiga regions have pointed leaves, presumably to prevent the nucleation of ice on the leaf surface and to reduce water loss through transpiration. Herbivory, not only by large mammals but also by small insects, has been implicated as a driving force in leaf evolution; an example is provided by plants of the genus Aciphylla, commonly found in New Zealand. The now-extinct moas fed upon these plants, and the leaves bear spines that probably functioned to discourage the moas from feeding on them. Other members of Aciphylla, which did not co-exist with the moas, lack these spines.
At the genetic level, developmental studies have shown that repression of the KNOX genes is required for initiation of the leaf primordium. This is brought about by ARP genes, which encode transcription factors. Genes of this type have been found in many plants studied to date, and the mechanism – repression of KNOX genes in leaf primordia – seems to be quite conserved. Interestingly, expression of KNOX genes in leaves produces complex leaves. It is speculated that ARP function arose quite early in vascular plant evolution, because members of the primitive group of lycophytes also have a functionally similar gene. Other players with a conserved role in defining leaf primordia are the phytohormones auxin, gibberellin and cytokinin.
One interesting feature of a plant is its phyllotaxy, the arrangement of leaves on the plant body. Leaves are arranged so that the plant can harvest light maximally under the given constraints, and one might therefore expect the trait to be genetically robust. However, it may not be. In maize, a mutation in just one gene, ABPHYL (ABnormal PHYLlotaxy), was enough to change the phyllotaxy of the leaves, implying that mutational adjustment of a single locus on the genome can sometimes be enough to generate diversity. The ABPHYL gene was later shown to encode a cytokinin response regulator protein.
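As a toy illustration (not a model of the ABPHYL gene itself), the sketch below shows how a single parameter – the divergence angle between successive leaf primordia – is enough to switch between qualitatively different phyllotactic patterns, consistent with a single locus having a large effect on the trait. The angles used are standard textbook values, not maize data.

```python
def leaf_positions(n_leaves, divergence_deg):
    """Angular position (degrees, mod 360) of successive leaf primordia,
    each rotated by a fixed divergence angle from the previous one."""
    return [round((i * divergence_deg) % 360, 1) for i in range(n_leaves)]

# Spiral phyllotaxy: the golden angle (~137.5 deg) spreads successive
# leaves so that no leaf sits directly above an earlier one.
print(leaf_positions(8, 137.5))  # [0.0, 137.5, 275.0, 52.5, 190.0, ...]

# Distichous (two-ranked) phyllotaxy, as in maize: a 180-degree
# divergence stacks leaves into two opposite ranks.
print(leaf_positions(8, 180.0))  # [0.0, 180.0, 0.0, 180.0, ...]
```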
Once the leaf primordial cells are established from the SAM cells, the new axes for leaf growth are defined, one important (and better-studied) example being the abaxial–adaxial (lower–upper surface) axis. The genes involved in defining this and the other axes seem to be more or less conserved among higher plants. Proteins of the HD-ZIPIII family have been implicated in defining adaxial identity; these proteins divert some cells in the leaf primordium from the default abaxial state and make them adaxial. In early plants with leaves, the leaves are believed to have had just one type of surface – the abaxial one, which is the underside of today's leaves. The definition of adaxial identity occurred some 200 million years after abaxial identity was established. One can thus imagine the early leaves as an intermediate stage in the evolution of today's leaves: just arisen from spiny stem-like outgrowths of their leafless ancestors, covered with stomata all over, and not yet optimised for light harvesting.
How the wide variety of plant leaf morphology is generated is a subject of intense research. Some common themes have emerged. One of the most significant is the involvement of KNOX genes in generating compound leaves, as in the tomato (see above). But this, again, is not universal: the pea, for example, uses a different mechanism to achieve the same result. Mutations in genes affecting leaf curvature can also change leaf form, turning a flat leaf into a crinkly one, like the leaves of cabbage. Different morphogen gradients also exist in a developing leaf, defining its axes, and changes in these gradients may likewise affect leaf form. Another very important class of regulators of leaf development are the microRNAs, whose role in this process has only begun to be documented. The coming years should see rapid development in comparative studies of leaf development, as many EST sequences involved in the process come online.
The early Devonian landscape was devoid of vegetation taller than waist height. Without the evolution of a robust vascular system, taller heights could not be attained. There was, however, a constant evolutionary pressure to attain greater height. The most obvious advantage is the harvesting of more sunlight for photosynthesis – by overshadowing competitors – but a further advantage is present in spore distribution, as spores (and, later, seeds) can be blown greater distances if they start higher. This may be demonstrated by Prototaxites, thought to be a Late Silurian fungus reaching eight metres in height.
To attain arborescence, early plants had to develop woody tissue that provided support and water transport. The stele of plants undergoing "secondary growth" is surrounded by the vascular cambium, a ring of cells which produces more xylem (on the inside) and phloem (on the outside). Since xylem cells comprise dead, lignified tissue, subsequent rings of xylem are added to those already present, forming wood.
The first plants to develop this secondary growth, and a woody habit, were apparently the ferns, and as early as the Middle Devonian one species, Wattieza, had already reached heights of 8 m and a tree-like habit.
Other clades did not take long to develop a tree-like stature; the Late Devonian Archaeopteris, a precursor to gymnosperms which evolved from the trimerophytes, reached 30 m in height. These progymnosperms were the first plants to develop true wood, grown from a bifacial cambium, which first appears in the Middle Devonian Rellimia. True wood is only thought to have evolved once, giving rise to the concept of a "lignophyte" clade.
These Archaeopteris forests were soon supplemented by lycopods, in the form of lepidodendrales, which topped 50 m in height and 2 m across at the base. These lycopods rose to dominate Late Devonian and Carboniferous coal deposits. Lepidodendrales differ from modern trees in exhibiting determinate growth: after building up a reserve of nutrients at a lower height, the plants would "bolt" to a genetically determined height, branch at that level, spread their spores and die. They consisted of "cheap" wood to allow their rapid growth, with at least half of their stems comprising a pith-filled cavity. Their wood was also generated by a unifacial vascular cambium – it did not produce new phloem, meaning that the trunks could not grow wider over time.
The horsetail Calamites was next on the scene, appearing in the Carboniferous. Unlike the modern horsetail Equisetum, Calamites had a unifacial vascular cambium, allowing them to develop wood and grow to heights in excess of 10 m. They also branched multiple times.
While the form of early trees was similar to that of today's, the groups containing all modern trees had yet to evolve.
The dominant groups today are the gymnosperms, which include the coniferous trees, and the angiosperms, which contain all fruiting and flowering trees. It was long thought that the angiosperms arose from within the gymnosperms, but recent molecular evidence suggests that their living representatives form two distinct groups. The molecular data has yet to be fully reconciled with morphological data, but it is becoming accepted that the morphological support for paraphyly is not especially strong. This would lead to the conclusion that both groups arose from within the pteridosperms, probably as early as the Permian.
The angiosperms and their ancestors played a very small role until they diversified during the Cretaceous. They started out as small, damp-loving organisms in the understory, and have been diversifying ever since the mid-Cretaceous, to become the dominant member of non-boreal forests today.
Figure caption: The roots (bottom image) of lepidodendrales are thought to be functionally equivalent to the stems (top), as the similar appearance of "leaf scars" and "root scars" on these specimens from different species demonstrates.
Roots are important to plants for two main reasons: they anchor the plant to the substrate and, more importantly, they provide a source of water and nutrients from the soil. Roots allowed plants to grow taller and faster.
The onset of roots also had effects on a global scale. By disturbing the soil and promoting its acidification (by taking up nutrients such as nitrate and phosphate), they enabled it to weather more deeply, promoting the draw-down of CO2, with huge implications for climate. These effects may have been so profound that they led to a mass extinction.
But how and when did roots evolve in the first place? While there are traces of root-like impressions in fossil soils in the Late Silurian, body fossils show the earliest plants to be devoid of roots. Many had tendrils that sprawled along or beneath the ground, with upright axes or thalli dotted here and there, and some even had non-photosynthetic subterranean branches which lacked stomata. The distinction between root and specialised branch is developmental: true roots follow a different developmental trajectory from stems, differ in their branching pattern, and possess a root cap. So while Silu-Devonian plants such as Rhynia and Horneophyton possessed the physiological equivalent of roots, roots – defined as organs differentiated from stems – did not arrive until later. Unfortunately, roots are rarely preserved in the fossil record, and our understanding of their evolutionary origin is sparse.
Rhizoids – small structures performing the same role as roots, usually a single cell in diameter – probably evolved very early, perhaps even before plants colonised the land; they are recognised in the Characeae, an algal sister group to land plants. That said, rhizoids probably evolved more than once; the rhizines of lichens, for example, perform a similar role, and even some animals (Lamellibrachia) have root-like structures.
More advanced structures are common in the Rhynie chert, and many other fossils of comparable early Devonian age bear structures that look, and acted, like roots. The rhyniophytes bore fine rhizoids, and the trimerophytes and herbaceous lycopods of the chert bore root-like structures penetrating a few centimetres into the soil. However, none of these fossils display all the features borne by modern roots. Roots and root-like structures became increasingly common and deeper-penetrating during the Devonian, with lycopod trees forming roots around 20 cm long during the Eifelian and Givetian. These were joined by progymnosperms, which rooted up to about a metre deep, during the ensuing Frasnian stage. True gymnosperms and zygopterid ferns also formed shallow rooting systems during the Famennian.
The rhizomorphs of the lycopods provide a slightly different approach to rooting. They were equivalent to stems, with organs equivalent to leaves performing the role of rootlets. A similar construction is observed in the extant lycopod Isoetes, and this appears to be evidence that roots evolved independently at least twice, in the lycophytes and other plants, a proposition supported by studies showing that roots are initiated and their growth promoted by different mechanisms in lycophytes and euphyllophytes.
A vascular system is indispensable to rooted plants: non-photosynthesising roots need a supply of sugars, and a vascular system is required to transport water and nutrients from the roots to the rest of the plant. The earliest of these plants were little more advanced than their Silurian forebears, without a dedicated root system; however, their flat-lying axes can clearly be seen to have had growths similar to the rhizoids of bryophytes today.
By the Middle to Late Devonian, most groups of plants had independently developed a rooting system of some nature. As roots became larger, they could support larger trees, and the soil was weathered to a greater depth. This deeper weathering had effects not only on the aforementioned drawdown of CO2, but also opened up new habitats for colonisation by fungi and animals.
Roots today have developed to their physical limits. They penetrate many metres of soil to tap the water table. The narrowest roots are a mere 40 μm in diameter, and could not physically transport water if they were any narrower. The earliest fossil roots recovered, by contrast, narrowed from 3 mm to under 700 μm in diameter; of course, taphonomy is the ultimate control of what thickness can be observed.
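The fourth-power dependence of Poiseuille flow on conduit radius illustrates why water transport collapses so quickly at small diameters. The sketch below is a back-of-the-envelope illustration under the crude assumption that a root behaves like a single cylindrical conduit; it is not the calculation behind the 40 μm figure quoted above.

```python
def relative_conductance(diameter_um, reference_um=700.0):
    """Hydraulic conductance relative to a reference conduit, assuming
    Hagen-Poiseuille flow: volumetric flow scales with radius**4."""
    return (diameter_um / reference_um) ** 4

# Diameters quoted in the text: earliest fossil roots (3 mm down to
# 700 um) versus the narrowest modern roots (40 um).
for d in (3000, 700, 40):
    print(f"{d:>5} um diameter -> {relative_conductance(d):.1e} x reference flow")
# Halving the diameter cuts flow 16-fold, so a conduit much narrower
# than a few tens of micrometres moves vanishingly little water.
```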
The efficiency of many plants' roots is increased via a symbiotic relationship with a fungal partner. The most common are arbuscular mycorrhizae (AM), literally "tree-like fungal roots". These comprise fungi that invade some root cells, filling the cell membrane with their hyphae. They feed on the plant's sugars, but return nutrients generated or extracted from the soil (especially phosphate), to which the plant would otherwise have no access.
This symbiosis appears to have evolved early in plant history. AM are found in all plant groups and in 80% of extant vascular plants, suggesting an early ancestry; a "plant"–fungus symbiosis may even have been the step that enabled plants to colonise the land. Such fungi increase the productivity even of simple plants such as liverworts. Indeed, AM are abundant in the Rhynie chert; the association arose even before there were true roots to colonise, and it has been suggested that roots evolved to provide a more comfortable habitat for mycorrhizal fungi.
Early land plants reproduced in the fashion of ferns: spores germinated into small gametophytes, which produced sperm. These would swim across moist soils to find the female organs (archegonia) on the same or another gametophyte, where they would fuse with an egg cell to produce an embryo, which would germinate into a sporophyte.
Heterosporic organisms, as their name suggests, bear spores of two sizes – microspores and megaspores. These would germinate to form microgametophytes and megagametophytes, respectively. This system paved the way for seeds: taken to the extreme, the megasporangia could bear only a single megaspore tetrad, and to complete the transition to true seeds, three of the megaspores in the original tetrad could be aborted, leaving one megaspore per megasporangium.
The transition to seeds continued with this megaspore being "boxed in" to its sporangium while it germinates. Then, the megagametophyte is contained within a waterproof integument, which forms the bulk of the seed. The microgametophyte – a pollen grain which has germinated from a microspore – is employed for dispersal, only releasing its desiccation-prone sperm when it reaches a receptive megagametophyte.
Lycopods go a fair way down the path to seeds without ever crossing the threshold. Fossil lycopod megaspores reaching 1 cm in diameter, and surrounded by vegetative tissue, are known – these even germinate into a megagametophyte in situ. However, they fall short of being seeds, since the nucellus, an inner spore-covering layer, does not completely enclose the spore. A very small slit remains, meaning that the seed is still exposed to the atmosphere. This has two consequences – firstly, it means it is not fully resistant to desiccation, and secondly, sperm do not have to "burrow" to access the archegonia of the megaspore.
A Middle Devonian precursor to seed plants from Belgium has been identified predating the earliest seed plants by about 20 million years. Runcaria, small and radially symmetrical, is an integumented megasporangium surrounded by a cupule. The megasporangium bears an unopened distal extension protruding above the multilobed integument. It is suspected that the extension was involved in anemophilous pollination. Runcaria sheds new light on the sequence of character acquisition leading to the seed. Runcaria has all of the qualities of seed plants except for a solid seed coat and a system to guide the pollen to the ovule.
The first spermatophytes (literally: "seed plants") – that is, the first plants to bear true seeds – are called pteridosperms: literally, "seed ferns", so called because their foliage consisted of fern-like fronds, although they were not closely related to ferns. The oldest fossil evidence of seed plants is of Late Devonian age, and they appear to have evolved out of an earlier group known as the progymnosperms. These early seed plants ranged from trees to small, rambling shrubs; like most early progymnosperms, they were woody plants with fern-like foliage. They all bore ovules, but no cones, fruit or similar. While it is difficult to track the early evolution of seeds, the lineage of the seed ferns may be traced from the simple trimerophytes through homosporous Aneurophytes.
This seed model is shared by basically all gymnosperms (literally: "naked seeds"), most of which encase their seeds in a woody or fleshy (the yew, for example) cone, but none of which fully enclose their seeds. The angiosperms ("vessel seeds") are the only group to fully enclose the seed, in a carpel.
Fully enclosed seeds opened up a new pathway for plants to follow: that of seed dormancy. The embryo, completely isolated from the external atmosphere and hence protected from desiccation, could survive some years of drought before germinating. Gymnosperm seeds from the Late Carboniferous have been found to contain embryos, suggesting a lengthy gap between fertilisation and germination. This period is associated with the entry into a greenhouse earth period, with an associated increase in aridity. This suggests that dormancy arose as a response to drier climatic conditions, where it became advantageous to wait for a moist period before germinating. This evolutionary breakthrough appears to have opened a floodgate: previously inhospitable areas, such as dry mountain slopes, could now be tolerated, and were soon covered by trees.
Seeds offered further advantages to their bearers: they increased the success rate of fertilised gametophytes, and because a nutrient store could be "packaged" in with the embryo, seeds could germinate rapidly in inhospitable environments, reaching a size at which they could fend for themselves more quickly. For example, without an endosperm, seedlings growing in arid environments would not have the reserves to grow roots deep enough to reach the water table before they expired from dehydration. Likewise, seeds germinating in a gloomy understory require an additional reserve of energy to grow quickly enough to capture sufficient light for self-sustenance. A combination of these advantages gave seed plants the ecological edge over the previously dominant genus Archaeopteris, thus increasing the biodiversity of early forests.
Despite these advantages, it is common for fertilised ovules to fail to mature as seeds. Also, during seed dormancy (often associated with unpredictable and stressful conditions), DNA damage accumulates. Thus DNA damage appears to be a basic problem for the survival of seed plants, just as it is a major problem for life in general (Bernstein and Bernstein, 1991).
Flowers are modified leaves possessed only by the angiosperms, which are relatively late to appear in the fossil record; the group originated and diversified during the Early Cretaceous and became ecologically significant thereafter. Flower-like structures first appear in the fossil record some 130 million years ago, in the Cretaceous. Colourful and/or pungent structures surround the cones of plants such as cycads and gnetales, making a strict definition of the term "flower" elusive.
The main function of a flower is reproduction, which, before the evolution of the flower and angiosperms, was the job of microsporophylls and megasporophylls. A flower can be considered a powerful evolutionary innovation, because its presence allowed the plant world to access new means and mechanisms for reproduction.
The flowering plants have long been assumed to have evolved from within the gymnosperms; according to the traditional morphological view, they are closely allied to the gnetales. However, as noted above, recent molecular evidence is at odds with this hypothesis, and further suggests that gnetales are more closely related to some gymnosperm groups than to angiosperms, and that extant gymnosperms form a clade distinct from the angiosperms.
The relationship of stem groups to the angiosperms is important in determining the evolution of flowers: stem groups provide an insight into the state of earlier "forks" on the path to the current state, though convergence increases the risk of misidentifying them. Since the protection of the megagametophyte is evolutionarily desirable, probably many separate groups evolved protective encasements independently. In flowers, this protection takes the form of a carpel, evolved from a leaf and recruited into a protective role, shielding the ovules. The ovules are further protected by a double-walled integument.
Penetration of these protective layers needs something more than a free-floating microgametophyte. Angiosperms have pollen grains comprising just three cells. One cell is responsible for drilling down through the integuments and creating a conduit for the two sperm cells to flow down. The megagametophyte has just seven cells; of these, one fuses with a sperm cell, forming the nucleus of the egg itself, and another joins with the other sperm to form a nutrient-rich endosperm, while the remaining cells take auxiliary roles. This process of "double fertilisation" is unique to, and shared by, all angiosperms.
In the fossil record, there are three intriguing groups which bore flower-like structures. The first is the Permian pteridosperm Glossopteris, which already bore recurved leaves resembling carpels. The Triassic Caytonia is more flower-like still, with enclosed ovules – but only a single integument. Further, details of their pollen and stamens set them apart from true flowering plants.
The Bennettitales bore remarkably flower-like organs, protected by whorls of bracts which may have played a similar role to the petals and sepals of true flowers; however, these flower-like structures evolved independently, as the Bennettitales are more closely related to cycads and ginkgos than to the angiosperms.
However, no true flowers are found in any groups save those extant today. Most morphological and molecular analyses place Amborella, the nymphaeales and Austrobaileyaceae in a basal clade called "ANA". This clade appears to have diverged in the early Cretaceous, around the same time as the earliest fossil angiosperm, and just after the first angiosperm-like pollen, 136 million years ago. The magnoliids diverged soon after, and a rapid radiation produced the eudicots and monocots. By the end of the Cretaceous, over 50% of today's angiosperm orders had evolved, and the clade accounted for 70% of global species. It was around this time that flowering trees became dominant over conifers.
The features of the basal "ANA" groups suggest that angiosperms originated in dark, damp, frequently disturbed areas. It appears that the angiosperms remained constrained to such habitats throughout the Cretaceous – occupying the niche of small herbs early in the successional series. This may have restricted their initial significance, but gave them the flexibility that accounted for the rapidity of their later diversification in other habitats.
Figure: phylogeny of anthophytes and gymnosperms, contrasting the traditional view with the modern view. Some propose that the angiosperms arose from an unknown seed fern (a pteridosperm), and view cycads as living seed ferns with both seed-bearing and sterile leaves (Cycas revoluta).
Origins of the flower
The family Amborellaceae is regarded as the sister clade to all other living flowering plants. The complete genome of Amborella trichopoda was still being sequenced as of March 2012. By comparing its genome with those of all other living flowering plants, it will be possible to work out the most likely characteristics of the ancestor of A. trichopoda and all other flowering plants, i.e. the ancestral flowering plant.
It seems that, at the level of the organ, the leaf may be the ancestor of the flower, or at least of some floral organs (the "flowering leaf" theory). When some crucial genes involved in flower development are mutated, clusters of leaf-like structures arise in place of flowers. Thus, at some point in history, the developmental program leading to the formation of a leaf must have been altered to generate a flower. There probably also exists an overall robust framework within which floral diversity has been generated. An example is a gene called LEAFY (LFY), which is involved in flower development in Arabidopsis thaliana. Homologs of this gene are found in angiosperms as diverse as tomato, snapdragon, pea and maize, and even in gymnosperms. Expression of Arabidopsis thaliana LFY in distant plants like poplar and citrus also results in flower production in these plants. The LFY gene regulates the expression of some genes belonging to the MADS-box family, which in turn act as direct controllers of flower development.
Evolution of the MADS-box family
The members of the MADS-box family of transcription factors play a very important and evolutionarily conserved role in flower development. According to the ABC model of flower development, three zones – A, B and C – are generated within the developing flower primordium by the action of transcription factors that are members of the MADS-box family. Among these, the functions of the B- and C-domain genes have been evolutionarily more conserved than those of the A-domain gene. Many of these genes arose through gene duplications of ancestral members of this family, and quite a few of them show redundant functions.
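The combinatorial logic of the ABC model is simple enough to state as a lookup table. The sketch below encodes the textbook mapping (A alone → sepal, A+B → petal, B+C → stamen, C alone → carpel), together with the standard consequence of A/C mutual repression: losing C function lets A expand into the inner whorls, producing the "double flower" phenotype discussed later in this section.

```python
# Whorl identity under the classic ABC model of flower development.
ORGAN_BY_GENES = {
    frozenset("A"):  "sepal",   # whorl 1: A-class genes alone
    frozenset("AB"): "petal",   # whorl 2: A + B
    frozenset("BC"): "stamen",  # whorl 3: B + C
    frozenset("C"):  "carpel",  # whorl 4: C-class genes alone
}

def organ(active_classes):
    return ORGAN_BY_GENES.get(frozenset(active_classes), "undetermined")

# Wild-type flower, reading whorls 1 to 4 from the outside in:
print([organ(w) for w in ("A", "AB", "BC", "C")])
# -> ['sepal', 'petal', 'stamen', 'carpel']

# Loss of C function (e.g. an agamous mutant): A expands into whorls
# 3 and 4, replacing the reproductive organs with petals and sepals.
print([organ(w) for w in ("A", "AB", "AB", "A")])
# -> ['sepal', 'petal', 'petal', 'sepal']
```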
The evolution of the MADS-box family has been extensively studied. These genes are present even in pteridophytes, but their spread and diversity is many times greater in angiosperms. There appears to be a clear pattern in how this family has evolved. Consider the evolution of the C-region gene AGAMOUS (AG): it is expressed in today's flowers in the stamens and the carpel, the reproductive organs. Its ancestor in gymnosperms has the same expression pattern, being expressed in the strobili, organs that produce pollen or ovules. Similarly, the ancestors of the B-genes (AP3 and PI) are expressed only in the male organs in gymnosperms, and their descendants in modern angiosperms are likewise expressed only in the stamens, the male reproductive organs. Thus the same, then-existing components were used by plants in a novel manner to generate the first flower – a recurring pattern in evolution.
Factors influencing floral diversity
There is enormous variation in the developmental programs of plants. For example, grasses possess unique floral structures. The carpels and stamens are surrounded by scale-like lodicules and two bracts: the lemma and the palea. Genetic evidence and morphology suggest that lodicules are homologous to eudicot petals. The palea and lemma may be homologous to sepals in other groups, or may be unique grass structures. The genetic evidence is not clear.
Variation in floral structure is typically due to slight changes in the MADS-box genes and their expression pattern.
Arabidopsis thaliana has a gene called AGAMOUS that plays an important role in defining how many petals, sepals and other organs are generated. Mutations in this gene cause the floral meristem to acquire an indeterminate fate, so that floral organs keep being produced. Roses, carnations and morning glories, for example, have very dense floral organs; these flowers have been selected by horticulturists for an increased number of petals. Researchers have found that the morphology of these flowers results from strong mutations in the AGAMOUS homolog in these plants, which lead them to produce a large number of petals and sepals. Several studies on diverse plants like petunia, tomato, Impatiens and maize have suggested that the enormous diversity of flowers is a result of small changes in the genes controlling their development.
Some of these changes also cause changes in the expression patterns of the developmental genes, resulting in different phenotypes. The Floral Genome Project examined EST data from various tissues of many flowering plants, and its researchers confirmed that the ABC model of flower development is not conserved across all angiosperms. Sometimes expression domains change, as in many monocots and in some basal angiosperms like Amborella. Different models of flower development, such as the fading-boundaries model or the overlapping-boundaries model, which propose non-rigid domains of expression, may explain these architectures. It is possible that, from the basal to the modern angiosperms, the domains of floral architecture have become progressively more fixed through evolution.
Another floral feature that has been a subject of natural selection is flowering time. Some plants flower early in their life cycle; others require a period of vernalization before flowering. The outcome is based on factors like temperature, light intensity, the presence of pollinators and other environmental signals: genes like CONSTANS (CO), FLOWERING LOCUS C (FLC) and FRIGIDA regulate the integration of environmental signals into the pathway for flower development. Variations in these loci have been associated with flowering-time variation between plants. For example, Arabidopsis thaliana ecotypes that grow in cold, temperate regions require prolonged vernalization before they flower, while tropical varieties and the most common lab strains do not. This variation is due to mutations in the FLC and FRIGIDA genes, rendering them non-functional.
Quite a few of the players in this process are conserved across all the plants studied. Sometimes, though, despite genetic conservation, the mechanism of action turns out to be different. For example, rice is a short-day plant, while Arabidopsis thaliana is a long-day plant. Both plants possess the proteins CO and FLOWERING LOCUS T (FT), but in Arabidopsis thaliana CO enhances FT production, whereas in rice the CO homolog represses FT production, resulting in completely opposite downstream effects.
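A minimal Boolean sketch of this rewiring is given below (purely illustrative; real photoperiod signalling involves many more components). The same CO→FT module is given an opposite sign in the two species, reproducing a long-day flowerer and a short-day flowerer from conserved parts. The simplifying assumption that CO activity simply tracks long days is ours, not the source's.

```python
def ft_expressed(co_active: bool, species: str) -> bool:
    """Is the florigen gene FT expressed?"""
    if species == "arabidopsis":  # CO activates FT
        return co_active
    if species == "rice":         # the CO homolog represses FT
        return not co_active
    raise ValueError(f"unknown species: {species}")

for day_length in ("long", "short"):
    co_active = (day_length == "long")  # toy assumption: CO tracks long days
    for sp in ("arabidopsis", "rice"):
        state = "flowers" if ft_expressed(co_active, sp) else "stays vegetative"
        print(f"{sp} under {day_length} days: {state}")
# arabidopsis flowers in long days; rice flowers in short days,
# despite using homologous CO and FT proteins.
```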
Theories of flower evolution
There are many theories that propose how flowers evolved. Some of them are described below.
The Anthophyte Theory was based on the observation that the gymnosperm group Gnetales has flower-like ovules. It has partially developed vessels, as found in the angiosperms, and the megasporangium is covered by three envelopes, like the ovary structure of angiosperm flowers. However, many other lines of evidence show that Gnetales is not related to angiosperms.
The Mostly Male Theory has a more genetic basis. Proponents point out that gymnosperms have two very similar copies of the gene LFY, while angiosperms have just one. Molecular clock analysis has shown that the other LFY paralog was lost in angiosperms around the same time that flower fossils become abundant, suggesting that this event might have led to floral evolution. According to this theory, loss of one of the LFY paralogs led to flowers that were more male, with the ovules being expressed ectopically. These ovules initially performed the function of attracting pollinators, but some time later may have been integrated into the core flower.
Evolution of photosynthetic pathways
Photosynthesis is not quite as simple as adding water to CO2 to produce sugars and oxygen. A complex chemical pathway is involved, facilitated along the way by a range of enzymes and co-enzymes. The enzyme RuBisCO is responsible for "fixing" CO2 – that is, it attaches it to a carbon-based molecule to form a sugar that can be used by the plant. However, the enzyme is notoriously inefficient, and will just as readily fix oxygen instead of CO2, in a process called photorespiration. This is energetically costly, as the plant has to use energy to turn the products of photorespiration back into a form that can react with CO2.
C4 plants evolved carbon-concentrating mechanisms. These work by increasing the concentration of CO2 around RuBisCO, thereby facilitating photosynthesis and decreasing photorespiration. Concentrating CO2 around RuBisCO requires more energy than allowing gases to diffuse, but under certain conditions – warm temperatures (>25 °C), low CO2 concentrations, or high oxygen concentrations – it pays off in terms of the decreased loss of sugars through photorespiration.
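A toy carbon budget makes the trade-off concrete. Every number below is invented for illustration; the point is only that a fixed pumping cost beats a photorespiration loss that grows with temperature and with falling CO2, which is the qualitative claim of the paragraph above.

```python
def net_gain_c3(temp_c, co2_ppm):
    # Photorespiration loss rises with temperature and with falling CO2
    # (illustrative functional form, not a physiological model).
    loss = max(0.0, 0.02 * (temp_c - 15.0)) * (400.0 / co2_ppm)
    return 1.0 - loss

def net_gain_c4(temp_c, co2_ppm):
    # RuBisCO sees saturating CO2, so photorespiration is negligible,
    # but the CO2 pump imposes a fixed energetic overhead.
    return 1.0 - 0.15

for temp_c, co2_ppm in [(15, 400), (30, 400), (30, 200)]:
    winner = "C4" if net_gain_c4(temp_c, co2_ppm) > net_gain_c3(temp_c, co2_ppm) else "C3"
    print(f"{temp_c} degC at {co2_ppm} ppm CO2 -> {winner} favoured")
# Cool, CO2-rich air favours C3; hot, CO2-poor air favours C4.
```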
One type of C4 metabolism employs a so-called Kranz anatomy. This transports CO2 through an outer mesophyll layer, via a range of organic molecules, to the central bundle sheath cells, where the CO2 is released. In this way, CO2 is concentrated near the site of RuBisCO operation. Because RuBisCO is operating in an environment with much more CO2 than it otherwise would be, it performs more efficiently.
A second mechanism, CAM photosynthesis, temporally separates photosynthesis from the action of RuBisCO. RuBisCO only operates during the day, when stomata are sealed and CO2 is provided by the breakdown of the chemical malate. More CO2 is then harvested from the atmosphere when stomata open, during the cool, moist nights, reducing water loss.
These two pathways, with the same effect on RuBisCO, evolved a number of times independently – indeed, C4 alone arose 62 times in 18 different plant families. A number of 'pre-adaptations' seem to have paved the way for C4, leading to its clustering in certain clades: it has most frequently arisen in plants that already had features such as extensive vascular bundle sheath tissue. Many potential evolutionary pathways resulting in the C4 phenotype are possible and have been characterised using Bayesian inference, confirming that non-photosynthetic adaptations often provide evolutionary stepping stones for the further evolution of C4.
The C4 construction is most famously used by a subset of grasses, while CAM is employed by many succulents and cacti. The trait appears to have emerged during the Oligocene; however, C4 plants did not become ecologically significant until the Miocene. Remarkably, some charcoalified fossils preserve tissue organised into the Kranz anatomy, with intact bundle sheath cells, allowing the presence of C4 metabolism to be identified without doubt at this time. Isotopic markers are used to deduce their distribution and significance. C3 plants preferentially use the lighter of the two isotopes of carbon in the atmosphere, 12C, which is more readily involved in the chemical pathways of its fixation. Because C4 metabolism concentrates CO2 around RuBisCO, and its initial carboxylation step discriminates far less against the heavier isotope, this fractionation is reduced in C4 plants. Plant material can be analysed to deduce the ratio of the heavier 13C to 12C, a ratio denoted δ13C: C3 plants are on average around 28‰ (parts per thousand) lighter than the atmospheric ratio, while C4 plants are only about 14‰ lighter. The δ13C of CAM plants depends on the percentage of carbon fixed at night relative to that fixed in the day, being closer to C3 plants if they fix most carbon in the day and closer to C4 plants if they fix all their carbon at night.
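Since δ13C is just a normalised isotope ratio, classifying plant material reduces to simple arithmetic. The sketch below uses the standard VPDB reference ratio and a rough midpoint cut-off between typical C3 (~−28‰) and C4 (~−14‰) tissue values; the cut-off is a first-pass heuristic for illustration, not a published threshold.

```python
VPDB_13C_12C = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta13c_permil(sample_13c_12c, standard=VPDB_13C_12C):
    """delta-13C in parts per thousand relative to the standard."""
    return (sample_13c_12c / standard - 1.0) * 1000.0

def classify(d13c):
    """Crude C3/C4 call from a delta-13C value (per mil)."""
    return "C3-like" if d13c < -21.0 else "C4-like"

print(classify(-27.0))  # typical C3 leaf tissue -> C3-like
print(classify(-13.0))  # typical C4 grass      -> C4-like
print(round(delta13c_permil(0.0109), 1))  # a measured ratio -> -30.0 per mil
```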
It is difficult to procure original fossil material in sufficient quantity to analyse the grasses themselves, but fortunately there is a good proxy: horses. Horses were globally widespread in the period of interest, and browsed almost exclusively on grasses. There is an old phrase in isotope palaeontology, "you are what you eat (plus a little bit)" – organisms reflect the isotopic composition of whatever they eat, plus a small adjustment factor. There is a good record of horse teeth throughout the globe, and their δ13C has been measured. The record shows a sharp shift towards heavier δ13C values during the Messinian, which is interpreted as the rise of C4 plants on a global scale.
When is C4 an advantage?
While C4 enhances the efficiency of RuBisCO, the concentration of carbon is highly energy-intensive. This means that C4 plants only have an advantage over C3 organisms in certain conditions: namely, high temperatures and low rainfall. C4 plants also need high levels of sunlight to thrive. Models suggest that, without wildfires removing shade-casting trees and shrubs, there would be no space for C4 plants. But wildfires have occurred for 400 million years – why did C4 take so long to arise, and then appear independently so many times? The Carboniferous had notoriously high oxygen levels – almost enough to allow spontaneous combustion – and very low CO2, but no C4 isotopic signature is found there. And there does not seem to be a sudden trigger for the Miocene rise.
During the Miocene, the atmosphere and climate were relatively stable. If anything, CO2 increased gradually before settling down to concentrations similar to those of the Holocene, which suggests that it did not have a key role in invoking C4 evolution. Grasses themselves (the group which would give rise to most occurrences of C4) had probably been around for 60 million years or more, so had had plenty of time to evolve C4, which, in any case, is present in a diverse range of groups and thus evolved independently. There is a strong signal of climate change in South Asia; increasing aridity – hence increasing fire frequency and intensity – may have led to an increase in the importance of grasslands. However, this is difficult to reconcile with the North American record. It is possible that the signal is entirely biological, forced by the fire- (and elephant?)-driven acceleration of grass evolution – which, both by increasing weathering and incorporating more carbon into sediments, reduced atmospheric CO2 levels. Finally, there is evidence that the recorded onset of C4 is a biased signal, which holds true only for North America, from where most samples originate; emerging evidence suggests that grasslands evolved to a dominant state at least 15 Ma earlier in South America.
Evolution of secondary metabolism
Secondary metabolites are essentially low-molecular-weight compounds, sometimes with complex structures. They function in processes as diverse as immunity, anti-herbivory, pollinator attraction, communication between plants, maintenance of symbiotic associations with soil flora and enhancement of fertilisation rates, and hence are significant from the evo-devo perspective. The structural and functional diversity of these secondary metabolites across the plant kingdom is vast; it has been estimated that hundreds of thousands of enzymes might be involved in this process across the entire plant kingdom, with about 15–25% of the genome coding for these enzymes, and every species having its own unique arsenal of secondary metabolites. Many of these metabolites are of enormous medical significance to humans.
Why are so many secondary metabolites produced, with a significant chunk of the metabolome devoted to this activity? It is hypothesized that most of these chemicals help to generate immunity and that, in consequence, the diversity of these metabolites is a result of a constant war between plants and their parasites. There is evidence that this may be true in many cases. The big question is the reproductive cost of maintaining such an impressive inventory. Various models have probed this aspect of the question, but a consensus on the extent of the cost is lacking; we still cannot predict whether a plant with more secondary metabolites would be better off than other plants in its vicinity.
Secondary metabolite production seems to have arisen quite early in evolution. In plants, it appears to have spread by mechanisms including gene duplication and the evolution of novel genes. Furthermore, studies have shown that diversity in some of these compounds may be positively selected for.
Although the role of novel gene evolution in the evolution of secondary metabolism cannot be denied, there are several examples where new metabolites have been formed by small changes in existing reactions. For example, cyanogenic glycosides have been proposed to have evolved multiple times in different plant lineages, and there are several other instances of convergent evolution. In both angiosperms and gymnosperms, the enzymes for synthesis of limonene – a terpene – are more similar to their own lineage's other terpene synthesis enzymes than to the limonene synthesis enzymes of the other lineage, suggesting that the limonene biosynthetic pathway evolved independently in these two lineages.
Mechanisms and players in evolution of plant form
While environmental factors are significantly responsible for evolutionary change, they act merely as agents for natural selection. Change is inherently brought about via phenomena at the genetic level: mutations, chromosomal rearrangements and epigenetic changes. While the general types of mutation hold true across the living world, in plants some other mechanisms have been implicated as highly significant.
Genome doubling is a relatively common occurrence in plant evolution and results in polyploidy, which is consequently a common feature in plants. It is believed that at least half (and probably all) plants have seen genome doubling in their history. Genome doubling entails gene duplication, thus generating functional redundancy in most genes. The duplicated genes may attain new function, either by changes in expression pattern or changes in activity. Polyploidy and gene duplication are believed to be among the most powerful forces in evolution of plant form; though it is not known why genome doubling is such a frequent process in plants. One probable reason is the production of large amounts of secondary metabolites in plant cells. Some of them might interfere in the normal process of chromosomal segregation, causing genome duplication.
In recent times, plants have been shown to possess significant microRNA families, which are conserved across many plant lineages. Plants have fewer miRNA families than animals, but the size of each family is much larger. The miRNA genes are also much more spread out in the genome than in animals, where they are more clustered. It has been proposed that these miRNA families have expanded by duplications of chromosomal regions. Many miRNA genes involved in the regulation of plant development have been found to be quite conserved among the plants studied.
Domestication of plants like maize, rice, barley and wheat has also been a significant driving force in their evolution. Studies of the origins of maize have shown that it is a domesticated derivative of a wild plant from Mexico called teosinte. Teosinte, like maize, belongs to the genus Zea, but bears a very small inflorescence, 5–10 hard cobs, and a highly branched, spread-out stem.
Interestingly, crosses between a particular teosinte variety and maize yield fertile offspring intermediate in phenotype between maize and teosinte. QTL analysis has also revealed some loci that, when mutated in maize, yield a teosinte-like stem or teosinte-like cobs. Molecular clock analysis of these genes dates their origins to some 9,000 years ago, in good accordance with other records of maize domestication. It is believed that a small group of farmers selected a maize-like natural mutant of teosinte in Mexico some 9,000 years ago, and subjected it to continuous selection to yield the familiar maize plant of today.
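The molecular-clock arithmetic behind such estimates is straightforward: a divergence time is the observed sequence difference divided by twice the substitution rate, since both lineages accumulate changes after the split. The numbers below are hypothetical, chosen only to land on the ~9,000-year figure; they are not from the maize studies.

```python
def divergence_time_years(pairwise_distance, subs_per_site_per_year):
    """t = d / (2 * mu): the factor 2 counts mutations accumulating
    independently along both diverging lineages."""
    return pairwise_distance / (2.0 * subs_per_site_per_year)

# Hypothetical inputs: 5.4e-5 substitutions per site between the two
# sequences, at a rate of 3e-9 substitutions per site per year.
print(divergence_time_years(5.4e-5, 3e-9))  # -> 9000.0 years
```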
Another interesting case is that of cauliflower. The edible cauliflower is a domesticated version of the wild plant Brassica oleracea, which lacks the dense, undifferentiated inflorescence, called the curd, that cauliflower possesses.
Cauliflower possesses a single mutation in a gene called CAL, which controls meristem differentiation into inflorescence. This causes the cells at the floral meristem to gain an undifferentiated identity; instead of growing into a flower, they grow into a lump of undifferentiated cells. This mutation has been selected through domestication since at least the time of the Greek empire.
Coevolution of plants and fungal parasites
An additional factor contributing to evolutionary change in some plants is coevolution with fungal parasites. In an environment with a fungal parasite, which is common in nature, plants must adapt in an attempt to evade the harmful effects of the parasite.
Whenever a parasitic fungus siphons limited resources away from a plant, there is selective pressure for a phenotype that is better able to prevent fungal attack. At the same time, fungi that are better equipped to evade the defenses of the plant will have greater fitness. The combination of these two factors leads to an endless cycle of evolutionary change in the host–pathogen system.
Because each species in the relationship is influenced by a constantly changing symbiont, evolutionary change usually occurs at a faster pace than if the other species were not present. This is true of most instances of coevolution, and it makes the ability of a population to evolve quickly vital to its survival. Also, if the pathogenic species is too successful and threatens the survival and reproductive success of the host plants, the pathogenic fungi risk losing their nutrient source for future generations. These factors create a dynamic that shapes the evolutionary changes in both species generation after generation.
Genes that code for defense mechanisms in plants must keep changing to keep up with a parasite that constantly works to evade those defenses. Genes that code for attachment mechanisms are the most dynamic and are directly related to the fungi's ability to evade host defenses: the greater the changes in these genes, the greater the change in the attachment mechanism. Once selection acts on the resulting phenotypes, evolutionary change that promotes evasion of host defenses follows.
Fungi not only evolve to avoid the defenses of the plants; they also attempt to prevent the plant from enacting the mechanisms to improve its defenses. Anything the fungi can do to slow the evolution of the host plants will improve the fitness of future generations, because the plant will not be able to keep up with the evolutionary changes of the parasite. One of the main processes by which plants evolve quickly in response to the environment is sexual reproduction. Without sexual reproduction, advantageous traits could not spread through the plant population as quickly, allowing the fungi to gain a competitive advantage. For this reason, the sexual reproductive organs of plants are targets for attack by fungi. Studies have shown that many different current types of obligate parasitic plant fungi have developed mechanisms to disable or otherwise affect the sexual reproduction of the plants. If successful, sexual reproduction slows for the plant, thus slowing evolutionary change; in extreme cases, the fungi can render the plant sterile, creating an advantage for the pathogens. It is unknown exactly how this adaptive trait developed in fungi, but it is clear that the relationship with the plant forced the development of the process.
Some researchers are also studying how a range of factors affect the rate of evolutionary change and the outcomes of change in different environments. For example, as with most evolution, increases in the heritability of a trait in a population allow for a greater evolutionary response under selective pressure. For traits specific to plant–fungus coevolution, researchers have studied how the virulence of the invading pathogen affects the coevolution. Studies involving Mycosphaerella graminicola have consistently shown that the virulence of a pathogen does not have a significant impact on the evolutionary track of the host plant.
Other factors can also affect the process of coevolution. For example, in small populations selection is a relatively weaker force owing to genetic drift. Genetic drift increases the likelihood of alleles becoming fixed, which decreases the genetic variance in the population. Therefore, if only a small population of plants in an area can reproduce together, genetic drift may counteract the effects of selection, putting the plants at a disadvantage against fungi that can evolve at a normal rate. The variance in both the host and pathogen populations is a major determinant of evolutionary success relative to the other species: the greater the genetic variance, the faster a species can evolve to counteract the other organism's avoidance or defensive mechanisms.
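The drift argument can be made concrete with a bare-bones Wright–Fisher simulation. In each generation, selection nudges the frequency of a beneficial "resistance" allele, then binomial sampling of 2N gene copies adds drift; in small populations the sampling noise frequently loses the allele despite its advantage. All parameter values are illustrative.

```python
import random

def fixation_rate(pop_size, s=0.05, p0=0.1, trials=500):
    """Fraction of runs in which a beneficial allele reaches fixation."""
    fixed = 0
    for _ in range(trials):
        p = p0
        while 0.0 < p < 1.0:
            # Deterministic selection step...
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            # ...then random sampling of 2N gene copies (genetic drift).
            copies = sum(random.random() < p_sel for _ in range(2 * pop_size))
            p = copies / (2 * pop_size)
        fixed += (p == 1.0)
    return fixed / trials

for n in (10, 200):
    print(f"N = {n}: allele fixes in ~{fixation_rate(n):.0%} of runs")
# At N = 10, drift loses the beneficial allele most of the time;
# at N = 200, selection fixes it far more reliably.
```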
Owing to pollination, the effective population size of plants is normally larger than that of fungi, because pollinators can link isolated populations in a way that the fungus cannot. This means that positive traits evolving in non-adjacent but nearby areas can be passed on, whereas the fungi must evolve independently to evade host defenses in each area. This is a clear competitive advantage for the host plants. Sexual reproduction within a broad, high-variance population leads to fast evolutionary change and higher reproductive success of offspring.
Environment and climate patterns also play a role in evolutionary outcomes. Studies of oak trees and an obligate fungal parasite at different altitudes clearly show this distinction: for the same species, different altitudes produced drastically different rates of evolution and changes in the response to the pathogen, because the organisms were also under selection from their surroundings.
Coevolution is a process related to the Red Queen hypothesis: both the host plant and the parasitic fungus must continue to evolve to stay in their ecological niche. If one of the two species in the relationship evolves at a significantly faster rate than the other, the slower species will be at a competitive disadvantage and risk the loss of nutrients. Because the two species in the system are so closely linked, they respond to external environmental factors together, and each species affects the evolutionary outcome of the other. In other words, each species exerts selective pressure on the other. Population size is also a major factor in the outcome, because differences in gene flow and genetic drift could cause evolutionary changes that do not match the direction of selection expected from the pressures exerted by the other organism. Coevolution is an important phenomenon for understanding the vital relationship between plants and their fungal parasites.
- "The oldest fossils reveal evolution of non-vascular plants by the middle to late Ordovician Period (~450–440 m.y.a.) on the basis of fossil spores." Transition of plants to land
- Barton, Nicholas (2007). Evolution. pp. 273–274. ISBN 9780199226320. Retrieved September 30, 2012.
- Rothwell, G. W.; Scheckler, S. E.; Gillespie, W. H. (1989). "Elkinsia gen. nov., a Late Devonian gymnosperm with cupulate ovules". Botanical Gazette 150 (2): 170–189. doi:10.1086/337763.
- Raven, J.A.; Edwards, D. (2001). "Roots: evolutionary origins and biogeochemical significance". Journal of Experimental Botany 52 (90001): 381–401. doi:10.1093/jexbot/52.suppl_1.381 (inactive due to publisher error, 2008-04-30). PMID 11326045.
- Clarke, J. T.; Warnock, R. C. M.; Donoghue, P. C. J. (2011). "Establishing a time-scale for plant evolution". New Phytologist 192 (1): 266–301. doi:10.1111/j.1469-8137.2011.03794.x. PMID 21729086.
- Kenrick, P.; Crane, P.R. (1997). The Origin and Early Diversification of Land Plants: A Cladistic Study. Washington: Smithsonian Institution Press. ISBN 1-56098-729-4.
- Heckman, D. S.; Geiser, D. M.; Eidell, B. R.; Stauffer, R. L.; Kardos, N. L.; Hedges, S. B. (Aug 2001). "Molecular evidence for the early colonization of land by fungi and plants". Science 293 (5532): 1129–1133. doi:10.1126/science.1061457. ISSN 0036-8075. PMID 11498589.
- Battison, Leila; Brasier, Martin D. (August 2009). "Exceptional Preservation of Early Terrestrial Communities in Lacustrine Phosphate One Billion Years Ago" (PDF). In Smith, Martin R.; O'Brien, Lorna J.; Caron, Jean-Bernard. Abstract Volume. International Conference on the Cambrian Explosion (Walcott 2009). Toronto, Ontario, Canada: The Burgess Shale Consortium (published 31 July 2009). ISBN 978-0-9812885-1-2.
- Knauth, L. P.; Kennedy, M. J. (2009). "The late Precambrian greening of the Earth". Nature. Bibcode:2009Natur.460..728K. doi:10.1038/nature08213.
- Battistuzzi, F. U.; Feijao, A.; Hedges, S. B. (2004). "A genomic timescale of prokaryote evolution: insights into the origin of methanogenesis, phototrophy, and the colonization of land". BMC Evolutionary Biology 4: 44. doi:10.1186/1471-2148-4-44. PMC 533871. PMID 15535883.
- Gray, J.; Chaloner, W. G.; Westoll, T. S. (1985). "The Microfossil Record of Early Land Plants: Advances in Understanding of Early Terrestrialization, 1970-1984". Philosophical Transactions of the Royal Society B 309 (1138): 167–195. Bibcode:1985RSPTB.309..167G. doi:10.1098/rstb.1985.0077. JSTOR 2396358.
- Wellman, C. H.; Gray, J. (2000). "The microfossil record of early land plants". Philosophical Transactions of the Royal Society B: Biological Sciences 355 (1398): 717. doi:10.1098/rstb.2000.0612. PMC 1692785. PMID 10905606.
- Rubinstein, C. V.; Gerrienne, P.; De La Puente, G. S.; Astini, R. A.; Steemans, P. (2010). "Early Middle Ordovician evidence for land plants in Argentina (eastern Gondwana)". New Phytologist 188 (2): 365–369. doi:10.1111/j.1469-8137.2010.03433.x. PMID 20731783.
- Wellman, C. H.; Osterloff, P. L.; Mohiuddin, U. (2003). "Fragments of the earliest land plants". Nature 425 (6955): 282–285. Bibcode:2003Natur.425..282W. doi:10.1038/nature01884. PMID 13679913.
- Steemans, P.; Lepota K.; Marshallb, C.P.; Le Hérisséc, A.; Javauxa, E.J. (2010). "FTIR characterisation of the chemical composition of Silurian miospores (cryptospores and trilete spores) from Gotland, Sweden". Review of Palaeobotany and Palynology 162 (4): 577–590. doi:10.1016/j.revpalbo.2010.07.006.
- Kump, L. R.; Pavlov, A.; Arthur, M. A. (2005). "Massive release of hydrogen sulfide to the surface ocean and atmosphere during intervals of oceanic anoxia". Geology 33 (5): 397. Bibcode:2005Geo....33..397K. doi:10.1130/G21295.1.
- Butterfield, N. J. (2009). "Oxygen, animals and oceanic ventilation: An alternative view". Geobiology 7 (1): 1–7. doi:10.1111/j.1472-4669.2009.00188.x. PMID 19200141.
- Steemans, P.; Herisse, L.; Melvin, J.; Miller, A.; Paris, F.; Verniers, J.; Wellman, H. (Apr 2009). "Origin and Radiation of the Earliest Vascular Land Plants". Science 324 (5925): 353–353. Bibcode:2009Sci...324..353S. doi:10.1126/science.1169659. ISSN 0036-8075. PMID 19372423.
- Tomescu, A. M. F. (2006). "Wetlands before tracheophytes: Thalloid terrestrial communities of the Early Silurian Passage Creek biota (Virginia)" (PDF). Wetlands Through Time. doi:10.1130/2006.2399(02) (inactive 2015-04-06). ISBN 9780813723990. Retrieved 2014-05-28.
- Tomescu, A. M. F.; Honegger, R.; Rothwell, G. W. (2008). "Earliest fossil record of bacterial–cyanobacterial mat consortia: the early Silurian Passage Creek biota (440 Ma, Virginia, USA)". Geobiology 6 (2): 120–124. doi:10.1111/j.1472-4669.2007.00143.x. PMID 18380874.
- Scott, C.; Glasspool, J. (Jul 2006). "The diversification of Paleozoic fire systems and fluctuations in atmospheric oxygen concentration" (Free full text). Proceedings of the National Academy of Sciences of the United States of America 103 (29): 10861–10865. Bibcode:2006PNAS..10310861S. doi:10.1073/pnas.0604090103. ISSN 0027-8424. PMC 1544139. PMID 16832054.
- Stewart, W.N.; Rothwell, G.W. (1993). Paleobotany and the Evolution of Plants (2nd ed.). Cambridge, UK: Cambridge University Press. ISBN 0-521-38294-7.
- Bernstein, H; Byers, GS; Michod, RE (1981). "Evolution of sexual reproduction: Importance of DNA repair, complementation, and variation". The American Naturalist 117 (4): 537–549. doi:10.1086/283734.
- Michod, RE; Gayley, TW (1992). "Masking of mutations and the evolution of sex". The American Naturalist 139 (4): 706–734. doi:10.1086/285354.
- Szövényi, Péter; Ricca, Mariana; Hock, Zsófia; Shaw, Jonathan A.; Shimizu, Kentaro K. & Wagner, Andreas (2013). "Selection is no more efficient in haploid than in diploid life stages of an angiosperm and a moss". Molecular Biology and Evolution 30: 1929–39. doi:10.1093/molbev/mst095. PMID 23686659.
- Boyce, C. K. (2008). "How green was Cooksonia? The importance of size in understanding the early evolution of physiology in the vascular plant lineage". Paleobiology 34 (2): 179–194. doi:10.1666/0094-8373(2008)034[0179:HGWCTI]2.0.CO;2. ISSN 0094-8373.
- Kerp, H., Trewin, N. H. & Hass, H. (2004). "New gametophytes from the Early Devonian Rhynie chert". Transactions of the Royal Society of Edinburgh 94: 411–428. doi:10.1017/s026359330000078x.
- Taylor, T. N., Kerp, H. & Hass, H. (2005). "Life history biology of early land plants: deciphering the gametophyte phase". Proceedings of the National Academy of Sciences of the United States of America 102 (16): 5892–5897. Bibcode:2005PNAS..102.5892T. doi:10.1073/pnas.0501985102. PMC 556298. PMID 15809414.
- Sperry, J. S. (2003). "Evolution of Water Transport and Xylem Structure". International Journal of Plant Sciences 164 (3): S115–S127. doi:10.1086/368398. JSTOR 3691719.
- Edwards, D.; Davies, K.L.; Axe, L. (1992). "A vascular conducting strand in the early land plant Cooksonia". Nature 357 (6380): 683–685. Bibcode:1992Natur.357..683E. doi:10.1038/357683a0.
- Niklas, K. J.; Smocovitis, V. (1983). "Evidence for a Conducting Strand in Early Silurian (Llandoverian) Plants: Implications for the Evolution of the Land Plants". Paleobiology 9 (2): 126–137. doi:10.2307/2400461. JSTOR 2400461.
- Niklas, K. J. (1985). "The Evolution of Tracheid Diameter in Early Vascular Plants and Its Implications on the Hydraulic Conductance of the Primary Xylem Strand". Evolution 39 (5): 1110–1122. doi:10.2307/2408738. JSTOR 2408738.
- Niklas, K.; Pratt, L. (1980). "Evidence for lignin-like constituents in Early Silurian (Llandoverian) plant fossils". Science 209 (4454): 396–397. Bibcode:1980Sci...209..396N. doi:10.1126/science.209.4454.396. PMID 17747811.
- Qiu, Y.L.; Li, L.; Wang, B.; Chen, Z.; Knoop, V.; Groth-malonek, M.; Dombrovska, O.; Lee, J.; Kent, L.; Rest, J.; et al. (2006). "The deepest divergences in land plants inferred from phylogenomic evidence". Proceedings of the National Academy of Sciences 103 (42): 15511–6. Bibcode:2006PNAS..10315511Q. doi:10.1073/pnas.0603335103. PMC 1622854. PMID 17030812.
- Stewart, W.N.; Rothwell, G.W. (1993). Paleobiology and the evolution of plants. Cambridge University Press. pp. 521pp.
- Pittermann J.; Sperry J.S.; Hacke U.G.; Wheeler J.K.; Sikkema E.H. (December 2005). "Torus-Margo Pits Help Conifers Compete with Angiosperms". Science Magazine 310 (5756): 1924. doi:10.1126/science.1120479.
- Why Christmas trees are not extinct
- Crane and Kenrick; Kenrick, Paul (1997). "Diverted development of reproductive organs: A source of morphological innovation in land plants". Plant System. And Evol. 206 (1): 161–174. doi:10.1007/BF00987946.
- Zimmermann, W. 1959. Die Phylogenie der Pflanzen. 2nd edition. Stuutgart: Gustav Fischer Verlag.
- Piazza P; et al. (2005). "Evolution of leaf developmental mechanisms". New Phytol. 167 (3): 693–710. doi:10.1111/j.1469-8137.2005.01466.x. PMID 16101907.
- Hagemann, W. 1976. Sind Farne Kormophyten? Eine Alternative zur Telomtheorie. Plant Systematics and Eveolution 124: 251-277.
- Hagemann, W. 1999. Towards an organismic concept of land plants: the marginal blastozone and the development of the vegetation body of selected frondose gametophytes of liverworts and ferns. Planr Systematics and Evolution 216: 81-133.
- Sattler, R. 1992. Process morphology: structural dynamics in development and evolution. Canadian Journal of Botany 70: 708-714.
- Sattler, R. 1998. On the origin of symmetry, branching and phyllotaxis in land plants. In: R.V. Jean and D. Barabé (eds) Symmetry in Plants. World Scientific, Singapore, pp. 775-793.
- James, P. .J. 2009. 'Tree and Leaf': A different angle. The Linnean 25, p. 17.
- Beerling D.; et al. (2001). "Evolution of leaf-form in land plants linked to atmospheric CO2 decline in the Late Palaeozoic era". Nature 410 (6826): 352–354. doi:10.1038/35066546. PMID 11268207.
- A perspective on the CO2 theory of early leaf evolution
- Rickards, R.B. (2000). "The age of the earliest club mosses: the Silurian Baragwanathia flora in Victoria, Australia" (abstract). Geological Magazine 137 (2): 207–209. doi:10.1017/S0016756800003800. Retrieved 2007-10-25.
- Kaplan, D.R. (2001). "The Science of Plant Morphology: Definition, History, and Role in Modern Biology". American Journal of Botany 88 (10): 1711–1741. doi:10.2307/3558347. JSTOR 3558347. PMID 21669604.
- Taylor, T.N.; Hass, H.; Kerp, H.; Krings, M.; Hanlin, R.T. (2005). "Perithecial ascomycetes from the 400 million year old Rhynie chert: an example of ancestral polymorphism" (abstract). Mycologia 97 (1): 269–285. doi:10.3852/mycologia.97.1.269. PMID 16389979. Retrieved 2008-04-07.
- Boyce, C.K.; Knoll, A.H. (2002). "Evolution of developmental potential and the multiple independent origins of leaves in Paleozoic vascular plants". Paleobiology 28 (1): 70–100. doi:10.1666/0094-8373(2002)028<0070:EODPAT>2.0.CO;2. ISSN 0094-8373.
- Hao, S.; Beck, C.B.; Deming, W. (2003). "Structure of the Earliest Leaves: Adaptations to High Concentrations of Atmospheric CO2". International Journal of Plant Sciences 164 (1): 71–75. doi:10.1086/344557.
- Berner, R.A.; Kothavala, Z. (2001). "Geocarb III: A Revised Model of Atmospheric CO2 over Phanerozoic Time" (abstract). American Journal of Science 301 (2): 182–204. doi:10.2475/ajs.301.2.182. Retrieved 2008-04-07.
- Beerling, D.J.; Osborne, C.P.; Chaloner, W.G. (2001). "Evolution of leaf-form in land plants linked to atmospheric CO2 decline in the Late Palaeozoic era". Nature 410 (6826): 287–394. doi:10.1038/35066546. PMID 11268207.
- Taylor, T.N.; Taylor, E.L. (1993). "The biology and evolution of fossil plants".
- Shellito, C.J.; Sloan, L.C. (2006). "Reconstructing a lost Eocene paradise: Part I. Simulating the change in global floral distribution at the initial Eocene thermal maximum". Global and Planetary Change 50 (1–2): 1–17. Bibcode:2006GPC....50....1S. doi:10.1016/j.gloplacha.2005.08.001. Retrieved 2008-04-08.
- Aerts, R. (1995). "The advantages of being evergreen". Trends in Ecology & Evolution 10 (10): 402–407. doi:10.1016/S0169-5347(00)89156-9.
- Labandeira, C.C.; Dilcher, D.L.; Davis, D.R.; Wagner, D.L. (1994). "Ninety-seven million years of angiosperm-insect association: paleobiological insights into the meaning of coevolution". Proceedings of the National Academy of Sciences of the United States of America 91 (25): 12278–12282. Bibcode:1994PNAS...9112278L. doi:10.1073/pnas.91.25.12278. PMC 45420. PMID 11607501.
- Brown V; et al. (1991). "Herbivory and the Evolution of Leaf Size and Shape". Philosophical Transactions of the Royal Society B 333 (1267): 265–272. doi:10.1098/rstb.1991.0076.
- Harrison C. J.; et al. (2005). "Independent recruitment of a conserved developmental mechanism during leaf evolution". Nature 434 (7032): 509–514. Bibcode:2005Natur.434..509H. doi:10.1038/nature03410. PMID 15791256.
- Jackson D., Hake S. (1999). "Control of Phyllotaxy in Maize by the ABPHYL1 Gene". Development 126 (2): 315–323. PMID 9847245.
- Cronk Q. (2001). "Plant evolution and development in a post-genomic context". Nature Reviews Genetics 2 (8): 607–619. doi:10.1038/35084556. PMID 11483985.
- Tattersall; et al. (2005). "The Mutant crispa Reveals Multiple Roles for PHANTASTICA in Pea Compound Leaf Development". Plant Cell 17 (4): 1046–1060. doi:10.1105/tpc.104.029447. PMC 1087985. PMID 15749758.
- Bharathan and Sinha; Sinha, NR (Dec 2001). "The Regulation of Compound Leaf Development". Plant Physiol. 127 (4): 1533–1538. doi:10.1104/pp.010867. PMC 1540187. PMID 11743098.
- Nath U; et al. (2003). "Genetic Control of Surface Curvature". Science 299 (5611): 1404–1407. doi:10.1126/science.1079354. PMID 12610308.
- Boyce, K.C.; Hotton, C.L.; Fogel, M.L.; Cody, G.D.; Hazen, R.M.; Knoll, A.H.; Hueber, F.M. (May 2007). "Devonian landscape heterogeneity recorded by a giant fungus" (PDF). Geology 35 (5): 399–402. Bibcode:2007Geo....35..399B. doi:10.1130/G23384A.1.
- Stein, W.E.; Mannolini, F.; Hernick, L.V.; Landing, E.; Berry, C.M. (2007). "Giant cladoxylopsid trees resolve the enigma of the Earth's earliest forest stumps at Gilboa". Nature 446 (7138): 904–7. Bibcode:2007Natur.446..904S. doi:10.1038/nature05705. PMID 17443185.
- Retallack, G.J.; Catt, J.A.; Chaloner, W.G. (1985). "Fossil Soils as Grounds for Interpreting the Advent of Large Plants and Animals on Land [and Discussion]". Philosophical Transactions of the Royal Society B 309 (1138): 105–142. Bibcode:1985RSPTB.309..105R. doi:10.1098/rstb.1985.0074. JSTOR 2396355.
- Dannenhoffer, J.M.; Bonamo, P.M. (1989). "Rellimia thomsonii from the Givetian of New York: Secondary Growth in Three Orders of Branching". American Journal of Botany 76 (9): 1312–1325. doi:10.2307/2444557. JSTOR 2444557.
- Davis, P; Kenrick, P. (2004). Fossil Plants. Smithsonian Books, Washington D.C. ISBN 1-58834-156-9.
- Donoghue, M.J. (2005). "Key innovations, convergence, and success: macroevolutionary lessons from plant phylogeny" (abstract). Paleobiology 31 (2): 77–93. doi:10.1666/0094-8373(2005)031[0077:KICASM]2.0.CO;2. ISSN 0094-8373. Retrieved 2008-04-07.
- Bowe, L.M.; Coat, G.; Depamphilis, C.W. (2000). "Phylogeny of seed plants based on all three genomic compartments: Extant gymnosperms are monophyletic and Gnetales' closest relatives are conifers". Proceedings of the National Academy of Sciences 97 (8): 4092–7. Bibcode:2000PNAS...97.4092B. doi:10.1073/pnas.97.8.4092. PMC 18159. PMID 10760278.
- Chaw, S.M.; Parkinson, C.L.; Cheng, Y.; Vincent, T.M.; Palmer, J.D. (2000). "Seed plant phylogeny inferred from all three plant genomes: Monophyly of extant gymnosperms and origin of Gnetales from conifers". Proceedings of the National Academy of Sciences 97 (8): 4086–91. Bibcode:2000PNAS...97.4086C. doi:10.1073/pnas.97.8.4086. PMC 18157. PMID 10760277.
- Soltis, D.E.; Soltis, P.S.; Zanis, M.J. (2002). "Phylogeny of seed plants based on evidence from eight genes" (abstract). American Journal of Botany 89 (10): 1670–81. doi:10.3732/ajb.89.10.1670. PMID 21665594. Retrieved 2008-04-08.
- Friis, E.M.; Pedersen, K.R.; Crane, P.R. (2006). "Cretaceous angiosperm flowers: Innovation and evolution in plant reproduction". Palaeogeography, Palaeoclimatology, Palaeoecology 232 (2–4): 251–293. doi:10.1016/j.palaeo.2005.07.006.
- Hilton, J.; Bateman, R.M. (2006). "Pteridosperms are the backbone of seed-plant phylogeny". The Journal of the Torrey Botanical Society 133 (1): 119–168. doi:10.3159/1095-5674(2006)133[119:PATBOS]2.0.CO;2. ISSN 1095-5674.
- Bateman, R.M.; Hilton, J.; Rudall, P.J. (2006). "Morphological and molecular phylogenetic context of the angiosperms: contrasting the 'top-down' and 'bottom-up' approaches used to infer the likely characteristics of the first flowers". Journal of Experimental Botany 57 (13): 3471–503. doi:10.1093/jxb/erl128. PMID 17056677.
- Frohlich, M.W.; Chase, M.W. (2007). "After a dozen years of progress the origin of angiosperms is still a great mystery". Nature 450 (7173): 1184–9. Bibcode:2007Natur.450.1184F. doi:10.1038/nature06393. PMID 18097399.
- Mora, C.I.; Driese, S.G.; Colarusso, L.A. (1996). "Middle to Late Paleozoic Atmospheric CO2 Levels from Soil Carbonate and Organic Matter". Science 271 (5252): 1105–1107. Bibcode:1996Sci...271.1105M. doi:10.1126/science.271.5252.1105.
- Berner, R.A. (1994). "GEOCARB II: A revised model of atmospheric CO2 over Phanerozoic time". Am. J. Sci 294 (1): 56–91. doi:10.2475/ajs.294.1.56.
- Algeo, T.J.; Berner, R.A.; Maynard, J.B.; Scheckler, S.E.; Archives, G.S.A.T. (1995). "Late Devonian Oceanic Anoxic Events and Biotic Crises: "Rooted" in the Evolution of Vascular Land Plants?". GSA Today 5 (3).
- Retallack, G. J. (1986). Wright, V. P., ed. Paleosols: their Recognition and Interpretation. Oxford: Blackwell.
- Algeo, T.J.; Scheckler, S.E. (1998). "Terrestrial-marine teleconnections in the Devonian: links between the evolution of land plants, weathering processes, and marine anoxic events". Philosophical Transactions of the Royal Society B 353 (1365): 113–130. doi:10.1098/rstb.1998.0195.
- Coudert, Yoan; Anne Dievart; Gaetan Droc; Pascal Gantet (2012). "ASL/LBD Phylogeny Suggests that Genetic Mechanisms of Root Initiation Downstream of Auxin Are Distinct in Lycophytes and Euphyllophytes". Molecular Biology and Evolution 30 (3): 569–72. doi:10.1093/molbev/mss250. ISSN 0737-4038. PMID 23112232.
- Kenrick, P.; Crane, P.R. (1997). "The origin and early evolution of plants on land". Nature 389 (6646): 33–39. Bibcode:1997Natur.389...33K. doi:10.1038/37918.
- Schüßler, A.; et al. (2001). "A new fungal phylum, the Glomeromycota: phylogeny and evolution". Mycol. Res. 105 (12): 1416. doi:10.1017/S0953756201005196.
- Simon, L.; Bousquet, J.; Lévesque, R. C.; Lalonde, M. (1993). "Origin and diversification of endomycorrhizal fungi and coincidence with vascular land plants". Nature 363 (6424): 67. Bibcode:1993Natur.363...67S. doi:10.1038/363067a0.
- Brundrett, M. C.; Franks, P. J.; Rees, M.; Bidartondo, M. I.; Leake, J. R.; Beerling, D. J. (2002). "Coevolution of roots and mycorrhizas of land plants". New Phytologist 154 (8): 275. Bibcode:2010NatCo...1E.103H. doi:10.1038/ncomms1105.
- Remy, W.; Taylor, T. N.; Hass, H.; Kerp, H. (1994). "Four Hundred-Million-Year-Old Vesicular Arbuscular Mycorrhizae". Proceedings of the National Academy of Sciences 91 (25): 11841–11843. Bibcode:1994PNAS...9111841R. doi:10.1073/pnas.91.25.11841. PMC 45331. PMID 11607500.
- Brundrett, M. C. (2002). "Coevolution of roots and mycorrhizas of land plants". New Phytologist 154 (2): 275–304. doi:10.1046/j.1469-8137.2002.00397.x.
- "Science Magazine". Runcaria, a Middle Devonian Seed Plant Precursor. American Association for the Advancement of Science. 2011. Retrieved March 22, 2011.
- Mapes, G.; Rothwell, G.W.; Haworth, M.T. (1989). "Evolution of seed dormancy". Nature 337 (6208): 645–646. Bibcode:1989Natur.337..645M. doi:10.1038/337645a0.
- Bawa, KS; Webb, CJ (1984). "Flower, fruit and seed abortion in tropical forest trees. Implications for the evolution of paternal and maternal reproductive patterns". American Journal of Botany 71 (5): 736–751. doi:10.2307/2443371.
- Cheah KS, Osborne DJ (April 1978). "DNA lesions occur with loss of viability in embryos of ageing rye seed". Nature 272 (5654): 593–9. Bibcode:1978Natur.272..593C. doi:10.1038/272593a0. PMID 19213149.
- Koppen G, Verschaeve L (2001). "The alkaline single-cell gel electrophoresis/comet assay: a way to study DNA repair in radicle cells of germinating Vicia faba". Folia Biol. (Praha) 47 (2): 50–4. PMID 11321247.
- Bray CM, West CE (December 2005). "DNA repair mechanisms in plants: crucial sensors and effectors for the maintenance of genome integrity". New Phytol. 168 (3): 511–28. doi:10.1111/j.1469-8137.2005.01548.x. PMID 16313635.
- Bernstein C, Bernstein H. (1991) Aging, Sex, and DNA Repair. Academic Press, San Diego. ISBN 0120928604 ISBN 978-0120928606
- Feild, T. S.; Brodribb, T. J.; Iglesias, A.; Chatelet, D. S.; Baresch, A.; Upchurch, G. R.; Gomez, B.; Mohr, B. A. R.; Coiffard, C.; Kvacek, J.; Jaramillo, C. (2011). "Fossil evidence for Cretaceous escalation in angiosperm leaf vein evolution". Proceedings of the National Academy of Sciences 108 (20): 8363–8366. doi:10.1073/pnas.1014456108.
- Lawton-Rauh A.; Alvarez-Buylla, ER; Purugganan, MD (2000). "Molecular evolution of flower development". Trends in Ecology and Evolution 15 (4): 144–149. doi:10.1016/S0169-5347(99)01816-9. PMID 10717683.
- Nam, J.; Depamphilis, CW; Ma, H; Nei, M (2003). "Antiquity and Evolution of the MADS-Box Gene Family Controlling Flower Development in Plants". Mol. Biol. Evol. 20 (9): 1435–1447. doi:10.1093/molbev/msg152. PMID 12777513.
- Crepet, W. L. (2000). "Progress in understanding angiosperm history, success, and relationships: Darwin's abominably "perplexing phenomenon"". Proceedings of the National Academy of Sciences 97 (24): 12939–41. doi:10.1073/pnas.97.24.12939. PMC 34068. PMID 11087846.
- Sun, G.; Ji, Q.; Dilcher, D.L.; Zheng, S.; Nixon, K.C.; Wang, X. (2002). "Archaefructaceae, a New Basal Angiosperm Family". Science 296 (5569): 899–904. Bibcode:2002Sci...296..899S. doi:10.1126/science.1069439. PMID 11988572.
- In fact, Archaeofructus probably didn't bear true flowers: see
- Wing, S.L.; Boucher, L.D. (1998). "Ecological Aspects Of The Cretaceous Flowering Plant Radiation". Annual Reviews in Earth and Planetary Sciences 26 (1): 379–421. Bibcode:1998AREPS..26..379W. doi:10.1146/annurev.earth.26.1.379.
- Wilson Nichols Stewart & Gar W. Rothwell, Paleobotany and the evolution of plants, 2nd ed., Cambridge Univ. Press 1993, p. 498
- Feild, T.S.; Arens, N.C.; Doyle, J.A.; Dawson, T.E.; Donoghue, M.J. (2004). "Dark and disturbed: a new image of early angiosperm ecology" (abstract). Paleobiology 30 (1): 82–107. doi:10.1666/0094-8373(2004)030<0082:DADANI>2.0.CO;2. ISSN 0094-8373. Retrieved 2008-04-08.
- Hilton J. and R.M. Bateman 2006. Pteridosperms are the backbone of seed plant phylogeny. J. Torrey Bot. Soc. 133(1):119-168.
- Zuccolo, A.; et al. (2011). "A physical map for the Amborella trichopoda genome sheds light on the evolution of angiosperm genome structure". Genome Biology 12 (5): R48. doi:10.1186/gb-2011-12-5-r48.
- Day, R.T. 2011a The Flowering Leaf Theory: New Evidence for the Evolution of Flowers. The Osprey, Nature Newfoundland & Labrador, Canada. 42(3):16, 24. Day, R.T. 2011b A New Way to View Flowers, What Darwin Didn't Know. Sarracenia magazine, Newfoundland Wildflower Society, Canada. 19(3):23-24.
- Medarg NG and Yanofsky M (March 2001). "Function and evolution of the plant MADS-box gene family". Nature Reviews Genetics 2 (3): 186–195. doi:10.1038/35056041.
- Jager; et al. (2003). "MADS-Box Genes in Ginkgo biloba and the Evolution of the AGAMOUS Family". Mol. Biol. And Evol. 20 (5): 842–854. doi:10.1093/molbev/msg089. PMID 12679535.
- Translational Biology: From Arabidopsis Flowers to Grass Inflorescence Architecture. Beth E. Thompson* and Sarah Hake, 2009, Plant Physiology 149:38-45.
- Lawton-Rauh A.; et al. (2000). "Molecular evolution of flower development". Trends in Ecol. And Evol. 15 (4): 144–149. doi:10.1016/S0169-5347(99)01816-9. PMID 10717683.
- Kitahara K and Matsumoto S. (2000). "Rose MADS-box genes 'MASAKO C1 and D1' homologous to class C floral identity genes". Plant Science 151 (2): 121–134. doi:10.1016/S0168-9452(99)00206-X. PMID 10808068.
- Kater M; et al. (1998). "Multiple AGAMOUS Homologs from Cucumber and Petunia Differ in Their Ability to Induce Reproductive Organ Fate". Plant Cell 10 (2): 171–182. doi:10.1105/tpc.10.2.171. PMC 143982. PMID 9490741.
- Soltis D; et al. (2007). "The floral genome: an evolutionary history of gene duplication and shifting patterns of gene expression". Trends in Plant Sci. 12 (8): 358–367. doi:10.1016/j.tplants.2007.06.012.
- Putterhill; et al. (2004). "It's time to flower: the genetic control of flowering time". BioEssays 26 (4): 353–363. doi:10.1002/bies.20021. PMID 15057934.
- Blazquez; et al. (2001). "Flowering on time: genes that regulate the floral transition". EMBO Reports 2 (12): 1078–1082. doi:10.1093/embo-reports/kve254. PMC 1084172. PMID 11743019.
- Lawton-Rauh A.; et al. (2000). "The Mostly Male Theory of Flower Evolutionary Origins: from Genes to Fossils". Sys.Botany (American Society of Plant Taxonomists) 25 (2): 155–170. doi:10.2307/2666635. JSTOR 2666635.
- Day 2011a,b
- Day, R.T. 2011a The Flowering Leaf Theory: New Evidence for the Evolution of Flowers. The Osprey, Nature Newfoundland & Labrador, Canada 42(3):16, 24. Day, R.T. 2011b A New Way to View Flowers, What Darwin Didn't Know. Sarracenia magazine, Newfoundland Wildflower Society,Canada. 19(3):23-24.
- Williams BP, Johnston IG, Covshoff S, Hibberd JM (September 2013). "Phenotypic landscape inference reveals multiple evolutionary paths to C₄ photosynthesis". eLife 2: e00961. doi:10.7554/eLife.00961.
- Christin, P. -A.; Osborne, C. P.; Chatelet, D. S.; Columbus, J. T.; Besnard, G.; Hodkinson, T. R.; Garrison, L. M.; Vorontsova, M. S.; Edwards, E. J. (2012). "Anatomical enablers and the evolution of C4 photosynthesis in grasses". Proceedings of the National Academy of Sciences 110 (4): 1381. Bibcode:2013PNAS..110.1381C. doi:10.1073/pnas.1216777110.
- Osborne, C.P.; Beerling, D.J. (2006). "Review. Nature's green revolution: the remarkable evolutionary rise of C4 plants" (PDF). Philosophical Transactions of the Royal Society B 361 (1465): 173–194. doi:10.1098/rstb.2005.1737. PMC 1626541. PMID 16553316. Retrieved 2008-02-11.
- Retallack, G. J. (1 August 1997). "Neogene Expansion of the North American Prairie". PALAIOS 12 (4): 380–390. doi:10.2307/3515337. ISSN 0883-1351. JSTOR 3515337.
- Thomasson, J.R.; Nelson, M.E.; Zakrzewski, R.J. (1986). "A Fossil Grass (Gramineae: Chloridoideae) from the Miocene with Kranz Anatomy". Science 233 (4766): 876–878. Bibcode:1986Sci...233..876T. doi:10.1126/science.233.4766.876. PMID 17752216.
- O'Leary, Marion (May 1988). "Carbon Isotopes in Photosynthesis". BioScience 38 (5): 328–336. doi:10.2307/1310735. JSTOR 1310735.
- Osborne, P.; Freckleton, P. (Feb 2009). "Ecological selection pressures for C4 photosynthesis in the grasses". Proceedings. Biological sciences / the Royal Society 276 (1663): 1753–1760. doi:10.1098/rspb.2008.1762. ISSN 0962-8452. PMC 2674487. PMID 19324795.
- Bond, W.J.; Woodward, F.I.; Midgley, G.F. (2005). "The global distribution of ecosystems in a world without fire". New Phytologist 165 (2): 525–538. doi:10.1111/j.1469-8137.2004.01252.x. PMID 15720663.
- Above 35% atmospheric oxygen, the spread of fire is unstoppable. Many models have predicted higher values and had to be revised, because there was not a total extinction of plant life.
- Pagani, M.; Zachos, J.C.; Freeman, K.H.; Tipple, B.; Bohaty, S. (2005). "Marked Decline in Atmospheric Carbon Dioxide Concentrations During the Paleogene". Science 309 (5734): 600–603. Bibcode:2005Sci...309..600P. doi:10.1126/science.1110063. PMID 15961630.
- Piperno, D.R.; Sues, H.D. (2005). "Dinosaurs Dined on Grass". Science 310 (5751): 1126–8. doi:10.1126/science.1121020. PMID 16293745.
- Prasad, V.; Stroemberg, C.A.E.; Alimohammadian, H.; Sahni, A. (2005). "Dinosaur Coprolites and the Early Evolution of Grasses and Grazers". Science(Washington) 310 (5751): 1177–1180. Bibcode:2005Sci...310.1177P. doi:10.1126/science.1118806. PMID 16293759.
- Keeley, J.E.; Rundel, P.W. (2005). "Fire and the Miocene expansion of C4 grasslands". Ecology Letters 8 (7): 683–690. doi:10.1111/j.1461-0248.2005.00767.x.
- Retallack, G.J. (2001). "Cenozoic Expansion of Grasslands and Climatic Cooling". The Journal of Geology 109 (4): 407–426. Bibcode:2001JG....109..407R. doi:10.1086/320791.
- Pichersky E. and Gang D. (2000). "Genetics and biochemistry of secondary metabolites in plants: an evolutionary perspective". Trends in Plant Sci 5 (10): 439–445. doi:10.1016/S1360-1385(00)01741-6.
- Nina Theis and Manuel Lerdau (2003). "The evolution of function in plant secondary metabolites". Int. J.Plant. Sci 164 (S3): S93–S102. doi:10.1086/374190.
- Bohlmann J.; et al. (1998). "Plant terpenoid synthases: molecular and phylogenetic analysis". Proc. Natl. Acad. Sci. U.S.A. 95 (8): 4126–4133. Bibcode:1998PNAS...95.4126B. doi:10.1073/pnas.95.8.4126. PMC 22453. PMID 9539701.
- Li A and Mao L. (2007). "Evolution of plant microRNA gene families". Cell Research 17 (3): 212–218. doi:10.1038/sj.cr.7310113. PMID 17130846.
- Doebley J.F. (2004). "The genetics of maize evolution". Ann. Rev. Gen 38 (1): 37–59. doi:10.1146/annurev.genet.38.072902.092425. PMID 15568971.
- Purugannan; et al. (2000). "Variation and Selection at the CAULIFLOWER Floral Homeotic Gene Accompanying the Evolution of Domesticated Brassica olerace". Genetics 155 (2): 855–862. PMC 1461124. PMID 10835404.
- Capelle, J.; Neema, C. (Nov 2005). "Local adaptation and population structure at a micro-geographical scale of a fungal parasite on its host plant". Journal of Evolutionary Biology 18 (6): 1445–1454. doi:10.1111/j.1420-9101.2005.00951.x.
- Burdon, J.J.; Thrall, P.H. (2009). "Co-evolution of plants and their pathogen in natural habitats". Science 324 (5928): 755–756. Bibcode:2009Sci...324..755B. doi:10.1126/science.1171663.
- Sahai, A.S.; Manocha, M.S. (1992). "Chitinases of fungi and plants: their involvement in morphogenesis and host-parasite interaction". FEMS Microbiology Reviews 11 (4): 317–338. doi:10.1111/j.1574-6976.1993.tb00004.x.
- Clay, Keith (1991). "Parasitic castration of plants by fungi". Trends in Ecology&Evolution 6 (5): 162–166. doi:10.1016/0169-5347(91)90058-6.
- Zhan, J.; Mundt, C.C.; Hoffer, M.E.; McDonald, B.A. (2002). "Local adaptation and effect of host genotype on the rate of pathogen evolution: an experimental test in a plant pathosystem". Journal of Evolutionary Biology 15 (4): 634–647. doi:10.1046/j.1420-9101.2002.00428.x.
- Capelle, J.; Neema, C. (2005). "Local adaptation and population structure at a micro-geographical scale of fungal parasite on its host plant". Journal of Evolutionary Biology 18 (6): 1445–1454. doi:10.1111/j.1420-9101.2005.00951.x.
- Delmotte, F.; Bucheli, E.; Shykoff, J.A. (1999). "Host and parasite population structure in a natural plant-pathogen system". Heredity 82: 300–308. doi:10.1038/sj.hdy.6884850.
- Desprez-Loustau, M.-L.; Vitasse, Y.; Delzon, S.; Capdevielle, X.; Marcais, B.; Kremer, A. (2010). "Are plant pathogen populations adapted for encounter with their host? A case study of phenological synchrony between oak and an obligate fungal parasite along an altitudinal gradient". Journal of Evolutionary Biology 23 (1): 87–97. doi:10.1111/j.1420-9101.2009.01881.x.
|
Considering as an example a function of a single variable, you should remember that a function uniquely relates real values, so that the result is a set of ordered pairs `(x, f(x))`.
The relation between inputs and outputs is given by an equation, called the equation of the function.
Since the outputs are also real values, you may, reasoning by analogy, use the arithmetical operations (addition, subtraction, multiplication, division) on functions to produce new functions.
Addition of two functions f(x) and g(x) is defined by `(f+g)(x) = f(x)+g(x)`.
Subtraction of two functions f(x) and g(x) is defined by `(f-g)(x) = f(x)-g(x)`.
Multiplication of two functions f(x) and g(x) is defined by `(f*g)(x) = f(x)*g(x)`.
Division of two functions f(x) and g(x) is defined by `(f/g)(x) = f(x)/g(x)`, wherever `g(x) != 0`.
Considering as examples f(x) = x-1 and g(x) = x+1, you may apply the arithmetical operations to these functions as follows:
`f(x)+g(x) = x-1+x+1 = 2x`
Notice that on substituting a value for x, the result is again a real value.
`f(x)-g(x) = (x-1)-(x+1) = -2`
`f(x)*g(x) = (x-1)(x+1) = x^2-1`
`f(x)/(g(x)) = (x-1)/(x+1)`
Hence, since functions take real values, the arithmetical operations on real numbers carry over to functions.
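To see this concretely, here is a minimal Python sketch of the same four operations; the helper names (`add`, `sub`, `mul`, `div`) are just illustrative choices:

```python
# Minimal sketch: pointwise arithmetic on functions, using the example
# f(x) = x - 1 and g(x) = x + 1 from above.

def f(x):
    return x - 1

def g(x):
    return x + 1

def add(f, g):
    return lambda x: f(x) + g(x)   # (f + g)(x) = f(x) + g(x)

def sub(f, g):
    return lambda x: f(x) - g(x)   # (f - g)(x) = f(x) - g(x)

def mul(f, g):
    return lambda x: f(x) * g(x)   # (f * g)(x) = f(x) * g(x)

def div(f, g):
    return lambda x: f(x) / g(x)   # defined only where g(x) != 0

print(add(f, g)(3))   # 6, matching 2x at x = 3
print(sub(f, g)(3))   # -2 for every x
print(mul(f, g)(3))   # 8, matching x^2 - 1 at x = 3
print(div(f, g)(3))   # 0.5, matching (x-1)/(x+1) at x = 3
```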
You must first understand what a function is! Suppose f(x) is a function of x; what does that mean? It means you take a real number "x" and give it to "f"; then "f" applies a rule and returns another real number. In that sense, f(x) is again a real number; mathematically:
f : x ---> y, where x and y belong to the real numbers.
For example, f(x) = x^2; here "f" is a rule that "squares" a real number "x" when given to it:
f: x ----- squares-----> y = x^2
so for x = 2, f(x) = 4.
So f(x) is itself a number, and all the operations that apply to a real number also apply to a function of a real number.
In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix, producing another matrix denoted AT (also written A′, Atr, tA or At). It is achieved by any one of the following equivalent actions:
- reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain AT,
- write the rows of A as the columns of AT,
- write the columns of A as the rows of AT.
Formally, the i-th row, j-th column element of AT is the j-th row, i-th column element of A: [AT]ij = [A]ji.
If A is an m × n matrix, then AT is an n × m matrix. To avoid confusion between the transpose operation and a matrix raised to the t-th power, the superscript "T" (as in AT) is reserved for the transpose operation.
For matrices A, B and scalar c we have the following properties of transpose:
- The transpose respects addition: (A + B)T = AT + BT.
- The transpose reverses products: (AB)T = BTAT. Note that the order of the factors reverses. From this one can deduce that a square matrix A is invertible if and only if AT is invertible, and in this case we have (A−1)T = (AT)−1. By induction this result extends to the general case of multiple matrices, where we find that (A1A2...Ak−1Ak)T = AkTAk−1T…A2TA1T.
- The transpose of a scalar multiple is the scalar multiple of the transpose: (cA)T = c(AT), since the transpose of a scalar is the same scalar. Together with the addition property above, this states that the transpose is a linear map from the space of m × n matrices to the space of all n × m matrices.
- The determinant of a square matrix is the same as the determinant of its transpose.
- The dot product of two column vectors a and b can be computed as the single entry of the matrix product: a · b = aTb.
- If A has only real entries, then ATA is a positive-semidefinite matrix.
- The transpose of an invertible matrix is also invertible, and its inverse is the transpose of the inverse of the original matrix. The notation A−T is sometimes used to represent either of these equivalent expressions.
- If A is a square matrix, then its eigenvalues are equal to the eigenvalues of its transpose, since they share the same characteristic polynomial.
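These properties can be spot-checked numerically. The following is a minimal NumPy sketch (the matrix sizes and random seed are arbitrary choices, and a numerical check is of course an illustration, not a proof):

```python
import numpy as np

# Spot-check of the transpose properties listed above with small
# random matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

# The transpose respects addition: (A + B)^T = A^T + B^T.
assert np.allclose((A + B).T, A.T + B.T)

# The transpose reverses products: (A C)^T = C^T A^T.
assert np.allclose((A @ C).T, C.T @ A.T)

# The determinant of a square matrix equals that of its transpose.
S = rng.standard_normal((4, 4))
assert np.isclose(np.linalg.det(S), np.linalg.det(S.T))

# The dot product as the single entry of a matrix product: a . b = a^T b.
a = rng.standard_normal((5, 1))
b = rng.standard_normal((5, 1))
assert np.isclose((a.T @ b).item(), np.dot(a.ravel(), b.ravel()))

# A^T A is positive semidefinite: its eigenvalues are all >= 0.
assert np.all(np.linalg.eigvalsh(A.T @ A) >= -1e-12)

print("all transpose properties verified on this sample")
```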
Special transpose matrices
A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, A is symmetric if AT = A.
A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, A is skew-symmetric if AT = −A.
A square complex matrix whose transpose is equal to the matrix with every entry replaced by its complex conjugate (denoted here with an overline) is called a Hermitian matrix (equivalent to the matrix being equal to its conjugate transpose); that is, A is Hermitian if AT = A̅.
A square complex matrix whose transpose is equal to its conjugate inverse is called a unitary matrix; that is, A is unitary if AT = (A̅)−1.
If A is an m × n matrix and AT is its transpose, then the result of matrix multiplication with these two matrices gives two square matrices: A AT is m × m and AT A is n × n. Furthermore, these products are symmetric matrices. Indeed, the matrix product A AT has entries that are the inner product of a row of A with a column of AT. But the columns of AT are the rows of A, so the entry corresponds to the inner product of two rows of A. If pi j is the entry of the product, it is obtained from rows i and j in A. The entry pj i is also obtained from these rows, thus pi j = pj i, and the product matrix (pi j) is symmetric. Similarly, the product AT A is a symmetric matrix.
A quick proof of the symmetry of A AT results from the fact that it is its own transpose: (A AT)T = (AT)T AT = A AT.
Transpose of a linear map
The transpose may be defined more generally: if f : V → W is a linear map between vector spaces V and W, its transpose tf : W∗ → V∗ is the linear map between the dual spaces defined by tf(φ) = φ ∘ f for every φ in W∗.
Equivalently, the transpose tf is defined by the relation tf(φ)(v) = φ(f(v)) for all φ in W∗ and v in V.
The definition of the transpose may be seen to be independent of any bilinear form on the vector spaces, unlike the adjoint (below).
Transpose of a bilinear form
Every linear map to the dual space f : V → V∗ defines a bilinear form B : V × V → F, with the relation B(v, w) = f(v)(w). By defining the transpose of this bilinear form as the bilinear form tB defined by the transpose tf : V∗∗ → V∗ i.e. tB(w, v) = tf(Ψ(w))(v), we find that B(v, w) = tB(w, v). Here, Ψ is the natural homomorphism V → V∗∗ into the double dual.
When V and W carry nondegenerate bilinear forms, these bilinear forms define an isomorphism between V and V∗, and between W and W∗, resulting in an isomorphism between the transpose and adjoint of f. The matrix of the adjoint of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms. In this context, many authors use the term transpose to refer to the adjoint as defined here.
The adjoint allows us to consider whether g : W → V is equal to f −1 : W → V. In particular, this allows the orthogonal group over a vector space V with a quadratic form to be defined without reference to matrices (nor the components thereof) as the set of all linear maps V → V for which the adjoint equals the inverse.
Over a complex vector space, one often works with sesquilinear forms (conjugate-linear in one argument) instead of bilinear forms. The Hermitian adjoint of a map between such spaces is defined similarly, and the matrix of the Hermitian adjoint is given by the conjugate transpose matrix if the bases are orthonormal.
Implementation of matrix transposition on computers
On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.
However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality.
Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an n × m matrix in-place, with O(1) additional storage or at most storage much less than mn. For n ≠ m, this involves a complicated permutation of the data elements that is non-trivial to implement in-place. Therefore, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed.
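As a concrete illustration of the easy case, here is a minimal Python sketch that transposes a square matrix stored as a flat row-major array in place with O(1) extra storage; the rectangular case would instead require the intricate cycle-following permutation discussed above. The function name and layout are illustrative choices, not a standard library API:

```python
# Sketch: in-place transpose of a square matrix held as a flat list in
# row-major order, where element (i, j) lives at index i*n + j. Only
# pairwise swaps across the main diagonal are needed.

def transpose_square_inplace(a, n):
    for i in range(n):
        for j in range(i + 1, n):
            # swap the (i, j) and (j, i) entries
            a[i * n + j], a[j * n + i] = a[j * n + i], a[i * n + j]

m = [1, 2, 3,
     4, 5, 6,
     7, 8, 9]
transpose_square_inplace(m, 3)
print(m)  # [1, 4, 7, 2, 5, 8, 3, 6, 9]
```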
The physics driving supermassive black holes is difficult to fathom even for scientists who devote their lives to studying such objects. When you add a second black hole, things get even harder to follow. Scientists have never been able to observe the collision of two black holes, but a new simulation from NASA’s Goddard Space Flight Center could offer some clarity on the physics involved.
It’s well-established at this point that large galaxies have supermassive black holes in the center. We also know that galaxies in the universe regularly merge. Yet, we see very few galaxies that have two giant black holes in the center. Those we do see aren’t close enough that their gravitational fields interact, making it difficult to identify merging black holes from light alone — we don’t know what to look for, but the new Goddard simulation could help.
The gravitational waves from smaller black holes merging have been confirmed with instruments like the National Science Foundation’s Laser Interferometer Gravitational-Wave Observatory (LIGO). A supermassive black hole merger would be much more distant, so we can’t rely on gravitational waves to pinpoint it; Earth is too noisy for ground-based detectors to pick up the signal. We do know something about the emissions from gas orbiting supermassive black holes, and that’s where Goddard researchers have focused their attention.
Supermassive black holes should pull clouds of superheated gas along with them when they merge, and even more gas would end up around a black hole if two galaxies merge. Researchers modeled two supermassive black holes orbiting each other three times to determine how that gas would behave shortly before a collision. They found that this stage of the process would be dominated by intense emissions of UV and X-rays from gas in three distinct regions. There would be a cooler ring of gas around the pair of black holes, as well as smaller, hotter discs circling each individual singularity. A stream of gas would also feed the smaller discs from the surrounding halo.
As matter flows into the black holes at a high rate, the simulation predicts that the UV light would interact with each black hole’s corona to produce stronger X-ray emissions; at a lower accretion rate, the UV light would dim. Scientists expect the X-ray emissions from a pair of merging black holes to be significantly brighter than either one could produce on its own.
It took the Blue Waters supercomputer 46 days to produce this simulation with its 9,600 CPU cores, and it’s not even complete. NASA didn’t attempt to model the center of gravity between the two orbiting masses. It’s just a black circle in the animation. There’s still a lot to learn.
Numerical weather prediction uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes or weather satellites as inputs to the models.
Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed for significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires.
Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to about only six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Although post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions, a more fundamental problem lies in the chaotic nature of the partial differential equations used to simulate the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, and the effects of terrain. In an effort to quantify the large amount of inherent uncertainty remaining in numerical predictions, ensemble forecasts have been used since the 1990s to help gauge the confidence in the forecast, and to obtain useful results farther into the future than otherwise possible. This approach analyzes multiple forecasts created with an individual forecast model or multiple models.
The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who used procedures originally developed by Vilhelm Bjerknes to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950; in 1954, Carl-Gustav Rossby's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast (i.e. routine predictions for practical use). Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force, Navy and Weather Bureau. In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model. Following Phillips' work, several groups began working to create general circulation models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.
As computers have become more powerful, the size of the initial data sets has increased and newer atmospheric models have been developed to take advantage of the added available computing power. These newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of limited area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s. By the early 1980s models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts.
The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface. As such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground was developed in the 1970s and 1980s, known as model output statistics (MOS). Starting in the 1990s, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible.
The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, terrain maps available at resolutions down to 1 kilometer (0.6 mi) globally are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmit them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. Some global models use finite differences, in which the world is represented as discrete points on a regularly spaced grid of latitude and longitude; other models use spectral methods that solve for a range of wavelengths. The data are then used in the model as the starting point for a forecast.
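As an illustration of the objective-analysis step, here is a minimal Python sketch of a Cressman-style successive-correction weighting, which spreads irregular station observations onto a regular grid; the grid, stations, values, and influence radius are all invented for the example, and real assimilation systems are far more sophisticated:

```python
import numpy as np

def cressman_weight(d, R):
    # Classic Cressman (1959) weight: 1 at zero distance, 0 at radius R.
    w = (R ** 2 - d ** 2) / (R ** 2 + d ** 2)
    return np.where(d < R, w, 0.0)

def analyze(grid_xy, obs_xy, obs_val, background, R=2.0):
    # Correct a first-guess (background) field toward nearby observations.
    analysis = background.copy()
    for k, (gx, gy) in enumerate(grid_xy):
        d = np.hypot(obs_xy[:, 0] - gx, obs_xy[:, 1] - gy)
        w = cressman_weight(d, R)
        if w.sum() > 0:
            increments = obs_val - background[k]  # simplified increments
            analysis[k] += np.sum(w * increments) / w.sum()
    return analysis

grid = np.array([(x, y) for x in range(4) for y in range(4)], dtype=float)
stations = np.array([[0.5, 0.5], [2.5, 1.0], [1.0, 3.0]])
values = np.array([15.0, 18.0, 12.0])      # e.g. temperatures in deg C
first_guess = np.full(len(grid), 14.0)     # constant background field
print(analyze(grid, stations, values, first_guess).round(2))
```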
A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere. Information from weather satellites is used where traditional data sources are not available. Commerce provides pilot reports along aircraft routes and ship reports along shipping routes. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent. Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific.
An atmospheric model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations—along with the ideal gas law—are used to evolve the density, pressure, and potential temperature scalar fields and the air velocity (wind) vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation high-resolution models as well. The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical.
These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future; the time increment for this prediction is called a time step. The equations are then applied to this new atmospheric state to find new rates of change, and these new rates of change predict the atmosphere at a yet further time step into the future. This time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes. The global models are run at varying times into the future. The UKMET Unified Model is run six days into the future, while the European Centre for Medium-Range Weather Forecasts' Integrated Forecast System and Environment Canada's Global Environmental Multiscale Model both run out to ten days into the future, and the Global Forecast System model run by the Environmental Modeling Center is run sixteen days into the future. The visual output produced by a model solution is known as a prognostic chart, or prog.
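A minimal sketch of such a time-stepping loop, using a one-dimensional linear advection equation in place of the full primitive equations (all values are illustrative):

```python
import numpy as np

# Toy illustration of the time-stepping loop described above: a 1-D
# linear advection equation du/dt + c*du/dx = 0 on a periodic domain,
# advanced with a first-order upwind finite difference. The time step
# dt is tied to the grid spacing dx through the CFL condition
# c*dt/dx <= 1, the same kind of stability constraint that links a
# model's resolution to its time step.

nx, dx = 100, 1.0                 # number of grid points and spacing
c = 0.5                           # constant advection speed
dt = 0.8 * dx / c                 # Courant number 0.8 < 1: stable
x = np.arange(nx) * dx
u = np.exp(-0.05 * (x - 20.0) ** 2)   # initial state ("the analysis")

for step in range(50):            # repeat: new state -> new tendencies
    u = u - (c * dt / dx) * (u - np.roll(u, 1))  # periodic upwind step

print(f"peak value after 50 steps: {u.max():.3f}")
```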
Some meteorological processes are too small-scale or too complex to be explicitly included in numerical weather prediction models. Parameterization is a procedure for representing these processes by relating them to variables on the scales that the model resolves. For example, the gridboxes in weather and climate models have sides that are between 5 kilometers (3 mi) and 300 kilometers (200 mi) in length. A typical cumulus cloud has a scale of less than 1 kilometer (0.6 mi), and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized by schemes of varying sophistication. In the earliest models, if a column of air in a model gridbox was conditionally unstable (essentially, the bottom was warmer and moister than the top) and the water vapor content at any point within the column became saturated then it would be overturned (the warm, moist air would begin rising), and the air in that vertical column mixed. More sophisticated schemes recognize that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sides between 5 and 25 kilometers (3 and 16 mi) can explicitly represent convective clouds, although they need to parameterize cloud microphysics which occur at a smaller scale. The formation of large-scale (stratus-type) clouds is more physically based; they form when the relative humidity reaches some prescribed value. Sub-grid scale processes need to be taken into account. Rather than assuming that clouds form at 100% relative humidity, the cloud fraction can be related to a critical value of relative humidity less than 100%, reflecting the sub-grid-scale variation that occurs in the real world.
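A toy version of such a cloud-fraction parameterization, loosely following the Sundqvist-style relative-humidity schemes used in many models (the critical value of 0.8 is a tunable assumption, not a universal constant):

```python
import numpy as np

# Toy parameterization of large-scale cloud fraction from grid-box
# relative humidity: clouds begin to form at a critical relative
# humidity below 100%, standing in for unresolved sub-grid variability.

RH_CRIT = 0.8  # tunable threshold, an assumption for this sketch

def cloud_fraction(rh):
    rh = np.clip(rh, 0.0, 1.0)
    frac = 1.0 - np.sqrt((1.0 - rh) / (1.0 - RH_CRIT))
    return np.clip(frac, 0.0, 1.0)   # 0 at RH_CRIT, 1 at saturation

for rh in (0.70, 0.80, 0.90, 1.00):
    print(f"RH = {rh:.2f} -> cloud fraction = {cloud_fraction(rh):.2f}")
```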
The amount of solar radiation reaching the ground, as well as the formation of cloud droplets occur on the molecular scale, and so they must be parameterized before they can be included in the model. Atmospheric drag produced by mountains must also be parameterized, as the limitations in the resolution of elevation contours produce significant underestimates of the drag. This method of parameterization is also done for the surface flux of energy between the ocean and the atmosphere, in order to determine realistic sea surface temperatures and type of sea ice found near the ocean's surface. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere, and thus it is important to parameterize their contribution to these processes. Within air quality models, parameterizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes.
The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models (also known as limited-area models, or LAMs) allow for the use of finer grid spacing than global models because the available computational resources are focused on a specific area instead of being spread over the globe. This allows regional models to resolve explicitly smaller-scale meteorological phenomena that cannot be represented on the coarser grid of a global model. Regional models use a global model to specify conditions at the edge of their domain in order to allow systems from outside the regional model domain to move into its area. Uncertainty and errors within regional models are introduced by the global model used for the boundary conditions of the edge of the regional model, as well as errors attributable to the regional model itself.
The vertical coordinate is handled in various ways. Lewis Fry Richardson's 1922 model used geometric height (z) as the vertical coordinate. Later models substituted the geometric coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations. This correlation between coordinate systems can be made since pressure decreases with height through the Earth's atmosphere. The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar (about 5,500 m (18,000 ft)) level, and thus was essentially two-dimensional. High-resolution models—also called mesoscale models—such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates. This coordinate system receives its name from the independent variable used to scale atmospheric pressures with respect to the pressure at the surface, and in some cases also with the pressure at the top of the domain.
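A short sketch of how a sigma coordinate maps the same model levels onto different pressure columns; the level values are illustrative:

```python
# Sketch of a sigma vertical coordinate: each model level is a fixed
# fraction of the surface pressure, so levels follow the terrain.
# Here sigma = p / p_surface (some formulations also subtract a
# model-top pressure); the level values below are arbitrary.

sigma_levels = [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]

def level_pressures(p_surface_hpa):
    return [round(s * p_surface_hpa, 1) for s in sigma_levels]

print(level_pressures(1013.0))  # column over sea level
print(level_pressures(700.0))   # column over high terrain
```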
Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions, statistical methods have been developed to attempt to correct the forecasts. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models in the late 1960s.
Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect. MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as model biases. Because MOS is run after its respective global or regional model, its production is known as post-processing. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness and surface winds.
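The core idea of MOS can be sketched with a single-predictor linear regression; operational MOS uses many predictors and long developmental datasets, and the data below are synthetic:

```python
import numpy as np

# Single-predictor sketch of MOS: regress observed station temperature
# on the raw model forecast over a training sample, then apply the
# fitted relation to correct new forecasts.

rng = np.random.default_rng(1)
model_t = rng.uniform(0.0, 30.0, 200)                    # raw model output
obs_t = 0.9 * model_t + 2.0 + rng.normal(0.0, 1.5, 200)  # synthetic "truth"

slope, intercept = np.polyfit(model_t, obs_t, 1)         # fit the relation

def mos_correct(raw_forecast):
    # Post-process a raw model value with the fitted statistical relation.
    return slope * raw_forecast + intercept

print(f"raw forecast 20.0 -> MOS-corrected {mos_correct(20.0):.1f}")
```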
In 1963, Edward Lorenz discovered the chaotic nature of the fluid dynamics equations involved in weather forecasting. Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days, making it impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of forecast skill. Furthermore, existing observation networks have poor coverage in some regions (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers. These uncertainties limit forecast model accuracy to about five or six days into the future.
Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed using an ensemble of stochastic Monte Carlo simulations to produce means and variances for the state of the atmosphere. Although this early example of an ensemble showed skill, in 1974 Cecil Leith showed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere.
Since the 1990s, ensemble forecasts have been used operationally (as routine forecasts) to account for the stochastic nature of weather processes – that is, to resolve their inherent uncertainty. This method involves analyzing multiple forecasts created with an individual forecast model by using different physical parametrizations or varying initial conditions. Starting in 1992 with ensemble forecasts prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. The ECMWF model, the Ensemble Prediction System, uses singular vectors to simulate the initial probability density, while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding.
In a single model-based approach, the ensemble forecast is usually evaluated in terms of an average of the individual forecasts concerning one forecast variable, as well as the degree of agreement between various forecasts within the ensemble system, as represented by their overall spread. Ensemble spread is diagnosed through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small to include the weather that actually occurs, which can lead to forecasters misdiagnosing model uncertainty; this problem becomes particularly severe for forecasts of the weather about ten days in advance. When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters perceive more confidence in the ensemble mean, and the forecast in general. Despite this perception, a spread-skill relationship is often weak or not found, as spread-error correlations are normally less than 0.6, and only under special circumstances range between 0.6 and 0.7. The relationship between ensemble spread and forecast skill varies substantially depending on such factors as the forecast model and the region for which the forecast is made.
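The single-model ensemble idea can be sketched with the Lorenz-63 system mentioned in connection with Edward Lorenz above: perturb the initial conditions slightly, integrate each member forward (here with simple Euler steps), and summarize with the ensemble mean and spread. Member count, perturbation size, and integration length are illustrative choices:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude Euler step of the chaotic Lorenz-63 equations.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(42)
analysis = np.array([1.0, 1.0, 20.0])                # best-guess initial state
members = analysis + rng.normal(0.0, 0.01, (20, 3))  # 20 perturbed members

for _ in range(1500):                                # march every member forward
    members = np.array([lorenz_step(m) for m in members])

print("ensemble mean  :", members.mean(axis=0).round(2))
print("ensemble spread:", members.std(axis=0).round(2))
```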
In the same way that many forecasts from a single model can be used to form an ensemble, multiple models may also be combined to produce an ensemble forecast. This approach is called multi-model ensemble forecasting, and it has been shown to improve forecasts when compared to a single model-based approach. Models within a multi-model ensemble can be adjusted for their various biases, a process known as superensemble forecasting. This type of forecast significantly reduces errors in model output.
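A hedged sketch of the superensemble idea, assuming a training period with verifying observations: each model's mean bias is removed, and regression weights are then fit by least squares. This is much simpler than operational superensemble schemes, but it shows the two ingredients — bias removal and weighting.

```python
import numpy as np

# Superensemble sketch. All arrays are synthetic placeholders, not real
# model output; the biases and noise levels are invented for illustration.
rng = np.random.default_rng(1)
obs_train = rng.normal(20.0, 4.0, size=200)

models_train = np.stack([
    obs_train + 1.5 + rng.normal(0, 1.0, 200),   # warm-biased model
    obs_train - 0.8 + rng.normal(0, 1.5, 200),   # cool-biased model
    obs_train + 0.2 + rng.normal(0, 0.7, 200),   # nearly unbiased model
])

# Step 1: remove each model's mean bias over the training period.
biases = (models_train - obs_train).mean(axis=1, keepdims=True)
debiased = models_train - biases

# Step 2: fit least-squares weights mapping members to observations.
weights, *_ = np.linalg.lstsq(debiased.T, obs_train, rcond=None)
superensemble = debiased.T @ weights
print("weights:", np.round(weights, 2))
```

In practice the fitted weights would then be applied to debiased forecasts from an independent verification period.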
Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. The concentration of pollutants in the atmosphere is determined by their transport, or mean velocity of movement through the atmosphere, their diffusion, chemical transformation, and ground deposition. In addition to pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion. Meteorological conditions such as thermal inversions can prevent surface air from rising, trapping pollutants near the surface, which makes accurate forecasts of such events crucial for air quality modeling. Urban air quality models require a very fine computational mesh, requiring the use of high-resolution mesoscale weather models; in spite of this, the quality of numerical weather guidance is the main uncertainty in air quality forecasts.
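The transport and diffusion terms can be illustrated with a one-dimensional advection-diffusion toy model. The wind speed, diffusivity, decay rate, and grid below are placeholder values; real air quality models are three-dimensional and include full chemistry and deposition.

```python
import numpy as np

# Minimal 1D advection-diffusion sketch of pollutant transport.
nx, dx, dt = 200, 100.0, 1.0      # grid cells, metres, seconds
u, D, decay = 2.0, 5.0, 1e-4      # wind (m/s), diffusivity (m^2/s), loss rate

c = np.zeros(nx)
c[50] = 100.0                      # an initial puff of pollutant

assert u * dt / dx <= 1.0          # CFL condition for the upwind scheme
for _ in range(3000):
    adv = -u * dt / dx * (c - np.roll(c, 1))                       # upwind advection
    dif = D * dt / dx**2 * (np.roll(c, 1) - 2 * c + np.roll(c, -1))  # diffusion
    c = c + adv + dif - decay * dt * c                             # plus first-order loss

print(f"plume peak now at cell {c.argmax()}, max {c.max():.2f}")
```

The puff drifts downwind while spreading and slowly decaying, which is the qualitative behavior the transport and diffusion terms describe.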
A General Circulation Model (GCM) is a mathematical model that can be used in computer simulations of the general circulation of a planetary atmosphere or ocean. An atmospheric general circulation model (AGCM) is essentially the same as a global numerical weather prediction model, and some (such as the one used in the UK Unified Model) can be configured for both short-term weather forecasts and longer-term climate predictions. Along with sea ice and land-surface components, AGCMs and oceanic GCMs (OGCM) are key components of global climate models, and are widely applied for understanding the climate and projecting climate change. For example, they can be used to simulate the El Niño-Southern Oscillation and study its forcings on global climate and the Asian monsoon circulation. For aspects of climate change, a range of man-made chemical emission scenarios can be fed into the climate models to see how an enhanced greenhouse effect would modify the Earth's climate. Versions designed for climate applications with time scales of decades to centuries were originally created in 1969 by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. When run for multiple decades, the models use a low resolution, which leaves smaller-scale interactions unresolved.
The transfer of energy between the wind blowing over the surface of an ocean and the ocean's upper layer is an important element in wave dynamics. The spectral wave transport equation is used to describe the change in wave spectrum over changing topography. It simulates wave generation, wave movement (propagation within a fluid), wave shoaling, refraction, energy transfer between waves, and wave dissipation. Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface.
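In deep water this balance is often written (here in LaTeX notation) as an energy-balance equation of roughly the following schematic form, with additional shoaling, refraction, and bottom-interaction terms appearing in shallow water:

```latex
\frac{\partial E(f,\theta)}{\partial t}
  + \nabla_{x}\cdot\big(\mathbf{c}_g\,E(f,\theta)\big)
  = S_{\mathrm{in}} + S_{\mathrm{nl}} + S_{\mathrm{ds}}
```

Here E(f, θ) is the wave energy spectrum in frequency f and direction θ, c_g is the group velocity, and the source terms on the right represent wind input, nonlinear wave-wave energy transfer, and dissipation (for example through whitecapping). The wind-input term S_in is where the surface winds from a numerical weather model enter.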
Tropical cyclone forecasting also relies on data provided by numerical weather models. Three main classes of tropical cyclone guidance models exist: Statistical models are based on an analysis of storm behavior using climatology, and correlate a storm's position and date to produce a forecast that is not based on the physics of the atmosphere at the time. Dynamical models are numerical models that solve the governing equations of fluid flow in the atmosphere; they are based on the same principles as other limited-area numerical weather prediction models but may include special computational techniques such as refined spatial domains that move along with the cyclone. Models that use elements of both approaches are called statistical-dynamical models.
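To show what "not based on the physics" means in practice, here is a toy statistical track forecast in the climatology-and-persistence spirit. The blend weight and climatological drift are invented placeholders, not coefficients from any operational scheme such as CLIPER.

```python
# Toy statistical track forecast: blend the storm's recent motion
# (persistence) with a climatological drift for its basin and month.
# All parameter values are hypothetical.

def statistical_track(lat, lon, recent_dlat, recent_dlon, hours,
                      clim_dlat=0.05, clim_dlon=-0.10, persistence_w=0.7):
    """Extrapolate a position `hours` ahead; motions are degrees per 6 h."""
    steps = hours / 6.0
    dlat = persistence_w * recent_dlat + (1 - persistence_w) * clim_dlat
    dlon = persistence_w * recent_dlon + (1 - persistence_w) * clim_dlon
    return lat + steps * dlat, lon + steps * dlon

# Example: a storm at 15N 55W moving WNW at 0.3 deg of longitude per 6 h.
print(statistical_track(15.0, -55.0, 0.1, -0.3, hours=72))
```

Nothing here knows about pressure, temperature, or winds aloft; that is exactly the information a dynamical model adds.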
In 1978, the first hurricane-tracking model based on atmospheric dynamics—the movable fine-mesh (MFM) model—began operating. Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance made possible by increased computational power, it was not until the 1980s that numerical weather prediction showed skill, and not until the 1990s that it consistently outperformed statistical or simple dynamical models. Predicting the intensity of a tropical cyclone with numerical weather prediction remains a challenge, since statistical methods continue to show higher skill than dynamical guidance.
On a molecular scale, there are two main competing reaction processes involved in the degradation of cellulose, or wood fuels, in wildfires. When there is a low amount of moisture in a cellulose fiber, volatilization of the fuel occurs; this process generates intermediate gaseous products that ultimately become the source of combustion. When moisture is present, or when enough heat is being carried away from the fiber, charring occurs. The chemical kinetics of both reactions indicate that there is a point at which the level of moisture is low enough, and/or the heating rate high enough, for combustion processes to become self-sustaining. Consequently, changes in wind speed, direction, moisture, temperature, or lapse rate at different levels of the atmosphere can have a significant impact on the behavior and growth of a wildfire. Since the wildfire acts as a heat source to the atmospheric flow, it can modify local advection patterns, introducing a feedback loop between the fire and the atmosphere.
A simplified two-dimensional model for the spread of wildfires, which used convection to represent the effects of wind and terrain and radiative heat transfer as the dominant method of heat transport, led to reaction-diffusion systems of partial differential equations. More complex models join numerical weather models or computational fluid dynamics models with a wildfire component, which allows the feedback effects between the fire and the atmosphere to be estimated. The additional complexity in the latter class of models translates to a corresponding increase in their computer power requirements. In fact, a full three-dimensional treatment of combustion via direct numerical simulation at scales relevant for atmospheric modeling is not currently practical because of the excessive computational cost such a simulation would require. Numerical weather models have limited forecast skill at spatial resolutions under 1 kilometer (0.6 mi), forcing complex wildfire models to parameterize the fire in order to calculate how the winds will be modified locally by the wildfire, and to use those modified winds to determine the rate at which the fire will spread locally. Although models such as Los Alamos' FIRETEC solve for the concentrations of fuel and oxygen, the computational grid cannot be fine enough to resolve the combustion reaction, so approximations must be made for the temperature distribution within each grid cell, as well as for the combustion reaction rates themselves.
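A minimal sketch of the reaction-diffusion idea, assuming an ignition temperature threshold and illustrative coefficients: heat diffuses across a grid, cells hot enough to burn consume fuel and release heat, and a front propagates outward. Real coupled fire-atmosphere models add wind advection, radiation, and the feedbacks described above.

```python
import numpy as np

# Toy reaction-diffusion fire spread on a 2D grid; all coefficients are
# invented for illustration, not taken from any published fire model.
n, dt, D = 100, 0.1, 1.0
T = np.zeros((n, n)); T[50, 50] = 10.0     # temperature anomaly, ignition point
fuel = np.ones((n, n))                      # fuel fraction remaining

def laplacian(a):
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

for _ in range(500):
    burning = (T > 1.0) & (fuel > 0.0)            # ignition threshold
    release = 5.0 * fuel * burning                # heat source from combustion
    fuel = np.where(burning, fuel - 0.02, fuel)   # fuel consumption
    T = T + dt * (D * laplacian(T) + release - 0.1 * T)  # diffusion + cooling

print(f"area burned: {(fuel < 1.0).mean():.1%} of the domain")
```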
As the stars move across the sky each night, people of the world have looked up and wondered about their place in the universe. Throughout history, civilizations have developed unique systems for ordering and understanding the heavens. Babylonian and Egyptian astronomers developed systems that became the basis for Greek astronomy, while societies in the Americas, China and India developed their own.
Ancient Greek astronomers’ work is richly documented, largely because the Greek tradition of inquiry was continued first by Islamic astronomers and then by early modern European astronomers.
The Sphere of the World
By the 5th century B.C., it was widely accepted that the Earth is a sphere. This is a critical point, as there is a widespread misconception that ancient peoples thought the Earth was flat. This was simply not the case.
In the 5th century B.C., Empedocles and Anaxagoras offered arguments for the spherical nature of the Earth. During a lunar eclipse, when the Earth is between the sun and the moon, they identified the shadow of the Earth on the moon. As the shadow moves across the moon it is clearly round. This would suggest that the Earth is a sphere.
Experiencing the Sphere of the Earth
Given that opportunities to observe a lunar eclipse do not come along that often, there was also evidence of the roundness of the Earth in the experiences of sailors.
When a ship appears on the horizon, it is the top of the ship that is visible first. A wide range of astronomy texts over time use this as a way to illustrate the roundness of the Earth. As the image suggests, this is exactly what one would expect on a spherical Earth. If the Earth were flat, you would expect to see the entire ship as soon as it became visible.
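The geometry can be quantified with the standard horizon approximation d ≈ √(2Rh). The eye and mast heights below are examples, chosen only to show the scale at which a ship's hull drops out of sight.

```python
import math

# Horizon-distance sketch for the "hull-first" observation, using the
# approximation d = sqrt(2 R h); heights are illustrative examples.
R = 6_371_000.0                     # mean Earth radius in metres

def horizon_m(height_m):
    return math.sqrt(2.0 * R * height_m)

eye, mast = 2.0, 30.0               # observer's eye and ship's masthead (m)
# The hull (near sea level) disappears at roughly the observer's horizon;
# the masthead stays visible out to the sum of the two horizon distances.
print(f"hull drops below horizon near {horizon_m(eye) / 1000:.1f} km")
print(f"masthead visible to about {(horizon_m(eye) + horizon_m(mast)) / 1000:.1f} km")
```

With these numbers the hull vanishes around 5 km out while the masthead remains visible to roughly 25 km, which matches the sailors' experience.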
Measuring the Size of the Earth
Lunar eclipses also allowed for another key understanding about our home here on Earth. In the 3rd century B.C., Aristarchus of Samos reasoned that he could figure out the size of the Earth based on information available during a lunar eclipse. The diagram at the right illustrates a translation of his work. The large circle is the sun, the medium circle is the Earth and the smallest circle is the moon. When the Earth is between the sun and the moon, it causes a lunar eclipse, and measuring the size of the Earth’s shadow on the moon provided part of the information he needed to calculate its size.
Eratosthenes estimated Earth’s circumference around 240 B.C. He used a different approach, measuring the shadows cast in Alexandria and Syene to calculate their angle relative to the Sun. There is some dispute about the accuracy of his calculations, as we don’t know exactly how long the units of measure were. The measurement, however, was relatively close to the actual size of the Earth. The Greeks were applying mathematics to theorize about the nature of their world. They held a range of beliefs about nature and the world, but they were, in many cases, working to ground those beliefs in an empirical exploration of what they could reason from evidence.
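The arithmetic of his method is simple enough to reproduce with the traditionally reported figures: a shadow angle of about 7.2° in Alexandria (roughly 1/50 of a full circle) while the Sun stood overhead at Syene, and a distance of about 5,000 stadia between the cities. Both numbers come down to us through later summaries, so treat them as approximate.

```python
# Eratosthenes' calculation with the traditionally reported figures.
angle_deg = 7.2                   # shadow angle at Alexandria, ~1/50 of a circle
distance_stadia = 5_000           # reported Alexandria-Syene distance

circumference = (360.0 / angle_deg) * distance_stadia
print(f"{circumference:,.0f} stadia")   # 250,000 stadia

# At roughly 157-185 m per stadion (the disputed unit), this spans about
# 39,000-46,000 km, bracketing the true circumference of about 40,000 km.
```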
Aristotle’s Elements and Cosmology
In the tradition of Plato and Empedocles before him, Aristotle argued that there were four fundamental elements: fire, air, water and earth. It is difficult for us to fully understand what this meant, as today we think about matter in very different terms. In Aristotle’s system there was no such thing as void space. All space was filled with some combination of these elements.
Aristotle asserted that you could further reduce these elements into two pairs of qualities: hot and cold, and wet and dry. The combination of each of these qualities resulted in the elements. These qualities can be replaced by their opposites, which in this system is how change happens on Earth. For example, when heated, water seemingly turns to steam, which looks like air.
The Elements in Aristotle’s Cosmic Model
In Aristotle’s cosmology, each of these four elements (earth, water, fire and air) had a weight. Earth was the heaviest, water less so, and air and fire the lightest. According to Aristotle, the lighter substances moved away from the center of the universe and the heavier elements settled into the center. While these elements attempted to sort themselves out to achieve this order, most of what people experienced involved mixed entities.
While we have seen earth, fire, air and water, everything else in the world in this system was understood as a mixture of these elements. In this perspective, transition and change in our world resulted from the mixing of the elements. For Aristotle the terrestrial is a place of birth and death, based in these elements. The heavens are a separate realm governed by their own rules.
The Wandering and Fixed Stars in the Celestial Region
In contrast to the terrestrial, the celestial region of the heavens had a fundamentally different nature. Looking at the night sky, the ancient Greeks found two primary kinds of celestial objects: the fixed stars and the wandering stars. Think of the night sky. Most of the visible objects appear to move at exactly the same speed and present themselves in exactly the same arrangement night after night. These are the fixed stars. They appear to move all together. Aside from these were a set of seven objects that behaved differently: the moon, the sun and the planets Mercury, Venus, Mars, Saturn and Jupiter each moved according to its own system. For the Greeks these were the wandering stars.
In this system the entire universe was part of a great sphere. This sphere was split into two sections, an outer celestial realm and an inner terrestrial one. The dividing line between the two was the orbit of the moon. While the Earth was a place of transition and flux, the heavens were unchanging. Aristotle posited that the heavens were made of a fifth substance, the quintessence, and that they were a place of perfect spherical motion.
The Unchanging Celestial Region
In Aristotle’s words, “In the whole range of time past, so far as our inherited records reach, no change appears to have taken place either in the whole scheme of the outermost heaven or in any of its proper parts.” It’s important to keep in mind that in Aristotle’s time there simply were not extensive collections of observational evidence. Things that looked like they were moving in the heavens, like comets, were not problematic in this model because they could be explained as occurring in the terrestrial realm.
This model of the heavens came with an underlying explanation. The celestial spheres were governed by a set of movers responsible for the motion of the wandering stars. Each of these wandering stars was thought to have an “unmoved mover”, the entity that makes it move through the heavens. For many of the Greeks this mover could be understood as the god corresponding to any given entity in the heavens.
Ptolemy’s Circles on Circles
Claudius Ptolemy (90-168) created a wealth of astronomical knowledge from his home in Alexandria, Egypt. Benefiting from hundreds of years of observation from the time of Hipparchus and Eudoxus, as well as a set of astronomical data collected by the Babylonians, Ptolemy developed a system for predicting the motion of the stars that was published in his primary astronomical work, Almagest. Ptolemy’s success at synthesizing and refining ideas and improvements in astronomy helped make his Almagest so popular that earlier works fell out of circulation. Translated into Arabic and Latin the Almagest became the primary astronomy text for the next thousand years.
The Almagest is filled with tables. In this sense the book is a tool one can use to predict the locations of the stars. Compared to earlier astronomy, the book is much more focused on serving as a useful tool than on presenting a system for describing the nature of the heavens. Trying to accurately predict the place of the stars over time resulted in a much more complicated model.
The Ptolemaic Model
By the time of Ptolemy, Greek astronomers had proposed adding circles on the circular orbits of the wandering stars (the planets, the moon and the sun) to explain their motion. These circles on circles are called epicycles. In the Greek tradition, the heavens were a place of perfect circular motion, so the way to account for the observed irregularities while preserving that perfection was to add more circles. This resulted in disorienting illustrations.
To escape the complicated nature of this extensive number of circles, Ptolemy added a series of new concepts. To accurately describe planetary motion, he needed to use eccentric circles. With the eccentric circle, the center of the planet's orbit would not be Earth but some other point. Ptolemy then needed to put the epicycles on another set of circles called deferents. So the planets moved on circles that moved on circular orbits. Ptolemy also needed to introduce equants, a tool that enabled the planets to move at different speeds as they moved around these circles. The resulting model was complex, but it had extensive predictive power.
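The kinematics of this machinery are easy to reproduce: a planet's apparent path is the sum of uniform circular motions on the deferent and the epicycle, with the deferent's center offset from Earth. The radii and angular speeds below are arbitrary, chosen only to exhibit the characteristic retrograde loops.

```python
import numpy as np

# Deferent-and-epicycle sketch: the planet rides a small circle (epicycle)
# whose centre moves along a larger circle (deferent) offset from Earth
# (the eccentric). All radii and rates here are arbitrary illustrations.
t = np.linspace(0.0, 20.0 * np.pi, 2000)
R_def, r_epi = 10.0, 2.5        # deferent and epicycle radii
w_def, w_epi = 1.0, 7.0         # angular speeds
ex, ey = 1.0, 0.0               # eccentric: deferent centre offset from Earth

x = ex + R_def * np.cos(w_def * t) + r_epi * np.cos(w_epi * t)
y = ey + R_def * np.sin(w_def * t) + r_epi * np.sin(w_epi * t)

# Viewed from Earth at the origin, the planet periodically appears to stop
# and loop backwards: the retrograde motion the epicycle was built to explain.
angle = np.unwrap(np.arctan2(y, x))
print(f"apparent motion is retrograde {(np.diff(angle) < 0).mean():.0%} of the time")
```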
Ptolemy and Aristotle’s Cosmic Legacy
Ptolemy came to represent a mathematical tradition, one focused on developing mathematical models with predictive power. Aristotle came to be known for putting forward the physical model of the heavens. Ptolemy was also interested in deploying his model of the heavens to describe its physical reality. However, his most important work was the mathematical models and data he used for predicting the motion of heavenly bodies. For a long time his name was synonymous with the model of the heavens.
June 28, 2019 is the centenary of the Treaty of Versailles. The notorious treaty, signed by Germany on June 28, 1919, was the most important of the peace treaties that ended the First World War. Although each defeated nation signed its own treaty, the entire settlement is often called the Treaty of Versailles. The war and peace settlement caused a century of statism and war. On the centenary of the Treaty of Versailles, it is appropriate to reassess its consequences and its lessons for the future.
In early January 1919, delegates from Britain, France, Italy, and the United States congregated in Paris. Initially, the Allies’ plan was to have a preliminary conference amongst themselves to decide on the peace terms to offer Germany. After the brief preliminary conference, the plan was to invite Germany to a full-scale peace conference to negotiate the terms.
As the Allies squabbled amongst themselves, the preliminary conference gradually developed into the full-scale conference. The Germans were not summoned to Paris until early May. And when they finally arrived, they were never allowed to negotiate the terms of the treaty. Thus, the Treaty of Versailles was a dictated treaty, not a negotiated treaty.
It was unfair for the Allies to dictate the terms of the treaty to Germany. This violated precedents set after the French lost the Napoleonic wars and the Franco-Prussian war. Breaking these precedents was bound to breed contempt for the treaty in Germany. The various military clauses, reparation clauses, and territorial clauses added fuel to the fire.
The military clauses of the treaty disarmed Germany. But the German disarmament was supposed to be part of general European disarmament sponsored by the League of Nations. While the Germans were disarmed by the treaty, the Allies did not fulfill their promise to disarm. This was unfair, and the Allies’ broken promise infuriated German public opinion.
German reparations received too much attention in the aftermath of the war. This was due in large part to John Maynard Keynes and his highly influential book The Economic Consequences of the Peace. Keynes’s entire analysis of the reparations problem was fatally flawed, for it was based on a mercantilist theory called the transfer problem. Economic science shows that the transfer problem does not exist.1 Keynes’s mercantilist analysis of the reparations problem incited the Germans, and has misled economists and historians ever since.
Economic historians have increasingly recognized that the reparations imposed on Germany were not as onerous as Keynes insisted.2 However, there was one feature of the settlement that was certainly unwise and unfair: the Allies did not fix the amount of reparations in the treaty. The Germans were forced to sign a blank check, and this allowed them to complain that they had been condemned to indefinite slave labor. Keynes was in charge of preparing the British Treasury’s position on reparations, and it was his tragic idea to not fix the amount of reparations in the treaty.3
The reparations section of the treaty included Article 231 – the infamous war-guilt clause. Article 231 required Germany to accept responsibility for starting the war. This clause was unfair, because Germany was not solely responsible for the war. All the major European powers share the blame. Interestingly, Keynes and John Foster Dulles were the lead draftsmen of Article 231.4
An unfortunate consequence of Keynes’s book was to shift attention away from the territorial problems with the treaty. As the title of his book suggests, Keynes’s criticisms of the treaty were entirely economic; he never criticized the territorial clauses. The really significant problems with the peace settlement were the imperialistic territorial clauses.
Prior to the conference, the Allied leaders assured the world that the peace would be based on the principle of national self-determination. Their actions proved otherwise. At the conference, the Allies imperialistically carved up the world and created new but unsustainable nation states with government coercion.
The Versailles Peace Treaty meant Germany would lose 13 percent of its territory and 10 percent of its population. In the West, Germany lost Alsace-Lorraine to France. The residents of Alsace-Lorraine were never allowed to decide for themselves whether they wanted to rejoin France. The Allies’ violation of the principle of national self-determination in Alsace-Lorraine embittered the Germans.
In the east, the Allied leaders imperialistically carved up Germany (along with Russia and Austria-Hungary) to recreate Poland, a country that had ceased to exist in 1795. To resurrect Poland, millions of Germans, Lithuanians, Byelorussians, and Ukrainians were denied the right to self-determination. Moreover, President Wilson had promised the reborn Poland access to the sea, so the Allies created the Polish Corridor. The Polish Corridor cut off East Prussia from Germany, and it contained the German city of Danzig. The Polish Corridor meant Poland was loathed in Germany.
The Allies created the new nation of Czechoslovakia by carving up Germany and the shattered Austro-Hungarian Empire. The Slovaks, Poles, Ukrainians, and Hungarians in the new Czechoslovakia were denied the right to self-determination. This was also the case for three million Germans in the Sudetenland. In Germany, Czechoslovakia was a constant reminder of the Allies’ bad faith on the issue of self-determination.
Finally, the Treaty of Versailles prohibited the unification of Germany and Austria. This prohibition violated the Austrians’ right to national self-determination. The territorial clauses regarding Alsace-Lorraine, Poland, Czechoslovakia, and Austria meant the right to self-determination had been unfairly denied to millions of Germans and other Europeans.
Any person interested in the preservation and proliferation of humanity must despise Adolf Hitler and his vile Nazi regime. But it must be asked: how could a lunatic like Hitler rise to power in Germany? The answer is the First World War and the Treaty of Versailles. The German population thought the treaty was unfair, and they wanted someone to oppose it. The treaty created the platform for Hitler’s rise to power. For this reason, the Treaty of Versailles must be considered a major cause of the Second World War.
Italy was the weakest power at the outbreak of the Great War. Although Italy had a defense treaty with Germany and Austria-Hungary before the war, these nations had not been attacked. This meant Italy was not obliged to enter the war on the side of the Central Powers. Instead, Italy remained neutral and opportunistically shopped around for territory.
The Central Powers offered Italy territory, but Italy wanted a piece of Austro-Hungarian territory called South Tyrol. This would give Italy a defensible border in the Alps. For obvious reasons, the Austro-Hungarians refused to promise their own territory to Italy. The Allies, by contrast, were happy to promise Italy this territory. On April 26, 1915, Britain, France, and Russia signed the imperialistic Treaty of London with Italy. With this agreement, Italy would get South Tyrol, Austrian Littoral, and Northern Dalmatia when the Allies won the war.
At the peace conference, President Wilson refused to uphold the Treaty of London. This enraged the Italians. The Italians were granted South Tyrol, and this pushed the Italian border up to the Alps. But the Allies broke their promises on Austrian Littoral and Northern Dalmatia. Rather, the Allies gave this territory to the new nation of Yugoslavia.
The Allies’ failure to uphold the Treaty of London left many Italians feeling cheated. The Allied victory became known as the “Mutilated Victory” in Italy. The Allies’ broken promises to the Italians at the Paris Peace Conference launched Benito Mussolini and his fascist regime into power in 1922.
The Paris Peace Conference had a significant impact on Asia. Prior to the war, the Western powers exercised imperialistic control over most of Asia. Britain controlled modern day India, Pakistan, Bangladesh, and Myanmar, along with Hong Kong and Singapore. France controlled modern day Vietnam, Cambodia, and Laos. Russia took territory in Northern China; the Netherlands had Indonesia; the United States controlled the Philippines.
China and Japan were the only real significant independent Asian countries before the war. But China was on the verge of losing its independence. The British, French, Germans, and Russians all exercised control of territory in China via concessions. But Japan was China’s greatest threat. Before the war, Japan had already taken Taiwan and Korea from China, and they controlled Manchuria. The First World War halted European expansion in China, but this left Japan unchecked to wrest away more Chinese territory.
Japan entered the war on the side of the Allies. Before the war, Germany had control of islands in the Pacific, and Japan took these islands during the war. Germany had controlled a territory in China called Shandong, and Japan seized these German concessions. The Japanese made secret imperialistic agreements with Britain during the war that would allow them to keep the German Pacific islands and Shandong.
The Japanese had two demands at the Paris Peace Conference. First, they wanted the Allies to uphold their secret wartime agreements on Shandong and the German Pacific islands. Second, they wanted a racial equality clause. In other words, the Japanese desired a clause in the peace treaty stating that Europeans and Asians were racial equals.
Like the Japanese, the Chinese joined the war on the side of the Allies. The Chinese believed that contributing to the war effort would prevent the Europeans and Japanese from expanding in China after the war. China had one major demand at the Paris Peace Conference: Shandong. This territory had a large Chinese population, and it was culturally important because it was the birthplace of Confucius. But the British had promised Shandong to the Japanese. The Allies found themselves in a dilemma over Shandong.
According to the principle of national self-determination, the Chinese had the proper claim to Shandong. Sadly, the principle of imperialism prevailed over the principle of national self-determination. The peacemakers upheld their imperialistic wartime agreement and granted Shandong to the Japanese instead of the Chinese.
Why grant Shandong to the Japanese? The Allies felt they had two choices: 1) meet the Japanese demand for a racial equality clause, or 2) reject the racial equality clause, but appease the Japanese with Shandong. The peacemakers, especially Wilson, could not overcome their racist inclinations, and they refused the Japanese demand for a racial equality clause.
The Japanese received Shandong, but they were disillusioned by the treaty. The Allies’ refusal to include the racial equality clause made the Japanese lose faith in the West. The treaty bred Japanese militarism and put Japan on the road to war with the United States.
Incredibly, the ramifications in China were just as disastrous. The peacemakers’ decision on Shandong ignited protests in Beijing on May 4, 1919. These protests evolved into the May Fourth Movement, and the Chinese communist movement was born. Seventy-five million Chinese died as a result of Mao Zedong’s communist regime. Eventually, Communism would spread to Korea and Vietnam, resulting in the Korean War, the Vietnam War, and ongoing tension between North Korea and the West.
The Middle East
The geopolitical problems in the Middle East over the last century have their roots in the First World War and the Paris Peace Conference. Before the war, the British controlled Egypt, the French controlled Algeria and Tunisia, and Italy controlled Libya. By contrast, the Ottoman Empire controlled modern day Iraq, Syria, Lebanon, Israel, Jordan, and Saudi Arabia.
At the beginning of the war, the Allied Powers made secret imperialistic agreements to carve up the Ottoman Empire. For their part in the war, the Russians demanded the expansion of their territory down to Constantinople. This was a sensitive issue for the British, for it would give Russia influence in the Mediterranean waters around the Suez Canal. India was the jewel in the crown of the British Empire, and Britain shuttled its troops to India through the canal. In short, the Suez Canal was essential to Britain’s imperial control over India.
The British would agree to the Russian demand for Constantinople, but only if Britain was guaranteed certain territory around the Suez Canal. This territory included modern day Egypt, Israel, Jordan, and southern Iraq. British control of these territories would create a bubble around the Suez Canal and thereby secure the British route to India.
The Sykes-Picot Agreement of January 3, 1916 was a secret treaty between Britain and France to carve up the Middle East after the war. France would get the territory of modern day Syria, Lebanon, and northern Iraq, while Britain would get the territory of modern day Egypt, Israel, Jordan, and southern Iraq. Later, the Russians and Italians assented to the treaty.
Unfortunately, the British made promises to the Arabs inside the Ottoman Empire that were incompatible with Sykes-Picot. The British and French controlled territory in India and North Africa that contained vast numbers of Muslims. The British and French were terrified that the Turkish sultan would incite Muslim revolts inside their empires. They were desperate to knock the Ottomans out of the war to avoid an Islamic uprising.
British military campaigns against the Ottomans were disastrous. As a result, the British devised a plan to destabilize the Ottoman Empire from within. The plan was to have the Arabs revolt against the Turks. The British promised Hussein, the Sharif of Mecca, that he would be made king of a unified and independent Arab state after the war if he revolted against the Turks. Hussein agreed. His son Faisal, advised by Lawrence of Arabia, led the Arab revolt against the Turks. The Arab revolt played an important role in the collapse of the Ottoman Empire.
At the peace conference, the British broke their promise to establish a unified and independent Arab state. Instead, they created a handful of new nations in the Middle East that would be dominated by Britain and France. In 1920, the French took control of the Kingdom of Syria. The British had convinced the French to make Faisal the ruler of Syria, but he had no independence; he was exiled by the French in July 1920. The French created the state of Lebanon in 1920, and transferred territory from Syria to Lebanon. This act of imperialism still irritates Syrians today.
The imperialistic Sykes-Picot Agreement led to the creation of Iraq. According to Sykes-Picot, the British would get Baghdad and Basra, while the French would get Mosul in the North. The British realized the importance of oil much earlier than the French, and the British suspected there was oil in Mosul. In 1918, the British convinced the French to relinquish their claim to Mosul. In this way, the British took control of the entire territory that is now Iraq. The British formed the Kingdom of Iraq in 1921, and Faisal was made king.
The British promise for an independent Arab state was incompatible with Sykes-Picot. But British promises to European Jews further complicated the situation in the region. On November 2, 1917, the British government issued the Balfour Declaration — a public statement supporting a homeland for the Jewish people in Palestine. Czarist Russia was the great anti-Semitic power before the war, and this made many Jews reluctant to support the Allies. The English believed the Balfour Declaration would foster Jewish support of the Allies and weaken Jewish support for the Central Powers.
Sykes-Picot gave the British control of Palestine. In 1921, the British carved Jordan out of Palestine and made Hussein’s son Abdullah king. However, the creation of Jordan infuriated both the Jews and the Arabs. On the one hand, the Jews thought the Balfour Declaration granted them the entire territory of Palestine. Thus, they viewed the creation of Jordan as a broken promise. On the other hand, the Arabs in Palestine revolted at the idea of a Jewish homeland in their territory. There has been tension between the Jews and Muslims in the region ever since.
The war and peace treaties resulted in the creation of new and unsustainable nation states in the Middle East. For those living in the Middle East, even the names Iraq, Syria, Lebanon, Jordan, Israel, etc., are constant reminders that Britain and France betrayed the Arabs. In the end, British and French imperialism in the Middle East caused a century of turmoil in the region.
A Lesson for the Future
The First World War and the Paris Peace Conference led to Nazism in Germany, fascism in Italy, militarism in Japan, extremism in the Middle East, and communism in Russia, China, Korea, and Vietnam. What must be learned from the war and peace settlement? Here is the most important lesson: the free market economy is the only way to lasting world peace.
The war was caused by Europe’s imperialistic intervention in foreign trade. In the decades before the war, there was a massive drive by the European powers to expand their empires. This put the European powers on a collision course. Why the imperial expansion? The European powers did not allow other powers to trade freely in their empires. For this reason, the European powers viewed imperial expansion as the only way to gain new markets for their goods. Europe’s rejection of the principle of free trade was the fundamental cause of the First World War.
The peacemakers at the Paris Peace Conference could not establish a durable peace, because they refused to renounce the imperialistic system that caused the war in the first place. During the war, the British and French made imperialistic agreements to carve up the globe after the war. At the peace conference, they used the peace treaties to enshrine British and French imperialism. The Allies used the treaty to fortify their empires, and the result was a century of war.
Government intervention in the free market economy is the fundamental cause of all modern wars. Government intervention in the domestic economy makes it impossible for domestic producers to compete with foreign producers. To level the playing field, government must enact protectionist measures to shield hobbled domestic producers from foreign competition. Any government that intervenes in the domestic economy must inevitably embrace protectionism. And protectionism leads to conflict and war.
Free trade between nations is required to maintain world peace. But free trade between nations is impossible if nations intervene domestically. External free trade requires internal free trade. The path to lasting world peace starts with the free market economy at home. The great lesson from the First World War and Treaty of Versailles is this: the free market economy at home and abroad is the only way to establish durable peace between nations.
1. As Ludwig von Mises writes, “There is no such thing as a ‘transfer’ problem.” Omnipotent Government (Indianapolis: Liberty Fund, 2011), p. 241. Even Robert Skidelsky, Keynes’s greatest living defender, had to admit: “If we stick to the pure theory of the matter, Keynes was wrong.” John Maynard Keynes: Economist as Savior (New York: Viking, 1992), p. 311. For a critique of The Economic Consequences of the Peace, see “Keynes and the First World War,” E.W. Fuller and R.C. Whitten, Libertarian Papers (vol. 9, no. 1, 2017), available at: http://libertarianpapers.org/fuller-whitten-keynes-first-world-war/
2. Richard Davenport-Hines writes, “Economic historians now tend to believe … that Germany could have afforded to pay the stipulated reparations, which were not as irrational as Keynes claimed.” Universal Man (New York: Basic Books, 2015), p. 119. Also see Mises, Omnipotent Government (Indianapolis: Liberty Fund, 2011), pp. 236-41.
3. Charles Hession writes, “when the conference became bogged down on the amount of reparations to be demanded of the defeated nation, it was his suggestion that the exact sum be left undetermined.” John Maynard Keynes (New York: Macmillan, 1984), p. 147. For documentation, see https://mises.org/wire/keynes-and-versailles-treatys-infamous-article-231
4. According to Donald Moggridge, “The significant draftsman of the clause were Keynes and John Foster Dulles.” Maynard Keynes: An Economist’s Biography (New York: Routledge, 1992), pp. 308, 331, 346. For documentation, see https://mises.org/wire/keynes-and-versailles-treatys-infamous-article-231
The Long Depression was a worldwide price and economic recession, beginning in 1873 and running either through March 1879, or 1896, depending on the metrics used. It was most severe in Europe and the United States, which had been experiencing strong economic growth fueled by the Second Industrial Revolution in the decade following the American Civil War. The episode was labeled the "Great Depression" at the time, and it held that designation until the Great Depression of the 1930s. Though a period of general deflation and a general contraction, it did not have the severe economic retrogression of the Great Depression.
It was most notable in Western Europe and North America, at least in part because reliable data from the period is most readily available in those parts of the world. The United Kingdom is often considered to have been the hardest hit; during this period it lost some of its large industrial lead over the economies of continental Europe. While it was occurring, the view was prominent that the economy of the United Kingdom had been in continuous depression from 1873 to as late as 1896 and some texts refer to the period as the Great Depression of 1873–1896, with financial and manufacturing losses reinforced by a long recession in the UK agricultural sector.
In the United States, economists typically refer to the Long Depression as the Depression of 1873–1879, kicked off by the Panic of 1873, and followed by the Panic of 1893, book-ending the entire period of the wider Long Depression. The U.S. National Bureau of Economic Research dates the contraction following the panic as lasting from October 1873 to March 1879. At 65 months, it is the longest-lasting contraction identified by the NBER, eclipsing the Great Depression's 43 months of contraction. In the United States, from 1873 to 1879, 18,000 businesses went bankrupt, including 89 railroads. Ten states and hundreds of banks went bankrupt. Unemployment peaked in 1878, long after the initial financial panic of 1873 had ended. Different sources peg the peak U.S. unemployment rate anywhere from 8.25% to 14%.
The period preceding the depression was dominated by several major military conflicts and a period of economic expansion. In Europe, the end of the Franco-Prussian War yielded a new political order in Germany, and the £200 million indemnity imposed on France led to an inflationary investment boom in Germany and Central Europe. New technologies in industry, such as the Bessemer converter, were being rapidly applied; railroads were booming. In the United States, the end of the Civil War and a brief post-war recession (1865–1867) gave way to an investment boom, focused especially on railroads on public lands in the Western United States, an expansion funded largely by foreign investors.
In 1873, during a decline in the value of silver—exacerbated by the end of the German Empire's production of thaler coins—the US government passed the Coinage Act of 1873 in April. This essentially ended the bimetallic standard of the United States, forcing it for the first time onto a pure gold standard. This measure, referred to by its opponents as "the Crime of 1873" and the topic of William Jennings Bryan's Cross of Gold speech in 1896, forced a contraction of the money supply in the United States. It also drove down silver prices further, even as new silver mines were being established in Nevada, which stimulated mining investment but increased supply as demand was falling. Silver miners arrived at US mints, unaware of the ban on production of silver coins, only to find their product no longer welcome. By September, the US economy was in crisis, with deflation causing banking panics and destabilizing business investment, climaxing in the Panic of 1873.
The Panic of 1873 has been described as "the first truly international crisis". The optimism that had been driving booming stock prices in central Europe had reached a fever pitch, and fears of a bubble culminated in a panic in Vienna beginning in April 1873. The collapse of the Vienna Stock Exchange began on May 8, 1873, and continued until May 10, when the exchange was closed; when it was reopened three days later, the panic seemed to have faded, and appeared confined to Austria-Hungary. Financial panic arrived in the Americas only months later on Black Thursday, September 18, 1873, after the failure of the banking house of Jay Cooke and Company over the Northern Pacific Railway. The Northern Pacific railway had been given 40 million acres (160,000 km2) of public land in the Western United States and Cooke sought $100,000,000 in capital for the company; the bank failed when the bond issue proved unsalable, and was shortly followed by several other major banks. The New York Stock Exchange closed for ten days on September 20.
The financial contagion then returned to Europe, provoking a second panic in Vienna and further failures in continental Europe before receding. France, which had been experiencing deflation in the years preceding the crash, was spared financial calamity for the moment, as was Britain.
Some have argued the depression was rooted in the 1870 Franco-Prussian War that devastated the French economy and, under the Treaty of Frankfurt, forced that country to make large war reparations payments to Germany. The primary cause of the price depression in the United States was the tight monetary policy that the United States followed to get back to the gold standard after the Civil War. The U.S. government was taking money out of circulation to achieve this goal, so there was less money available to facilitate trade. Because of this monetary policy, the price of silver started to fall, causing considerable losses of asset values; by most accounts, after 1879 production was growing, further putting downward pressure on prices due to increased industrial productivity, trade and competition.
In the US, the speculative nature of financing, driven both by the greenback (paper currency issued to pay for the Civil War) and by rampant fraud in the building of the Union Pacific Railway up to 1869, culminated in the Crédit Mobilier scandal. Railway overbuilding and weak markets collapsed the bubble in 1873. Both the Union Pacific and the Northern Pacific lines were central to the collapse; another railway bubble was the Railway Mania in the United Kingdom.
Because of the Panic of 1873, governments depegged their currencies to save money. The demonetization of silver by European and North American governments in the early 1870s was certainly a contributing factor. The US Coinage Act of 1873 was met with great opposition by farmers and miners, as silver was seen as more of a monetary benefit to rural areas than to banks in big cities. In addition, there were US citizens who advocated the continuance of government-issued fiat money (United States Notes) to avoid deflation and promote exports. The western US states were outraged; Nevada, Colorado, and Idaho were huge silver producers with productive mines, and for a few years mining abated. Resumption of silver dollar coinage was authorized by the Bland–Allison Act of 1878. The resumption of the US government buying silver was enacted in 1890 with the Sherman Silver Purchase Act.
Monetarists believe that the 1873 depression was caused by shortages of gold that undermined the gold standard, and that the 1848 California Gold Rush, 1886 Witwatersrand Gold Rush in South Africa and the 1896–99 Klondike Gold Rush helped alleviate such crises. Other analyses have pointed to developmental surges (see Kondratiev wave), theorizing that the Second Industrial Revolution was causing large shifts in the economies of many states, imposing transition costs, which may also have played a role in causing the depression.
Like the later Great Depression, the Long Depression affected different countries at different times, at different rates, and some countries accomplished rapid growth over certain periods. Globally, however, the 1870s, 1880s, and 1890s were a period of falling price levels and rates of economic growth significantly below the periods preceding and following.
Between 1870 and 1890, iron production in the five largest producing countries more than doubled, from 11 million tons to 23 million tons, steel production increased twentyfold (half a million tons to 11 million tons), and railroad development boomed. But at the same time, prices in several markets collapsed - the price of grain in 1894 was only a third what it had been in 1867, and the price of cotton fell by nearly 50 percent in just the five years from 1872 to 1877, imposing great hardship on farmers and planters. This collapse provoked protectionism in many countries, such as France, Germany, and the United States, while triggering mass emigration from other countries such as Italy, Spain, Austria-Hungary, and Russia. Similarly, while the production of iron doubled between the 1870s and 1890s, the price of iron halved.
Many countries experienced significantly lower growth rates relative to what they had experienced earlier in the 19th century and to what they experienced afterwards.
In the late 1870s the economic situation in Chile deteriorated. Chilean wheat exports were outcompeted by production in Canada, Russia and Argentina and Chilean copper was largely replaced in international markets by copper from the United States and Spain. Income from silver mining in Chile also dropped. Aníbal Pinto, president of Chile in 1878, expressed his concerns the following way:
“If a new mining discovery or some novelty of that sort does not come to improve the actual situation, the crisis that has long been felt will worsen.” (Aníbal Pinto, president of Chile, 1878)
This "mining discovery" came, according to historians Gabriel Salazar and Julio Pinto, into existence through the conquest of Bolivian and Peruvian lands in the War of the Pacific. It has been argued that economic situation and the view of new wealth in the nitrate was the true reason for the Chilean elite to go into war with its neighbors.
France's experience was somewhat unusual. Having been defeated in the Franco-Prussian War, the country was required to pay £200 million in reparations to the Germans and was already reeling when the 1873 crash occurred. The French adopted a policy of deliberate deflation while paying off the reparations.
While the United States resumed growth for a time in the 1880s, the Paris Bourse crash of 1882 sent France careening into depression, one which "lasted longer and probably cost France more than any other in the 19th century". The Union Générale, a French bank, failed in 1882, prompting the French to withdraw three million pounds from the Bank of England and triggering a collapse in French stock prices.
The financial crisis was compounded by diseases impacting the wine and silk industries. French capital accumulation and foreign investment plummeted to the lowest levels experienced by France in the latter half of the 19th century. After a boom in new investment banks following the end of the Franco-Prussian War, the destruction of the French banking industry wrought by the crash cast a pall over the financial sector that lasted until the dawn of the 20th century. French finances were further sunk by failing investments abroad, principally in railroads and buildings. The French net national product declined over the ten years from 1882 to 1892.
A ten-year tariff war broke out between France and Italy after 1887, damaging Franco-Italian relations which had prospered during Italian unification. As France was Italy's biggest investor, the liquidation of French assets in the country was especially damaging.
The Russian experience was similar to the US experience - three separate recessions, concentrated in manufacturing, occurred in the period (1874–1877, 1881–1886, and 1891–1892), separated by periods of recovery.
The United Kingdom, which had previously experienced crises every decade since the 1820s, was initially less affected by this financial crisis, even though the Bank of England kept interest rates as high as 9 percent in the 1870s.
The 1878 failure of the City of Glasgow Bank in Scotland arose through a combination of fraud and speculative investments in Australian and New Zealand companies (agriculture and mining) and in American railroads.
Building on an 1870 land reform, and in the wake of the 1879 famine, thousands of Irish tenant farmers affected by depressed producer prices and high rents launched the Land War in 1879, which resulted in the reforming Irish Land Acts.
Industry | % decline in output
Iron and steel | 45%
In the United States, the Long Depression began with the Panic of 1873; as noted above, the National Bureau of Economic Research dates the contraction following the panic as lasting from October 1873 to March 1879. Figures from Milton Friedman and Anna Schwartz show net national product increased 3 percent per year from 1869 to 1879 and real national product grew at 6.8 percent per year during that time frame. However, since between 1869 and 1879 the population of the United States increased by over 17.5 percent, per capita NNP growth was lower. Following the end of the episode in 1879, the U.S. economy would remain unstable, experiencing recessions for 114 of the 253 months until January 1901.
The dramatic shift in prices mauled nominal wages - in the United States, nominal wages declined by one-quarter during the 1870s, and as much as one-half in some places, such as Pennsylvania. Although real wages had enjoyed robust growth in the aftermath of the American Civil War, increasing by nearly a quarter between 1865 and 1873, they stagnated until the 1880s, posting no real growth, before resuming their robust rate of expansion in the later 1880s. The collapse of cotton prices devastated the already war-ravaged economy of the southern United States. Although farm prices fell dramatically, American agriculture continued to expand production.
Thousands of American businesses failed, defaulting on more than a billion dollars of debt. One in four laborers in New York were out of work in the winter of 1873–1874 and, nationally, a million became unemployed.
The sectors which experienced the most severe declines in output were manufacturing, construction, and railroads. The railroads had been a tremendous engine of growth in the years before the crisis, yielding a 50% increase in railroad mileage from 1867 to 1873. After absorbing as much as 20% of US capital investment in the years preceding the crash, this expansion came to a dramatic end in 1873; between 1873 and 1878, the total amount of railroad mileage in the United States barely increased at all.
The Freedman's Savings Bank was a typical casualty of the financial crisis. Chartered in 1865 in the aftermath of the American Civil War, the bank had been established to advance the economic welfare of America's newly emancipated freedmen. In the early 1870s, the bank had joined in the speculative fever, investing in real estate and unsecured loans to railroads; its collapse in 1874 was a severe blow to African-Americans.
Various administrations have closed in gloom and weakness ... but no other has closed in such paralysis and discredit as (in all domestic fields) did Grant's. The President was without policies or popular support. He was compelled to remake his Cabinet under a grueling fire from reformers and investigators; half its members were utterly inexperienced, several others discredited, one was even disgraced. The personnel of the departments was largely demoralized. The party that autumn appealed for votes on the implicit ground that the next Administration would be totally unlike the one in office. In its centennial year, a year of deepest economic depression, the nation drifted almost rudderless.
Recovery began in 1878. The mileage of railroad track laid down increased from 2,665 mi (4,289 km) in 1878 to 11,568 mi in 1882. Construction began recovery by 1879; the value of building permits increased two and a half times between 1878 and 1883, and unemployment fell to 2.5% in spite of (or perhaps facilitated by) high immigration.
The recovery, however, proved short-lived. Business profits declined steeply between 1882 and 1884. The recovery in railroad construction reversed itself, falling from 11,569 mi (18,619 km) of track laid in 1882 to 2,866 mi (4,612 km) of track laid in 1885; the price of steel rails collapsed from $71/ton in 1880 to $20/ton in 1884. Manufacturing again collapsed - durable goods output fell by a quarter again. The decline became another financial crisis in 1884, when multiple New York banks collapsed; simultaneously, in 1883–1884, tens of millions of dollars of foreign-owned American securities were sold out of fears that the United States was preparing to abandon the gold standard. This financial panic destroyed eleven New York banks, more than a hundred state banks, and led to defaults on at least $32 million worth of debt. Unemployment, which had stood at 2.5% between recessions, surged to 7.5% in 1884–1885, and 13% in the northeastern United States, even as immigration plunged in response to deteriorating labor markets.
This second recession led to further deterioration of farm prices. Kansas farmers burned their own corn in 1885 because it was worth less than other fuels such as coal or wood. The country began to recover in 1885.
The period preceding the Long Depression had been one of increasing economic internationalism, championed by efforts such as the Latin Monetary Union, many of which then were derailed or stunted by the impacts of economic uncertainty. The extraordinary collapse of farm prices provoked a protectionist response in many nations. Rejecting the free trade policies of the Second Empire, French president Adolphe Thiers led the new Third Republic to protectionism, which led ultimately to the stringent Méline tariff in 1892. Germany's agrarian Junker aristocracy, under attack by cheap, imported grain, successfully agitated for a protective tariff in 1879 in Otto von Bismarck's Germany over the protests of his National Liberal Party allies. In 1887, Italy and France embarked on a bitter tariff war. In the United States, Benjamin Harrison won the 1888 US presidential election on a protectionist pledge.
As a result of the protectionist policies enacted by the world's major trading nations, the global merchant marine fleet posted no significant growth from 1870 to 1890 before it nearly doubled in tonnage in the prewar economic boom that followed. Only the United Kingdom and the Netherlands remained committed to low tariffs.
In 1874, a year after the 1873 crash, the United States Congress passed legislation called the Inflation Bill of 1874 designed to confront the issue of falling prices by injecting fresh greenbacks into the money supply. Under pressure from business interests, President Ulysses S. Grant vetoed the measure. In 1878, Congress overrode President Rutherford B. Hayes's veto to pass the Silver Purchase Act, a similar but more successful attempt to promote "easy money".
The United States endured its first nationwide strike in 1877, the Great Railroad Strike of 1877. This led to widespread unrest and often violence in many major cities and industrial hubs including Baltimore, Philadelphia, Pittsburgh, Reading, Saint Louis, Scranton, and Shamokin.
The Long Depression contributed to the revival of colonialism leading to the New Imperialism period, symbolized by the scramble for Africa, as the western powers sought new markets for their surplus accumulated capital. According to Hannah Arendt's The Origins of Totalitarianism (1951), the "unlimited expansion of power" followed the "unlimited expansion of capital".
In the United States, beginning in 1878, the rebuilding, extension, and refinancing of the western railways, together with the wholesale giveaway of water, timber, fish, and minerals in what had previously been Indian territory, characterized a rising market. This led to the expansion of markets and industry, together with the robber barons of railroad ownership, and culminated in the genteel 1880s and 1890s. For the wealthy few, the outcome was the Gilded Age. The cycle repeated itself with the Panic of 1893, another huge market crash.
In the United States, the National Bureau of Economic Research dates the recession as ending in March 1879. In January 1879, the United States returned to the gold standard, which it had abandoned during the Civil War; according to economist Rendigs Fels, the gold standard put a floor under the deflation, and this was further boosted by especially good agricultural production in 1879. The view that a single recession lasted from 1873 to 1896 or 1897 is not supported by most modern reviews of the period. It has even been suggested that the trough of this business cycle may have occurred as early as 1875. In fact, from 1869 to 1879, the US economy grew at a rate of 6.8% for real net national product (NNP) and 4.5% for real NNP per capita. Real wages were flat from 1869 to 1879, while from 1879 to 1896, nominal wages rose 23% and prices fell 4.2%.
Irving Fisher believed that the Panic of 1873 and the severity of the contractions that followed it could be explained by debt and deflation. In his account, a financial panic triggers catastrophic deleveraging as institutions attempt to sell assets and increase capital reserves; the selloff collapses asset prices and produces deflation, which in turn prompts financial institutions to sell off more assets, furthering deflation and straining capital ratios. Fisher believed that had governments or private enterprise embarked on efforts to reflate financial markets, the crisis would have been less severe.
David Ames Wells (1890) wrote of the technological advancements during the period 1870–90, which included the Long Depression. Wells gives an account of the changes in the world economy transitioning into the Second Industrial Revolution in which he documents changes in trade, such as triple expansion steam shipping, railroads, the effect of the international telegraph network and the opening of the Suez Canal. Wells gives numerous examples of productivity increases in various industries and discusses the problems of excess capacity and market saturation.
Wells' opening sentence:
The economic changes that have occurred during the last quarter of a century - or during the present generation of living men - have unquestionably been more important and more varied than during any period of the world's history.
Other changes Wells mentions are reductions in warehousing and inventories, elimination of middlemen, economies of scale, the decline of craftsmen, and the displacement of agricultural workers. About the whole 1870–90 period Wells said:
Some of these changes have been destructive, and all of them have inevitably occasioned, and for a long time yet will continue to occasion, great disturbances in old methods, and entail losses of capital and changes in occupation on the part of individuals. And yet the world wonders, and commissions of great states inquire, without coming to definite conclusions, why trade and industry in recent years has been universally and abnormally disturbed and depressed.
Wells notes that many of the government inquiries on the "depression of prices" (deflation) found various reasons such as the scarcity of gold and silver. Wells showed that the US money supply actually grew over the period of the deflation. Wells noted that deflation lowered the cost of only goods that benefited from improved methods of manufacturing and transportation. Goods produced by craftsmen and many services did not decrease in value, and the cost of labor actually increased. Also, deflation did not occur in countries that did not have modern manufacturing, transportation, and communications.
Nobel laureate economist Milton Friedman, author of A Monetary History of the United States, on the other hand, blamed this prolonged economic crisis on the imposition of a new gold standard, part of which he referred to by its traditional name, the Crime of 1873. This forced shift to a currency whose supply was limited by nature, and thus unable to expand with demand, caused a series of economic and monetary contractions that plagued the entire period of the Long Depression. Friedman additionally pointed to the expansion of the gold supply through gold cyanidation as a contributor to the eventual recovery. Murray Rothbard, in his book A History of Money and Banking in the United States, argues that the Long Depression was only a misunderstood recession, since real wages and production were actually increasing throughout the period. Like Friedman, he attributes falling prices to the resumption of a deflationary gold standard in the U.S. after the Civil War.
Most economic historians see this period as negative for the most industrialized nations. Many argue that most of the stagnation was caused by a monetary contraction resulting from the abandonment of the bimetallic standard in favor of a new gold standard, starting with the Coinage Act of 1873.
Other economic historians have complained about the characterization of this period as a "depression" because of conflicting economic statistics that cast doubt on this interpretation. They note it saw a relatively large expansion of industry, of railroads, of physical output, of net national product, and of real per capita income.
As economists Milton Friedman and Anna J. Schwartz have noted, the decade from 1869 to 1879 saw a growth of 3 percent per year in money national product, an outstanding real national product growth of 6.8 percent per year, and a rise of 4.5 percent per year in real product per capita. Even the alleged "monetary contraction" never took place, the money supply increasing by 2.7 percent per year. From 1873 through 1878, before another spurt of monetary expansion, the total supply of bank money rose from $1.964 billion to $2.221 billion, a rise of 13.1 percent, or 2.6 percent per year. In short, it was a modest but definite rise, not a contraction. Although per-capita nominal income declined very gradually from 1873 to 1879, that decline was more than offset by a gradual increase over the course of the next 17 years.
Furthermore, real per capita income either stayed approximately constant (1873–1880; 1883–1885) or rose (1881–1882; 1886–1896), so the average consumer appears to have been considerably better off at the end of the "depression" than before. Studies of other countries where prices also tumbled, including the US, Germany, France, and Italy, reported more markedly positive trends in both nominal and real per capita income figures. Profits generally were also not adversely affected by deflation, although they declined (particularly in Britain) in industries struggling against superior, foreign competition. Furthermore, some economists argue a falling general price level is not inherently harmful to an economy and cite the economic growth of the period as evidence. As economist Murray Rothbard has stated:
Unfortunately, most historians and economists are conditioned to believe that steadily and sharply falling prices must result in depression: hence their amazement at the obvious prosperity and economic growth during this era. For they have overlooked the fact that in the natural course of events, when government and the banking system do not increase the money supply very rapidly, free-market capitalism will result in an increase of production and economic growth so great as to swamp the increase of money supply. Prices will fall, and the consequences will be not depression or stagnation, but prosperity (since costs are falling, too), economic growth, and the spread of the increased living standard to all the consumers.
Accompanying the overall growth in real prosperity was a marked shift in consumption from necessities to luxuries: by 1885, "more houses were being built, twice as much tea was being consumed, and even the working classes were eating imported meat, oranges, and dairy produce in quantities unprecedented". The change in working class incomes and tastes was symbolized by "the spectacular development of the department store and the chain store".
Prices certainly fell, but almost every other index of economic activity - output of coal and pig iron, tonnage of ships built, consumption of raw wool and cotton, import and export figures, shipping entries and clearances, railway freight clearances, joint-stock company formations, trading profits, consumption per head of wheat, meat, tea, beer, and tobacco - all of these showed an upward trend.
A large part at least of the deflation commencing in the 1870s was a reflection of unprecedented advances in factor productivity. Real unit production costs for most final goods dropped steadily throughout the 19th century and especially from 1873 to 1896. At no previous time had there been an equivalent "harvest of technological advances... so general in their application and so radical in their implications". That is why, notwithstanding the dire predictions of many eminent economists, Britain did not end up paralyzed by strikes and lockouts. Falling prices did not mean falling money wages. Instead of inspiring large numbers of workers to go on strike, falling prices were inspiring them to go shopping.
David Ames Wells, Recent Economic Changes and Their Effect on the Distribution of Wealth and the Well-Being of Society (1890).
Prerequisite: RSA Algorithm
Why is RSA decryption slow?
RSA decryption is slower than encryption because the private-key exponent d is necessarily large, while the public exponent e is typically small. Moreover, the parameters p and q are two very large prime numbers.
Given integers c, e, p and q, find m such that c = pow(m, e) mod (p * q) (RSA decryption for weak integers).
- RSA is a public key encryption system used for secure transmission of messages.
- RSA typically involves four steps:
(1) Key generation
(2) Key distribution
(3) Encryption
(4) Decryption
- Message Encryption is done with a “Public Key”.
- Message Decryption is done with a “Private Key” – parameters (p, q, d) generated along with Public Key.
- The private key is known only to the user, and the public key can be made known to anyone who wishes to send an encrypted message to the person with the corresponding private key.
- A public key is represented by two parameters: n (the modulus) and e (the exponent). The modulus is the product of two very large prime numbers (p and q, as shown below). Breaking the encryption would require an attacker to factorize n into its two prime factors (the main reason RSA is secure) and then find the modular inverse of e; this is where the difficulty lies.
- A text message is first converted to its decimal value, which is the parameter m that we recover below. The message is encrypted by computing c = pow(m, e) mod (p * q), where c is the ciphertext.
In this code, we exploit weak modulus and exponent values to crack the encryption by generating the private key, i.e., by finding the values of p, q, and d. In these examples, we find d given p and q.
Here we use small values of p and q, but in practice very large values of p and q are used to keep an RSA system secure.
Input: c = 1614, e = 65537, p = 53, q = 31
Output: 1372
Explanation: With m = 1372, pow(m, e) mod (p * q) = 1614 = c.

Input: c = 3893595, e = 101, p = 3191, q = 3203
Output: 6574839
Explanation: With m = 6574839, pow(m, e) mod (p * q) = 3893595 = c.
Normally, we can find the value of m by following these steps:
(1) Compute the modular inverse of e: d = e^(-1) mod lambda(n), where lambda is the Carmichael totient function.
Read: Carmichael function
(2) Compute m = pow(c, d) mod (p * q).
(3) This calculation can be performed faster by using the Chinese Remainder Theorem, as implemented in the function below. Further background on the Chinese Remainder Theorem is worth reviewing before reading the code.
Below is the Python implementation of this approach :
# Function to find the gcd of two integers
# using the Euclidean algorithm
def gcd(p, q):
    if q == 0:
        return p
    return gcd(q, p % q)

# Function to find the lcm of two integers
def lcm(p, q):
    # integer division keeps the result an int
    return p * q // gcd(p, q)

# Extended Euclidean algorithm:
# returns (g, x, y) with e*x + phi*y = g
def egcd(e, phi):
    if e == 0:
        return (phi, 0, 1)
    g, y, x = egcd(phi % e, e)
    return (g, x - (phi // e) * y, y)

# Modular inverse of e modulo phi
# (assumes gcd(e, phi) == 1)
def modinv(e, phi):
    g, x, y = egcd(e, phi)
    return x % phi

# Decryption using the Chinese Remainder Theorem
def chineseremaindertheorem(dq, dp, p, q, c):
    # Message part 1
    m1 = pow(c, dp, p)
    # Message part 2
    m2 = pow(c, dq, q)
    qinv = modinv(q, p)
    h = (qinv * (m1 - m2)) % p
    m = m2 + h * q
    return m

# Driver code
p = 9817
q = 9907
e = 65537
c = 36076319

# pow(a, b, c) calculates a raised to power b modulo c
# at a much faster rate than pow(a, b) % c.
# The Chinese Remainder Theorem splits the computation into
# two exponentiations with smaller moduli and exponents,
# thereby reducing computing time.
d = modinv(e, lcm(p - 1, q - 1))
dq = d % (q - 1)
dp = d % (p - 1)
print(chineseremaindertheorem(dq, dp, p, q, c))
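As a quick sanity check, the same functions recover the first worked example given earlier (c = 1614, e = 65537, p = 53, q = 31); this snippet assumes the definitions above have already been run:

p, q, e, c = 53, 31, 65537, 1614
d = modinv(e, lcm(p - 1, q - 1))
m = chineseremaindertheorem(d % (q - 1), d % (p - 1), p, q, c)
print(m)                       # 1372
print(pow(m, e, p * q) == c)   # True: re-encrypting m reproduces c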
Bullying is both a psychological and a sociological phenomenon that occurs among human beings who live, work, and study together. Although certain individuals are more likely to bully (psychological), the context in which they exist (sociological) can also contribute toward an environment in which bullying is more acceptable. Young people are rarely bullied because they are perceived to be the same as everyone else; they are often bullied because they stand out in their environment for being different in some way from their peers. This reality points to the need for schools to promote an understanding and appreciation for diversity among young people. Research shows that levels of bullying and other forms of discrimination decrease when young people are provided with an opportunity to reflect on difference as a positive aspect of life. The current geopolitical context challenges us more than ever before to promote inclusion and address discrimination as a form of bullying in our schools, workplaces, and wider society.
Bullying as a form of human aggression occurs in organizations, workplaces, voluntary groups, universities, and particularly in schools (Lutgen-Sandvik, et al. 2016; Datta, et al. 2016; Lapidot-Lefler and Dolev-Cohen 2015; McGuire 2013). Bullying is a problem that transcends social boundaries and can result in devastating psychological and emotional trauma, such as low self-esteem, poor academic performance, depression, and, in some cases, violence and suicidal behavior (Smith 2014). There is no universally agreed definition of bullying. However, bullying is generally understood as a form of aggressive behavior characterized by three core elements: (1) it is aggressive behavior or intentional “harm doing,” (2) it is carried out repeatedly and over time, and (3) it occurs in an interpersonal relationship characterized by an imbalance of power. In addition, the bullying behavior often occurs without apparent provocation, and negative actions can be carried out by physical contact, words, intentional exclusion from a group, or other means, such as making faces or mean gestures (Del Barrio, et al. 2008). When assessing behavior that might be considered bullying, it is important to evaluate the extent to which intent, repetition, and an imbalance of power exist; otherwise, no matter how conflictual or aggressive the encounter is, it may not be considered bullying. However, some researchers argue that a one-off event can also be considered bullying if there is a threat that it may be repeated (Gladden, et al. 2014). Hamarus and Kaikkonen 2008 argues that, depending on which definition of bullying is used, only acts that conform to a particular definition are identified and labeled as bullying, thus excluding whole aspects of conflict and aggression that also may occur. The definition and related self-report questionnaire in the Olweus Bullying Prevention Program (created by Dan Olweus in 2007) have been used extensively in international research. However, this approach has been critiqued on the grounds that it does not account for nuances in the different cultural meanings and terminology associated with the concept of bullying. For example, Smith, et al. 2002 alludes to the fact that the term ijime is used in Japan as a bullying equivalent, but the term implies less of a focus on physical violence and a greater emphasis on social manipulation. This has implications for those who are asked to create policies and procedures that include definitions of bullying. The core challenge for organizations, workplaces, and schools is how to develop a workable definition that sufficiently covers the various types of aggressive behavior. This article examines and outlines the phenomenon of bullying by exploring the historical developments that have led to the current theoretical approach to the problem as it occurs in early-21st-century society. It considers both the psychological and sociological aspects of bullying while suggesting strategies for prevention and intervention in educational and workplace settings.
Datta, Pooja, Dewey Cornell, and Francis Huang. 2016. Aggressive attitudes and prevalence of bullying bystander behavior in middle school. Psychology in the Schools 53.8: 804–816.
This article explores the reinforcement of bullying behavior augmented by pro-aggressive attitudes and the role of bystander students. The findings suggest this can be counteracted by implementing anti-bullying programs that promote positive bystander intervention.
Del Barrio, Christina, Elena Martín, Ignacio Montero, Héctor Gutiérrez, Ángela Barrios, and María José de Dios. 2008. Bullying and social exclusion in Spanish secondary schools: National trends from 1999 to 2006. International Journal of Clinical and Health Psychology 8:657–677.
This paper reports on a national longitudinal study on bullying in schools in Spain.
Gladden, R. M., A. M. Vivolo-Kantor, M. E. Hamburger, and C. D. Lumpkin. 2014. Bullying surveillance among youths: Uniform definitions for public health and recommended data elements, Version 1.0. Atlanta: National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, and the United States Department of Education.
This report provides background on the problem of bullying, including what is known in the early 2010s about the public health burden of bullying and the need for a uniform definition of bullying.
Hamarus, Päivi, and Pauli Kaikkonen. 2008. School bullying as a creator of pupil peer pressure. Educational Research 50.4: 333–345.
This article explores the phenomenon of school bullying within a social and cultural framework, which also provides a new way of understanding pupils’ social relationships.
Lapidot-Lefler, Noam, and Michal Dolev-Cohen. 2015. Comparing cyberbullying and school bullying among school students: Prevalence, gender, and grade level differences. Social Psychology of Education 18.1: 1–16.
This article compares the phenomena of cyberbullying and school bullying. The findings are based on a study of 465 junior high and high school students in Israel and reveal that cyberbullying is less common than school bullying.
Lutgen-Sandvik, Pamela, Jacqueline N. Hood, and Ryan P. Jacobson. 2016. The impact of positive organizational phenomena and workplace bullying on individual outcomes. Journal of Managerial Issues 28.1–2: 30–49.
This article examines in tandem positive organization scholarship (POS) and counterproductive workplace behavior (CWB) with two goals. The first looks at positive interpersonal work experiences; the second explores the effects of negative behavior, such as bullying, on positive organizational features.
McGuire, Lian. 2013. Third-level student experiences of bullying in Ireland. In Bullying in Irish education. Edited by Mona O’Moore and Paul Stevens, 100–123. Cork, Ireland: Cork Univ. Press.
This chapter presents the first definitive study of bullying in higher education in Ireland. It explores the various types of bullying, where it can take place, and by whom, offering strategies for prevention and intervention.
Olweus Bullying Prevention Program. Hazelden Foundation.
This resource was developed by Dan Olweus in 2007 and has been used throughout the world as a form of bullying prevention and intervention in schools. It relies on a specific definition and a self-reporting questionnaire.
Smith, Peter K. 2014. Understanding school bullying: Its nature and prevention strategies. London: SAGE.
In chapter 5 of this book, “Who is at risk, and what are the effects?,” the author outlines who is at risk of being bullied and what the possible effects are on them.
Smith, Peter K., Helen Cowie, Ragnar F. Olafsson, et al. 2002. Definitions of bullying: A comparison of terms used, and age and gender differences, in a fourteen-country international comparison. Child Development 73.4: 1119–1133.
This article explores how children understand the meaning of the English word “bullying” in fourteen different countries. Twenty-five cartoon stick-figures of social situations between peers were shown to eight- and fourteen-year-old students in order to investigate whether each country’s native terms equaled the English equivalent.
Decision trees are a powerful machine learning algorithm that can be used for both classification and regression tasks. They are easy to understand and interpret, and they can be used to build complex models without the need for feature engineering.
Once you have trained a decision tree model, you can use it to make predictions on new data. However, it can also be helpful to plot the decision tree to better understand how it works and to identify any potential problems.
In this blog post, we will show you how to plot decision trees in R using the rpart and rpart.plot packages. We will also provide an extensive example using the iris data set and explain the code blocks in simple terms.
Load the libraries
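Assuming both packages are already installed (e.g., via install.packages()), loading them looks like this:

library(rpart)       # recursive partitioning: fits the decision tree
library(rpart.plot)  # plotting functions for rpart trees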
Split the data into training and test sets
set.seed(123)
train_index <- sample(1:nrow(iris), size = 0.7 * nrow(iris))
train <- iris[train_index, ]
test <- iris[-train_index, ]
Train a decision tree model
tree <- rpart(Species ~ ., data = train, method = "class")
Plot the decision tree
rpart.plot(tree, main = "Decision Tree for the Iris Dataset")
The output of the rpart.plot() function is a tree diagram that shows the decision rules of the model. The root node of the tree is at the top, and the leaf nodes are at the bottom. Each internal node is labeled with the feature used to split the data at that node and the value of the split. The leaf nodes are labeled with the predicted class for the data that reaches them.
Interpreting the decision tree
To interpret the decision tree, start at the root node and follow the branches down to a leaf node. The leaf node that you reach is the predicted class for the data that you started with.
For example, if you have a new iris flower with a sepal length of 5.5 cm and a petal length of 2.5 cm, you would start at the root node of the decision tree. At the root node, the feature that is used to split the data is petal length. Since the petal length of the new flower is greater than 2.45 cm, you would follow the right branch down to the next node. At the next node, the feature that is used to split the data is sepal length. Since the sepal length of the new flower is greater than 5.0 cm, you would follow the right branch down to the leaf node. The leaf node that you reach is labeled “versicolor”, so the predicted class for the new flower is versicolor.
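You do not have to trace the tree by hand for every observation: the predict() function applies the same decision rules programmatically. A minimal sketch using the test split created earlier (variable names follow the code above):

pred <- predict(tree, newdata = test, type = "class")  # predicted species for each test row
mean(pred == test$Species)                             # proportion of test flowers classified correctly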
Trying it on your own
Now that you have learned how to plot decision trees in R, try it out on your own. You can use the iris data set or your own data set.
To get started, load the rpart and rpart.plot libraries and load your data set. Then, split the data into training and test sets. Train a decision tree model using the rpart() function. Finally, plot the decision tree using the rpart.plot() function.
Once you have plotted the decision tree, take some time to interpret it. Try to understand how the model makes predictions and to identify any potential problems. You can also try to improve the model by adding or removing features or by changing the hyperparameters of the rpart() function, as sketched below.
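As one sketch of such tuning (the parameter values below are arbitrary illustrations, not recommendations), rpart.control() lets you cap the tree's depth and complexity before re-plotting:

tree2 <- rpart(Species ~ ., data = train, method = "class",
               control = rpart.control(maxdepth = 3, cp = 0.01, minsplit = 10))
rpart.plot(tree2, main = "Constrained Decision Tree for the Iris Dataset")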
Plotting decision trees is a great way to better understand how they work and to identify any potential problems. It is also a helpful way to communicate the results of a decision tree model to others.
In this blog post, we showed you how to plot decision trees in R using the rpart and rpart.plot packages. We also provided an extensive example using the iris data set and explained the code blocks in simple terms.
We encourage you to try plotting decision trees on your own data sets. It is a great way to learn more about decision trees and to improve your machine learning skills.
- Iowa Core Mathematics (.pdf)
- Iowa Core Mathematics (.doc)
- Iowa Core Mathematics with DOK (.pdf)
- Iowa Core Mathematics with DOK (.doc)
Resources to support Iowa Core Mathematics
In Grade 5, instructional time should focus on three critical areas: (1) developing fluency with addition and subtraction of fractions, and developing understanding of the multiplication of fractions and of division of fractions in limited cases (unit fractions divided by whole numbers and whole numbers divided by unit fractions); (2) extending division to 2-digit divisors, integrating decimal fractions into the place value system and developing understanding of operations with decimals to hundredths, and developing fluency with whole number and decimal operations; and (3) developing understanding of volume.
- Students apply their understanding of fractions and fraction models to represent the addition and subtraction of fractions with unlike denominators as equivalent calculations with like denominators. They develop fluency in calculating sums and differences of fractions, and make reasonable estimates of them. Students also use the meaning of fractions, of multiplication and division, and the relationship between multiplication and division to understand and explain why the procedures for multiplying and dividing fractions make sense. (Note: this is limited to the case of dividing unit fractions by whole numbers and whole numbers by unit fractions.)
- Students develop understanding of why division procedures work based on the meaning of base-ten numerals and properties of operations. They finalize fluency with multi-digit addition, subtraction, multiplication, and division. They apply their understandings of models for decimals, decimal notation, and properties of operations to add and subtract decimals to hundredths. They develop fluency in these computations, and make reasonable estimates of their results. Students use the relationship between decimals and fractions, as well as the relationship between finite decimals and whole numbers (i.e., a finite decimal multiplied by an appropriate power of 10 is a whole number), to understand and explain why the procedures for multiplying and dividing finite decimals make sense. They compute products and quotients of decimals to hundredths efficiently and accurately.
- Students recognize volume as an attribute of three-dimensional space. They understand that volume can be measured by finding the total number of same-size units of volume required to fill the space without gaps or overlaps. They understand that a 1-unit by 1-unit by 1-unit cube is the standard unit for measuring volume. They select appropriate units, strategies, and tools for solving problems that involve estimating and measuring volume. They decompose three-dimensional shapes and find volumes of right rectangular prisms by viewing them as decomposed into layers of arrays of cubes. They measure necessary attributes of shapes in order to determine volumes to solve real world and mathematical problems.
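To make the volume idea concrete (our illustration, not part of the standards text): a right rectangular prism 3 units long, 4 units wide, and 5 units tall decomposes into 5 layers of 3 × 4 = 12 unit cubes, so its volume is V = 3 × 4 × 5 = 60 cubic units.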
Grade 5 Overview
Operations and Algebraic Thinking
- Write and interpret numerical expressions.
- Analyze patterns and relationships.
Number and Operations in Base Ten
- Understand the place value system.
- Perform operations with multi-digit whole numbers and with decimals to hundredths.
Number and Operations—Fractions
- Use equivalent fractions as a strategy to add and subtract fractions.
- Apply and extend previous understandings of multiplication and division to multiply and divide fractions.
Measurement and Data
- Convert like measurement units within a given measurement system.
- Represent and interpret data.
- Geometric measurement: understand concepts of volume and relate volume to multiplication and to addition.
Geometry
- Graph points on the coordinate plane to solve real-world and mathematical problems.
- Classify two-dimensional figures into categories based on their properties.
Standards for Mathematical Practice
- Make sense of problems and persevere in solving them.
- Reason abstractly and quantitatively.
- Construct viable arguments and critique the reasoning of others.
- Model with mathematics.
- Use appropriate tools strategically.
- Attend to precision.
- Look for and make use of structure.
- Look for and express regularity in repeated reasoning.
Teach your class to solve division problems with this colourful and creative resource! When we use division for large numbers, we work in the opposite direction to multiplication.
Then we will look at more complex bar-model examples for KS2 SATs. Build your students' maths skills with these daily-practice word-problem worksheets. In EYFS, pupils solve problems involving halving and sharing.
These free primary maths resources for KS1 and KS2 (around 3,000 of them) include online division games for elementary grades 3–6 (ages 8–11), times-table problem-solving activities, and differentiated division worksheets such as a Year 5 division problem-solving sheet.
Problem solving with challenges and simplifications illustrates how linked activities can deepen understanding. Division word problems for Year 3 come as two differentiated worksheets. These resources provide fun, free problem-solving teaching ideas and activities, including a classic problem-solving game.
Challenging multiplication and division resources for KS2 cover division, fractions, decimals, ratio and more. One lesson offers word problems requiring two operations to solve (one of which is division), pitched at APP tracker division targets for levels 3a, 4c and 4b, plus an extended problem-solving task with a ghoulish theme!
Join the thousands of teachers and students who use HegartyMaths every day. A short video by Mr Caird (August 2014) gives a step-by-step guide to solving long division problems. A sample word problem: there are 780 calories in 6 granola bars, so one bar has 780 ÷ 6 = 130 calories. These tablet-friendly KS2 activities offer multiplication and division word problems to solve, each with learning objectives and success criteria.
All teacher-tested and approved! Pupils complete division calculations, expressing remainders as whole numbers, fractions or decimals.
Understanding multiplication and division: name at least three strategies you can use to solve maths problems.
How many multiplication and division sentences can you write? Find out how to use mental methods for multiplication with this Bitesize Primary KS2 Maths resource. Our new Problem Solving app is based on the 2016 curriculum assessments.
These slides encourage the use of mathematics to solve problems. “Space Chase” is an interactive, eleven-chapter problem-solving maths adventure; the first chapter involves solving word problems using 3, 4 and 8 times-table facts. Check out our division worksheets hub page, with links to all of our division-facts resources. Children use their knowledge to solve mathematical problems and puzzles, and solve one-step problems involving multiplication and division.
A part of basic arithmetic, long division is a method of solving and finding the remainder for division problems that involve numbers with at least two digits.
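For example (a made-up illustration): to compute 156 ÷ 12, note that 12 goes into 15 once with remainder 3; bringing down the 6 gives 36, and 12 goes into 36 exactly 3 times, so 156 ÷ 12 = 13 with no remainder.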
◖ A B O U T ◗
Solar deities play a major role in many world religions and mythologies. Worship of the Sun was central to civilizations such as the ancient Egyptians, the Inca of South America and the Aztecs of what is now Mexico. In religions such as Hinduism, the Sun is still considered a god, known as Surya Dev. Many ancient monuments were constructed with solar phenomena in mind; for example, stone megaliths accurately mark the summer or winter solstice (some of the most prominent megaliths are located in Nabta Playa, Egypt; Mnajdra, Malta; and at Stonehenge, England); Newgrange, a prehistoric human-built mound in Ireland, was designed to detect the winter solstice; the pyramid of El Castillo at Chichén Itzá in Mexico is designed to cast shadows in the shape of serpents climbing the pyramid at the vernal and autumnal equinoxes.
The ancient Sumerians believed that the Sun was Utu, the god of justice and twin brother of Inanna, the Queen of Heaven, who was identified as the planet Venus. Later, Utu was identified with the East Semitic god Shamash. Utu was regarded as a helper-deity, who aided those in distress, and, in iconography, he is usually portrayed with a long beard and clutching a saw, which represented his role as the dispenser of justice.
From at least the Fourth Dynasty of Ancient Egypt, the Sun was worshipped as the god Ra, portrayed as a falcon-headed divinity surmounted by the solar disk, and surrounded by a serpent. In the New Empire period, the Sun became identified with the dung beetle, whose spherical ball of dung was identified with the Sun. In the form of the sun disc Aten, the Sun had a brief resurgence during the Amarna Period when it again became the preeminent, if not only, divinity for the Pharaoh Akhenaton.
The Egyptians portrayed the god Ra as being carried across the sky in a solar barque, accompanied by lesser gods, and to the Greeks, he was Helios, carried by a chariot drawn by fiery horses. From the reign of Elagabalus in the late Roman Empire the Sun's birthday was a holiday celebrated as Sol Invictus (literally "Unconquered Sun") soon after the winter solstice, which may have been an antecedent to Christmas. Regarding the fixed stars, the Sun appears from Earth to revolve once a year along the ecliptic through the zodiac, and so Greek astronomers categorized it as one of the seven planets (Greek planetes, "wanderer"); the naming of the days of the weeks after the seven planets dates to the Roman era.
In Proto-Indo-European religion, the Sun was personified as the goddess *Seh₂ul. Derivatives of this goddess in Indo-European languages include the Old Norse Sól, Sanskrit Surya, Gaulish Sulis, Lithuanian Saulė, and Slavic Solntse. In ancient Greek religion, the sun deity was the male god Helios, who in later times was syncretized with Apollo.
In the Bible, Malachi 4:2 mentions the "Sun of Righteousness" (sometimes translated as the "Sun of Justice"), which some Christians have interpreted as a reference to the Messiah (Christ). In ancient Roman culture, Sunday was the day of the sun god. It was adopted as the Sabbath day by Christians who did not have a Jewish background. The symbol of light was a pagan device adopted by Christians, and perhaps the most important one that did not come from Jewish traditions. In paganism, the Sun was a source of life, giving warmth and illumination to mankind. It was the center of a popular cult among Romans, who would stand at dawn to catch the first rays of sunshine as they prayed. The celebration of the winter solstice (which influenced Christmas) was part of the Roman cult of the unconquered Sun (Sol Invictus). Christian churches were built with an orientation so that the congregation faced toward the sunrise in the East.
Tonatiuh, the Aztec god of the sun, was usually depicted holding arrows and a shield and was closely associated with the practice of human sacrifice. The sun goddess Amaterasu is the most important deity in the Shinto religion, and she is believed to be the direct ancestor of all Japanese emperors.
◖ P R O P E R T I E S ◗
• Material: 14k Yellow Gold, 14k Rose Gold, 14k White Gold
• Weight: 1.29 g (for 18", ±5%)
◖ D I O N J E W E L ◗
‣ 14K REAL GOLD
‣ EXPRESS DELIVERY IN 1-3 DAYS*
‣ HANDMADE ONLY FOR YOU, NO USED JEWELRY
‣ GIFT BOX AND OTHER GIFTS
◖ P R O D U C T I O N & Q U A L I T Y ◗
‣ All of our jewelry is handmade and made to order.
‣ We use only 14k real gold (8k or 18k for some pieces). We do not craft any gold-filled, gold-plated, or gold-vermeil items over silver or other metals.
‣ We guarantee the quality of our jewelry and offer a lifetime warranty against manufacturing defects.
‣ Our raw materials come from the historical gold and jewelry market of Istanbul's Grand Bazaar. The Grand Bazaar (Kapalicarsi) was constructed in 1455 as a center for the local trade of clothing and jewelry. From the 17th to the 19th centuries, European travelers noted that Istanbul was unlike any other trade center in the variety, quality, and quantity of its products and goods. We try to capture the spirit of this historical market in our jewelry.
◖ S H I P P I N G & P A C K A G I N G ◗
‣ All orders will be shipped via tracked express shipping. Exceptions may apply (see FAQ)
‣ If you need your order by a certain date, please let us know in the notes section. We will do our best to speed up the process, but please keep in mind that it is not guaranteed.
‣ Each order comes packaged in an elegant box for gifting and an additional pouch for travels.
* Delivery time may vary depending on the shipping company and customs procedures. 95% of the orders we send are delivered within a maximum of 3 business days (for the US, EU and UK) from the date of shipment. Production time is not included.
► Please see FAQ for further details about shipping duration, production, return & exchange terms, care instructions, warranty, etc.
1-5 business days
Buyers are responsible for any customs and import taxes that may apply. I'm not responsible for delays due to customs.
Just contact me within: 3 days of delivery
Ship items back to me within: 7 days of delivery
Buyers are responsible for return shipping costs. If the item is not returned in its original condition, the buyer is responsible for any loss in value.
Please contact me if you have any problems with your order.
Personal Data We Collect
We collect the following types of personal data when you place an order on our Etsy store: your name, email address, shipping address, and phone number (if provided).
How We Collect Personal Data
We collect personal data through the Etsy platform when you place an order with us.
Purpose of Collecting Personal Data
We collect your personal data to fulfill your orders and to communicate with you about your orders. We may use your email address to send you updates on the status of your order, and we may use your phone number to contact you if there are any issues with shipping or customs.
Sharing Personal Data with Third Parties
We share your personal data (excluding phone number, unless requested) with our shipping company in order to fulfill your orders. We do not share your personal data with any other third parties.
Retention of Personal Data
The retention of your personal data is determined by Etsy.
Security of Personal Data
The protection of your personal data is secured by Etsy and our shipping company's systems. We do not store your personal data on our systems.
Your GDPR Rights
As a customer, you have the right to access, modify, or delete your personal data. You can contact Etsy to exercise these rights.
UNITED STATES, CANADA, UNITED KINGDOM, AUSTRALIA
• 📦UPS Express
EUROPE, ASIA, OTHERS
• 📦UPS Express
• 📦TNT/FedEx Express
‣ Your orders are sent with the express option. The estimated arrival time is around 2-4 workdays for US, CA, EU and UK. Please consider weekends and holidays, we do not guarantee delivery time.
‣ Please note that, in rare cases, there might be some delay in customs. We are not responsible for delays in customs.
‣ Due to increased shipping fees, we are using alternative shipping methods for Asia, South America, Africa and some EU countries to continue offering free shipping.
‣ Specified companies, durations and express option are valid for the majority of our sales, but may vary in some cases.
‣ All our products are sent with the "Express Shipping" option for most of countries worldwide. Unfortunately, there is no way we can deliver it faster. Shipping upgrade is available for some Asia, Africa countries and Australia.
‣ Yes, we do deliver to PO BOX (for USA only) but express shipping is valid only for physical addresses. PO BOX deliveries are not guaranteed and they will be sent with a slower shipping method. We strongly advise you to get in touch with us and provide a physical address.
‣ Since the products we sell are gold jewelry, all our shipments are delivered with signature. On the other hand delivery companies may sometimes deliver the package without signature. It is not guaranteed.
‣ If we haven't shipped the product yet, you can change the delivery address. If the product has already been sent, we will forward your request to the shipping company; although this usually works out, we cannot guarantee it.
‣ The cargo companies we work with usually try to deliver the package to you several times. We follow the status of all our shipments as closely as possible and inform you and the transport company in case of a problem, but contacting your local transportation office will speed up the process.
‣ Products that are not received despite all efforts are either destroyed by cargo companies or returned to us. See "Return Policy and Fees" below for details.
‣ Your gold jewelry is exposed to skin oils, perspiration, dust, makeup, and more when worn on a daily basis. Clean your jewelry with a solution of 10-parts warm water and 2-parts dish soap on a daily basis to maintain the luster.
‣ Soak your gold jewelry pieces for 3 hours before gently scrubbing them with a soft cloth. Rinse thoroughly with hot water and wipe dry with a clean towel.
‣ Bonus Tip: after this, polish your jewelry with a jewelry polishing cloth for an extra shine!
‣ Attention: Gold is a soft metal. Brushing and drying should be done gently, and don't use a paper towel or tissue to avoid scratching your jewelry.
All of our products are under our warranty for lifetime. Conditions below apply;
‣ If the stone falls off the item,
‣ If the lock/clasp breaks.
Also, we will be more than happy to help you if your product requires small repairs or maintenance, such as;
‣ If the product requires polishing or get scratched,
‣ If the product breaks or chain snaps,
*Return & re-delivery shipping costs belong to the customer.
*Any applicable customs taxes belong to the customer.
*Item must be sent back without any missing parts.
‣ Returned product must be sent by paying the shipping charges.
‣ The shipments sent must be traceable and must be sent to be delivered with signature.
‣ When you want to return/exchange the product you have bought, please contact us for details on how to make the shipment.
‣ Returned and exchanged items are subject to a $30 + 10% re-stocking fee.
‣ Additional costs of ours (transportation, repair, customs tax, etc.) will be deducted.
‣ Custom & personalized orders cannot be returned and exchanged.
‣ For following lengths, items are also considered as personalized order.
* Bracelets; 6 1/2" and shorter,
* Necklaces; 15" and shorter,
* Rings; Size 4 1/2" and smaller.
We reserve rights to make changes for return conditions.
‣ Length is measured including clasp. The total distance between the beginning and the end of the product gives the full length. The usable length may be shorter due to the clasp mechanism, chain thickness, etc.
‣ Weight has ±5% error tolerance due to measurement instrument or the jewelry being manufactured by hand. Ring weights may be changed ±15% due to size.
‣ Necklace length may change 1/4" (6-7mm) due to the jewelry being manufactured by hand.
‣ Bracelet length may change 1/8" (3-4mm) due to the jewelry being manufactured by hand.
‣ Some chains may not be produced in the exact length as purchased due to each individual link size. In that case the chain will be made in closest length.
‣ Cancellations after the indicated time period are subject to a 10% re-stocking fee.
‣ Any custom order cannot be cancelled, returned or exchanged after 1 hour of purchase.
‣ Personalized orders cannot be cancelled, returned or exchanged after 1 hour of purchase.
We reserve rights to make changes for conditions.
‣ We are not responsible for uncollected packages.
‣ There will be no refund for uncollected packages.
This course focuses on basic concepts in fluid mechanics, starting from continuum physics. Topics include the fundamentals of ideal fluids, the governing equations of fluid motion, Euler's equation of motion, vorticity and circulation, Bernoulli's theorem, streamlines, and the stream function and velocity potential function. By combining lectures and exercises, the course enables students to understand and acquire the fundamentals of ideal fluids, which are important for the development of real applications in mechanical engineering.
Fluid mechanics is one of the most important basic sciences in mechanical engineering. This lecture is therefore mandatory in the mechanical engineering course and is the minimum requirement for taking 'Practical Fluid Mechanics' and 'Advanced Fluid Mechanics'.
By the end of this course, students will be able to:
1) Understand and derive governing equations of ideal fluid.
2) Explain the principal theorems related to circulation and vorticity.
3) Acquire basic aspects of fundamental flow fields using Bernoulli's theorem.
4) Explain the definitions of streamlines and the stream function, and the velocity potential and complex velocity potential functions of basic flow fields.
5) Explain lift and drag forces for the flow of ideal fluids over bodies.
Ideal fluid, Governing equations, Euler's equation of momentum, Vorticity and circulation, Bernoulli's theorem, Streamlines and stream function, Velocity potential function
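The example below is not part of the original syllabus; it is a minimal sketch of the kind of Bernoulli's-theorem application covered in Classes 5–6, computing the throat pressure of a horizontal Venturi tube for an ideal, incompressible fluid. The density value and geometry are illustrative assumptions.

```python
# Hypothetical illustration: Bernoulli's theorem for an ideal fluid in a
# horizontal Venturi tube. p1 + rho*v1^2/2 = p2 + rho*v2^2/2, with the
# continuity equation A1*v1 = A2*v2 fixing the throat velocity.

RHO = 1000.0  # density of water in kg/m^3 (assumed working fluid)

def venturi_throat_pressure(p1: float, v1: float, a1: float, a2: float) -> float:
    """Pressure at the throat of a horizontal Venturi tube (ideal fluid)."""
    v2 = v1 * a1 / a2                        # continuity: A1*v1 = A2*v2
    return p1 + 0.5 * RHO * (v1**2 - v2**2)  # Bernoulli along a streamline

# Inlet at atmospheric pressure, 2 m/s, with a 4:1 area contraction.
print(venturi_throat_pressure(p1=101_325.0, v1=2.0, a1=0.04, a2=0.01))
# -> 71325.0 Pa: the pressure drops where the flow speeds up.
```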
The course is taught in lecture style. Exercise problems will be assigned after the 7th and 14th classes. Required learning should be completed outside of the classroom for preparation and review purposes.
|Course schedule||Required learning|
|Class 1||Continuum physics, Stress, Ideal and viscous fluids, Compressibility||Understand basic concept of fluid mechanics based on continuum physics and definition of ideal fluid|
|Class 2||Physical quantities representing flow, Lagrangian and Eulerian method, Euler's equation of continuity||Understand flow quantities and methods which are required to describe flow|
|Class 3||Euler's equation of motion, Flux of momentum, Equation of state||Understand Euler's equation of motion and flux of momentum|
|Class 4||Streamlines, Pathlines, Streaklines, Motions of fluid elements||Understand fundamental methods which describe fluid motion|
|Class 5||First integral of momentum equation, Bernoulli's theorem||Understand first integral of momentum equation and Bernoulli's theorem|
|Class 6||Applications of Bernoulli's theorem||Understand applications of Bernoulli's theorem|
|Class 7||Theorem of streamline curvature, Lagrangian vortex theorem, Vorticity and circulation, vortex tube||Understand several important theorems in fluid mechanics: theorem of streamline curvature and Lagrangian vortex theorem|
|Class 8||Kelvin's circulation theorem, Helmholtz's vortex theorem||Understand several important theorems in fluid mechanics: Kelvin's circulation theorem and Helmholtz's vortex theorem|
|Class 9||Stream function and velocity potential function||Understand stream function and velocity potential function to describe flow field|
|Class 10||Velocity potential function of a flow around a sphere in a uniform flow||Understand the velocity potential function of a flow around a sphere in a uniform flow|
|Class 11||Complex velocity potential||Understand definition of complex velocity potential|
|Class 12||Complex velocity potentials of fundamental flow geometries||Understand complex velocity potentials of fundamental flow geometries|
|Class 13||Applications of complex velocity potential||Understand several applications of complex velocity potential|
|Class 14||Kutta-Joukowski theorem||Understand Kutta-Joukowski theorem to predict lift and drag forces|
|Class 15||Schwarz-Christoffel theorem||Understand Schwarz-Christoffel theorem to represent flow in complex geometries|
T. Miyauchi, M. Tanahashi, H. Kobayashi, Fundamentals of Fluid Mechanics, Tokyo: Surikougakusya ISBN:978-4-86481-023-4
I. Imai, Fluid Mechanics (first part), Tokyo: Shoukabou ISBN: 4-7853-2314-0
M. Hino, Fluid Mechanics, Tokyo: Asakura: ISBN: 4-254-20066-8 C305
JSME textbook series Fluid Mechanics, Tokyo: Maruzen ISBN: 978-4-888898-119-4 C3353
Students' knowledge of ideal fluid and its applications will be assessed.
Final exams 80%, exercise problems 20%.
Partial Differential Equations (MEC.B213.A), Vector Analysis (MEC.B214.A).
|
A hue circle (also called color wheel or color circle) is a chart of hues around a circle and a method for organizing colors systematically. Differing wavelengths of light are perceived as a continuous spectrum of colors in the following sequence: red, orange, yellow, green, blue, purple. A circular illustration of this sequence is called the hue circle.
A systematic method of notation is required to describe colors accurately. Such a system for displaying color is called a color order system and can be represented using a hue circle or a color solid. Today, there are several well-known systems such as the Munsell color system, the Ostwald color system and PCCS (Practical Color Coordinate System). The Munsell color system consists of 10 hues comprising 5 primary hues (red, yellow, green, blue, purple) and their intermediate hues (yellow-red, yellow-green, blue-green, blue-purple, red-purple). The Ostwald color system, on the other hand, is made up of 8 primary hues (yellow, orange, red, purple, blue, blue-green, green, yellow-green), each of which is further divided into three to form 24 hues. Two colors on opposite sides of the Ostwald hue circle are complementary colors which, when combined, produce achromatic colors.
* The chart shown is only an approximate display of the relationship between colors and is not an accurate recreation. Colors also appear different depending on the viewing environment.
- Munsell hue circle
- Ostwald hue circle
- PCCS hue circle
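The snippet below is not from the original article; it is a minimal sketch of the complementary-color relationship described above, using an Ostwald-style 24-step hue circle. The indices are illustrative; real hue circles attach specific hue names to each step.

```python
# Illustrative sketch: complementary hues sit diametrically opposite on a
# 24-step Ostwald-style hue circle, so they are exactly 12 steps apart.

N_HUES = 24  # the Ostwald system divides 8 primary hues into 24 steps

def complementary(hue_index: int) -> int:
    """Return the index of the hue directly opposite on the circle."""
    return (hue_index + N_HUES // 2) % N_HUES

# Hue 0 and hue 12 face each other across the circle; combining such a
# pair yields an achromatic (neutral) color.
assert complementary(0) == 12
assert complementary(20) == 8
```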
|
A short-run production function refers to a period of time in which the installation of new plant and machinery to increase the production level is not possible. On the other hand, a long-run production function is one in which the firm has sufficient time to install new machinery or capital equipment, rather than merely increasing the labour units.
The production function can be described as the operational relationship between inputs and outputs: it gives the maximum amount of finished goods that can be produced with the given factors of production, under a particular state of technical knowledge. There are two kinds of production function, the short run production function and the long run production function.
This article presents all the differences between the short run and long run production functions; take a read.
Content: Short Run Production Function Vs Long Run Production Function
|Basis for Comparison||Short-run Production Function||Long-run Production Function|
|Meaning||Short run production function alludes to the time period, in which at least one factor of production is fixed.||Long run production function connotes the time period, in which all the factors of production are variable.|
|Law||Law of variable proportion||Law of returns to scale|
|Scale of production||No change in scale of production.||Change in scale of production.|
|Factor-ratio||Changes||Does not change.|
|Entry and Exit||There are barriers to entry and the firms can shut down but cannot fully exit.||Firms are free to enter and exit.|
Definition of Short Run Production Function
The short run production function is one in which at least one factor of production is assumed to be fixed in supply, i.e. it cannot be increased or decreased, while the remaining factors are variable in nature.
In general, the firm's capital input is assumed to be fixed, and the production level is changed by varying the quantities of other inputs such as labour and raw materials. Of all the factors of production, capital equipment is the most difficult for the firm to change when it wants to increase output.
In such circumstances, the law of variable proportions (the law of returns to a variable input) operates, which states the consequences when extra units of a variable input are combined with a fixed input. In the short run, increasing returns are due to the indivisibility and specialisation of factors, whereas diminishing returns set in because the factors of production are imperfect substitutes for one another.
Definition of Long Run Production Function
The long run production function refers to the time period in which all the inputs of the firm are variable. The firm can operate at various activity levels because it can change and adjust all the factors of production and the level of output produced according to the business environment. So, the firm has the flexibility of switching between different scales of production.
In such a condition, the law of returns to scale operates, which describes how output varies when all inputs are changed in the same proportion, i.e. the relationship between the scale of activity and the quantity of output. Increasing returns to scale are due to economies of scale, and decreasing returns to scale are due to diseconomies of scale.
Key Differences Between Short Run and Long Run Production Function
The difference between short run and long run production function can be drawn clearly as follows:
- The short run production function can be understood as the time period over which the firm is not able to change the quantities of all inputs. Conversely, long run production function indicates the time period, over which the firm can change the quantities of all the inputs.
- While in short run production function, the law of variable proportion operates, in the long-run production function, the law of returns to scale operates.
- The activity level does not change in the short run production function, whereas the firm can expand or reduce the activity levels in the long run production function.
- In short run production function the factor ratio changes because one input varies while the remaining are fixed in nature. As opposed, the factor proportion remains same in the long run production function, as all factor inputs vary in the same proportion.
- In short run, there are barriers to the entry of firms, as well as the firms can shut down but cannot exit. On the contrary, firms are free to enter and exit in the long run.
To sum up, the production function is nothing but a mathematical representation of the technological input-output relationship.
For any production process, the short run simply means a shorter time period than the long run. So, for different processes, the definitions of the long run and short run vary, and one cannot express the two time periods in days, months or years. They can only be distinguished by looking at whether all the inputs are variable or not. A short computational sketch of this contrast follows.
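The sketch below is not from the article; it is a hedged illustration using a Cobb-Douglas production function with arbitrary parameter values, contrasting the short run (capital fixed, factor ratio changes) with the long run (all inputs scaled together).

```python
# Illustrative Cobb-Douglas production function Q = A * L**alpha * K**beta.
# Parameter values are arbitrary, chosen only to make the contrast visible.

A, ALPHA, BETA = 1.0, 0.5, 0.5

def output(labour: float, capital: float) -> float:
    return A * labour**ALPHA * capital**BETA

# Short run: capital is fixed, so the factor ratio L/K changes as L grows
# and the law of variable proportions applies (diminishing marginal returns).
K_FIXED = 16.0
for L in (1.0, 4.0, 16.0):
    print(f"short run  L={L:5.1f}  Q={output(L, K_FIXED):6.2f}")

# Long run: both inputs double, so the factor ratio stays constant and the
# law of returns to scale applies. Here alpha + beta = 1, so returns to
# scale are constant: doubling (L, K) exactly doubles output.
print(output(4.0, 16.0), output(8.0, 32.0))  # 8.0 16.0
```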
|
In this section we are going to explain how to represent functions graphically. We will define the most basic concepts you need to know to draw graphs of functions, and in other sections of this website we will learn how to represent the most important functions.
Table of Contents
The Cartesian Axes
To analyze and see how a function behaves, we have to resort to its graphical representation.
To do this we need to represent the two variables of a function on coordinate axes called Cartesian axes:
- The x is represented on the horizontal axis, called the abscissa axis.
- The y (f(x)) is represented on the vertical axis, called the ordinate axis.
The axes must be graduated. It is not necessary that the two axes have the same scale.
To do this, you must take into account the maximum and minimum values of each scale and distribute the values evenly along each axis.
Therefore, each point on the graph has two coordinates, its abscissa x and its ordinate y.
For example, we have the graph of this line. One of its points has the following coordinates:
- x = 2 (abscissa)
- y = 1 (ordinate)
And it is represented this way:
We know it is the point (2,1) because if we draw from the point a vertical line to the x-axis, it cuts with 2 and if we draw a horizontal line to the y-axis, it cuts with 1.
Once you have understood that each point on a graph has two values, in order to know how to represent functions graphically we need the coordinates of a few of the most representative points.
How many points do we need to represent the graph?
That depends on the type of function. Each type of function has a series of representative points, as we will see in the following section.
For example, the previous case is a line. To draw a line you only need 2 points. If you calculated the coordinates of more points, you would see that they all fall on the same line, so you would be wasting time. Let's see what its table of values would look like.
We are given the following function and asked to graphically represent it:
Its table of values would be:
This table was obtained by choosing two values of x and calculating the value of y.
As you can see, we have obtained two points: (0,0) and (2,1). These are two points that the function passes through, and they are enough to draw it.
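The original worked function is not shown in the text; the points (0,0) and (2,1) are consistent with y = x/2, which the sketch below assumes purely for illustration. It builds the table of values and plots the line.

```python
# Illustrative sketch: build a table of values and plot the line through it.
# The function y = x/2 is an assumption consistent with the points above.
import matplotlib.pyplot as plt

def f(x):
    return x / 2  # assumed function, for illustration only

xs = [0, 2]                          # two points are enough for a line
table = [(x, f(x)) for x in xs]
print(table)                         # [(0, 0.0), (2, 1.0)]

plt.plot([x for x, _ in table], [y for _, y in table], marker="o")
plt.xlabel("x (abscissa)")
plt.ylabel("y (ordinate)")
plt.grid(True)
plt.show()
```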
Representation of Elementary Functions
As we said before, depending on the type of function, we know that it has a series of representative points.
There are functions with a known, defined form, called elementary functions. That is to say, the form of the function is known before drawing it, and that is why we know its representative points. These functions are:
- Linear function
- Affine Function
- Constant Function
- Function of second degree (parabola)
- Inverse proportionality function
- Exponential function
- Trigonometric functions
- Logarithmic function
Representation of More Complex Functions
There are other types of functions that need further study for representation, such as:
- Piecewise-defined functions
- Polynomial functions of degree higher than 2
- Rational functions
To represent these functions we must study their domain, continuity, maximums and minimums… and therefore they need more detailed explanations.
|
The discovery of liquid water percolating across the surface of Mars puts an exclamation point on what scientists have been finding during the last couple of decades: the solar system is wet. Very wet.
This has important implications for two of NASA’s most important missions: human exploration and the search for life. Here’s how:
When the Apollo astronauts landed on the moon in the late 1960s and early 1970s they found a dry, gray surface. Since then, however, probes sent to the surface of the moon have found “significant” amounts of water locked away in the moon’s soil.
Over the eons comets striking the moon have deposited as much as 10 billion tons of water at the poles, roughly the volume of Utah’s Great Salt Lake. That’s enough fuel to launch the equivalent of a space shuttle, every day, for more than 2,000 years.
The presence of water on the moon is a potential game-changer for human exploration of the solar system. Astronauts can drink it, of course. It also could shield their habitats from radiation. Broken apart, water provides breathable oxygen and hydrogen for fuel cells. But above all else for nations seeking to explore far from home, the water is a tempting cache of fuel. Oxygen and hydrogen are the most powerful rocket fuels known to man.
NASA, working with private companies, could use the moon and its fuel as a staging point for human exploration into the solar system, because potentially it’s a lot cheaper to fill up at the moon, rather than launching all of your fuel from the Earth’s surface.
Many people in the space industry want to develop these lunar resources before sending humans further into the solar system, saying it provides a more sustainable path for exploration.
In recent decades scientists have been steadily revising their assessment of Mars as a cold, dry and dead world.
They have found ice at the poles and, increasingly, beneath the surface at more temperate areas of the planet. Now, with the news this week, there is evidence of periodic small flows of very salty water on the surface.
“All of the scientific discoveries we’re making on Mars are giving us a much better view that Mars has resources that are useful to future travelers,” said John Grunsfeld, NASA’s chief of scientific missions. “The water really is crucial.”
Like on the moon, the presence of water beneath the surface could allow astronauts to make their own rocket fuel to return home to Earth.
Unlike on the moon, the possibility of aquifers beneath the surface of Mars increases the likelihood of microbial life existing there today. This increases the urgency of exploring this world for life.
The bottom line is that Mars sits at the sweet spot for NASA. It is not only an inviting target for human exploration — indeed it is the furthest we can reasonably expect humans to travel within the next 50 to 100 years — Mars also could harbor life today.
Many planetary scientists now believe Jupiter’s moon Europa is the most likely place, aside from Earth, for life to exist in the solar system.
Europa harbors vast oceans beneath its ice sheets, more water than exists on Earth. The average depth of the Pacific Ocean is about 2 miles. On Europa the oceans are up to 60 miles deep. At the inky bottom of Earth’s oceans, near volcanically active areas, life teems around hydrothermal vents. Scientists believe similar features line the bottom of Europa’s ocean.
Now NASA is planning a fly-by mission to Europa that will launch early in the 2020s. And while it’s not talking publicly about it, that mission is also very likely to include a lander that will probe the moon’s icy surface.
Further out in the solar system, Enceladus is a tiny moon of Saturn. It is notable for geysers of water at its pole.
Earlier this year scientists found evidence of a global ocean on Enceladus under an icy crust.
Titan, Saturn’s largest moon, offers another promising location for life to exist, with water and, near its core, a source of heat.
Further out still, around Neptune, geysers on the moon Triton spew nitrogen gas into space.
This indicates the moon is volcanically active and has enough internal heat to sustain a subsurface ocean. But does one exist? We don’t know, as Triton has only been visited once, by Voyager 2, back in 1989.
Nevertheless there’s definitely interest among scientists in exploring this world further.
Ok, so we’re not going to find life on Pluto. And it’s difficult to imagine humans visiting the small, distant dwarf planet during the next century, or beyond.
But come on, New Horizons found ICE MOUNTAINS on Pluto. As in mountains made out of ice.
How cool is that?
While it won’t facilitate exploration any time soon, the stunning diversity of terrain on Pluto is sure to inspire the next generation of explorers.
|
Hearing screening is a test to tell if people might have hearing loss. Hearing screening is easy and not painful. In fact, babies are often asleep while being screened. It takes a very short time — usually only a few minutes.
CDC Report: Infants with Congenital Disorders Identified Through Newborn Screening — United States, 2015–2017
- All babies should be screened for hearing loss no later than 1 month of age. It is best if they are screened before leaving the hospital after birth.
- If a baby does not pass a hearing screening, it’s very important to get a full hearing test as soon as possible, but no later than 3 months of age.
Older Babies and Children
- If you think a child might have hearing loss, ask the doctor for a hearing test as soon as possible.
- Children who are at risk for acquired, progressive, or delayed-onset hearing loss should have at least one hearing test by 2 to 2 1/2 years of age. Hearing loss that gets worse over time is known as progressive hearing loss. Hearing loss that develops after the baby is born is called delayed-onset or acquired hearing loss. Find out if a child may be at risk for hearing loss.
- If a child does not pass a hearing screening, it’s very important to get a full hearing test as soon as possible.
Full Hearing Test
All children who do not pass a hearing screening should have a full hearing test. This test is also called an audiology evaluation. An audiologist, who is an expert trained to test hearing, will do the full hearing test. In addition, the audiologist will ask questions about birth history, ear infections, and any family history of hearing loss.
There are many kinds of tests an audiologist can do to find out if a person has a hearing loss, how much of a hearing loss there is, and what type it is. The hearing tests are easy and not painful.
Some of the tests the audiologist might use include:
Auditory Brainstem Response (ABR) Test or Brainstem Auditory Evoked Response (BAER) Test
Auditory Brainstem Response (ABR) or Brainstem Auditory Evoked Response (BAER) is a test that checks the brain’s response to sound. Because this test does not rely on a person’s response behavior, the person being tested can be sound asleep during the test.
ABR focuses only on the function of the inner ear, the acoustic (hearing) nerve, and part of the brain pathways that are associated with hearing. For this test, electrodes are placed on the person’s head (similar to electrodes placed around the heart when an electrocardiogram (EKG) is done), and brain wave activity in response to sound is recorded.
Otoacoustic Emissions (OAE)
Otoacoustic Emissions (OAE) is a test that checks the inner ear response to sound. Because this test does not rely on a person’s response behavior, the person being tested can be sound asleep during the test.
Behavioral Audiometry Evaluation
Behavioral Audiometry Evaluation will test how a person responds to sound overall. Behavioral Audiometry Evaluation tests the function of all parts of the ear. The person being tested must be awake and actively respond to sounds heard during the test.
Infants and toddlers are observed for changes in their behavior such as sucking a pacifier, quieting, or searching for the sound. They are rewarded for the correct response by getting to watch an animated toy (this is called visual reinforcement audiometry). Sometimes older children are given a more play-like activity (this is called conditioned play audiometry).
With the parents’ permission, the audiologist will share the results with the child’s primary care doctor and other experts, such as:
- An ear, nose and throat doctor, also called an otolaryngologist
- An eye doctor, also called an ophthalmologist
- A professional trained in genetics, also called a clinical geneticist or a genetics counselor
For more information about hearing tests, visit the American Speech-Language-Hearing Association website.
- If a parent or anyone else who knows a child well thinks the child might have hearing loss, ask the doctor for a hearing screening as soon as possible. Don’t wait!
- If the child does not pass a hearing screening, ask the doctor for a full hearing test.
- If the child is diagnosed with a hearing loss, talk to the doctor or audiologist about treatment and intervention services.
Hearing loss can affect a child’s ability to develop communication, language, and social skills. The earlier children with hearing loss start getting services, the more likely they are to reach their full potential. If you are a parent and you suspect your child has hearing loss, trust your instincts and speak with your doctor.
|
Chords and a Circle's Center - Concept
A chord is a line segment whose endpoints are on a circle. If a chord passes through the center of the circle, it is called a diameter. Two important facts about a circle chord are that (1) the perpendicular bisector of any chord passes through the center of a circle and (2) congruent chords are the same distance (equidistant) from the center of the circle.
Chords and the center of a circle have a special relationship. But back up: what's a chord? Let's refresh our memory. Well, a chord is a line segment whose endpoints are on the circle. If I found the perpendicular bisector of this chord, so if I took my compass and swung arcs from both ends of it and found the line that bisects this chord into two congruent pieces at a 90 degree angle, then this dotted line is my perpendicular bisector of that chord. And no matter where I draw a chord on this circle, if I find its perpendicular bisector, it will always pass through the center of the circle. So that's the first key thing about a chord's relationship with the center of the circle.
Let's talk about two congruent chords, which is kind of a converse of what we just talked about. If I measured the perpendicular distance from a chord to the center, so I'm going to draw a solid line here, this is the perpendicular distance, because we said the shortest distance from a point to a line is measured along the perpendicular. If these chords are congruent, they will be the same distance away from the center of the circle. So if I were to draw two other chords, and if I told you that these chords are congruent, then their distances from the center of that circle, measured along a perpendicular, will be congruent. Using these two keys about chords and their relationship with the center will help us solve a lot of problems.
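The snippet below is not part of the lesson; it is a small numerical check, under assumed coordinates, of the first fact: the perpendicular bisector of any chord passes through the center.

```python
# Numerical check: for random chords of a circle, the vector from the
# chord's midpoint to the center is perpendicular to the chord, i.e. the
# center lies on the chord's perpendicular bisector.
import math
import random

CENTER = (3.0, -2.0)   # assumed circle, for illustration
RADIUS = 5.0

def random_chord():
    """Endpoints of a random chord: two random points on the circle."""
    return [(CENTER[0] + RADIUS * math.cos(t), CENTER[1] + RADIUS * math.sin(t))
            for t in (random.uniform(0, 2 * math.pi),
                      random.uniform(0, 2 * math.pi))]

for _ in range(1000):
    (x1, y1), (x2, y2) = random_chord()
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2              # midpoint of the chord
    dot = (CENTER[0] - mx) * (x2 - x1) + (CENTER[1] - my) * (y2 - y1)
    assert abs(dot) < 1e-9                             # perpendicular, always
print("Every sampled chord's perpendicular bisector passes through the center.")
```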
|
In physics and materials science, the Curie temperature (TC), or Curie point, is the temperature above which certain materials lose their permanent magnetic properties, to be replaced by induced magnetism. The Curie temperature is named after Pierre Curie, who showed that magnetism was lost at a critical temperature.
The force of magnetism is determined by the magnetic moment, a dipole moment within an atom which originates from the angular momentum and spin of electrons. Materials have different structures of intrinsic magnetic moments that depend on temperature; the Curie temperature is the critical point at which a material's intrinsic magnetic moments change direction.
Permanent magnetism is caused by the alignment of magnetic moments and induced magnetism is created when disordered magnetic moments are forced to align in an applied magnetic field. For example, the ordered magnetic moments (ferromagnetic, Figure 1) change and become disordered (paramagnetic, Figure 2) at the Curie temperature. Higher temperatures make magnets weaker, as spontaneous magnetism only occurs below the Curie temperature. Magnetic susceptibility above the Curie temperature can be calculated from the Curie–Weiss law, which is derived from Curie's law.
In analogy to ferromagnetic and paramagnetic materials, the Curie temperature can also be used to describe the phase transition between ferroelectricity and paraelectricity. In this context, the order parameter is the electric polarization that goes from a finite value to zero when the temperature is increased above the Curie temperature.
|Material||Curie temperature (K)|
|Manganese bismuthide (MnBi)||630|
|Manganese antimonide (MnSb)||587|
|Chromium(IV) oxide (CrO2)||386|
|Manganese arsenide (MnAs)||318|
|Europium oxide (EuO)||69|
|Iron(III) oxide (Fe2O3)||948|
|Iron(II,III) oxide (FeOFe2O3)||858|
|Yttrium iron garnet (Y3Fe5O12)||560|
Magnetic moments are permanent dipole moments within an atom that comprise electron angular momentum and spin by the relation μl = e·l/(2me), where me is the mass of an electron, μl is the magnetic moment, and l is the angular momentum; the ratio e/(2me) is called the gyromagnetic ratio.
The electrons in an atom contribute magnetic moments from their own angular momentum and from their orbital momentum around the nucleus. Magnetic moments from the nucleus are insignificant in contrast to the magnetic moments from the electrons. Thermal contributions result in higher-energy electrons disrupting the order and destroying the alignment between dipoles.
Ferromagnetic, paramagnetic, ferrimagnetic and antiferromagnetic materials have different intrinsic magnetic moment structures. At a material's specific Curie temperature, these properties change. The transition from antiferromagnetic to paramagnetic (or vice versa) occurs at the Néel temperature, which is analogous to Curie temperature.
|Below TC||Above TC|
|Ferromagnetic||↔ Paramagnetic|
|Ferrimagnetic||↔ Paramagnetic|
|Antiferromagnetic (below the Néel temperature)||↔ Paramagnetic|
Ferromagnetism: The magnetic moments in a ferromagnetic material. The moments are ordered and of the same magnitude in the absence of an applied magnetic field.
Paramagnetism: The magnetic moments in a paramagnetic material. The moments are disordered in the absence of an applied magnetic field and ordered in the presence of an applied magnetic field.
Ferrimagnetism: The magnetic moments in a ferrimagnetic material. The moments are aligned oppositely and have different magnitudes due to being made up of two different ions. This is in the absence of an applied magnetic field.
Antiferromagnetism: The magnetic moments in an antiferromagnetic material. The moments are aligned oppositely and have the same magnitudes. This is in the absence of an applied magnetic field.
Materials with magnetic moments that change properties at the Curie temperature
Ferromagnetic, paramagnetic, ferrimagnetic and antiferromagnetic structures are made up of intrinsic magnetic moments. If all the electrons within the structure are paired, these moments cancel out due to their opposite spins and angular momenta. Thus, even with an applied magnetic field, these materials behave differently and have no Curie temperature.
A material is paramagnetic only above its Curie temperature. Paramagnetic materials are non-magnetic when a magnetic field is absent and magnetic when a magnetic field is applied. When a magnetic field is absent, the material has disordered magnetic moments; that is, the atoms are asymmetrical and not aligned. When a magnetic field is present, the magnetic moments are temporarily realigned parallel to the applied field; the atoms are symmetrical and aligned. The magnetic moments being aligned in the same direction are what causes an induced magnetic field.
For paramagnetism, this response to an applied magnetic field is positive and is known as magnetic susceptibility. The magnetic susceptibility only applies above the Curie temperature for disordered states.
Sources of paramagnetism (materials which have Curie temperatures) include:
- All atoms that have unpaired electrons;
- Atoms that have inner shells that are incomplete in electrons;
- Free radicals;
Above the Curie temperature, the atoms are excited, and the spin orientations become randomized but can be realigned by an applied field, i.e., the material becomes paramagnetic. Below the Curie temperature, the intrinsic structure has undergone a phase transition, the atoms are ordered and the material is ferromagnetic. The paramagnetic materials' induced magnetic fields are very weak compared with ferromagnetic materials' magnetic fields.
Materials are only ferromagnetic below their corresponding Curie temperatures. Ferromagnetic materials are magnetic in the absence of an applied magnetic field.
When a magnetic field is absent the material has spontaneous magnetization which is a result of the ordered magnetic moments; that is, for ferromagnetism, the atoms are symmetrical and aligned in the same direction creating a permanent magnetic field.
The magnetic interactions are held together by exchange interactions; otherwise thermal disorder would overcome the weak interactions of magnetic moments. The exchange interaction gives parallel electrons zero probability of occupying the same point in space, implying a preferred parallel alignment in the material. The Boltzmann factor contributes heavily, as it prefers interacting particles to be aligned in the same direction. This causes ferromagnets to have strong magnetic fields and high Curie temperatures of around 1,000 K (730 °C).
Below the Curie temperature, the atoms are aligned and parallel, causing spontaneous magnetism; the material is ferromagnetic. Above the Curie temperature the material is paramagnetic, as the atoms lose their ordered magnetic moments when the material undergoes a phase transition.
Materials are only ferrimagnetic below their corresponding Curie temperature. Ferrimagnetic materials are magnetic in the absence of an applied magnetic field and are made up of two different ions.
When a magnetic field is absent the material has a spontaneous magnetism which is the result of ordered magnetic moments; that is, for ferrimagnetism one ion's magnetic moments are aligned facing in one direction with certain magnitude and the other ion's magnetic moments are aligned facing in the opposite direction with a different magnitude. As the magnetic moments are of different magnitudes in opposite directions there is still a spontaneous magnetism and a magnetic field is present.
Similar to ferromagnetic materials, the magnetic interactions are held together by exchange interactions. The orientations of the moments are, however, anti-parallel, which results in a net moment given by subtracting the smaller moment from the larger one.
Below the Curie temperature the atoms of each ion are aligned anti-parallel with different momentums causing a spontaneous magnetism; the material is ferrimagnetic. Above the Curie temperature the material is paramagnetic as the atoms lose their ordered magnetic moments as the material undergoes a phase transition.
Antiferromagnetic and the Néel temperature
Materials are only antiferromagnetic below their corresponding Néel temperature. This is similar to the Curie temperature: above the Néel temperature the material undergoes a phase transition and becomes paramagnetic.
The material has equal magnetic moments aligned in opposite directions resulting in a zero magnetic moment and a net magnetism of zero at all temperatures below the Néel temperature. Antiferromagnetic materials are weakly magnetic in the absence or presence of an applied magnetic field.
Similar to ferromagnetic materials the magnetic interactions are held together by exchange interactions preventing thermal disorder from overcoming the weak interactions of magnetic moments. When disorder occurs it is at the Néel temperature.
The Curie–Weiss law is an adapted version of Curie's law.
The Curie–Weiss law is a simple model derived from a mean-field approximation. It works well when the material's temperature T is much greater than the corresponding Curie temperature TC, i.e. T ≫ TC; it fails, however, to describe the magnetic susceptibility χ in the immediate vicinity of the Curie point, because of local fluctuations between atoms.
Neither Curie's law nor the Curie–Weiss law holds for T < TC.
Curie's law for a paramagnetic material:
χ = M/H = μ0·N·g²·J(J+1)·μB² / (3·kB·T) = C/T
where kB is the Boltzmann constant and:
|χ||the magnetic susceptibility; the influence of an applied magnetic field on a material|
|M||the magnetic moments per unit volume|
|H||the macroscopic magnetic field|
|B||the magnetic field|
|C||the material-specific Curie constant|
|µ0||the permeability of free space. Note: in CGS units it is taken to equal one.|
|g||the Landé g-factor|
|J(J + 1)||the eigenvalue for eigenstate J2 for the stationary states within the incomplete atoms shells (electrons unpaired)|
|µB||the Bohr Magneton|
|N||the number of magnetic moments per unit volume|
The Curie–Weiss law is then derived from Curie's law to be:
χ = C / (T − TC)
where TC is the Curie temperature.
For full derivation see Curie–Weiss law.
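Not from the original article: a small numerical sketch of the Curie–Weiss form χ = C/(T − TC). The Curie constant is arbitrary; the TC value used is the well-known Curie temperature of iron (1043 K).

```python
# Illustrative sketch: Curie-Weiss susceptibility chi = C / (T - Tc),
# valid only for temperatures above the Curie point.

C = 1.0       # material-specific Curie constant (arbitrary, illustrative)
TC = 1043.0   # Curie temperature of iron, in kelvins

def susceptibility(T: float) -> float:
    if T <= TC:
        raise ValueError("the Curie-Weiss law applies only for T > Tc")
    return C / (T - TC)

# chi grows without bound as T approaches Tc from above, signalling the
# onset of spontaneous ferromagnetic order at the Curie point.
for T in (2000.0, 1500.0, 1100.0, 1050.0):
    print(f"T = {T:6.1f} K   chi = {susceptibility(T):8.4f}")
```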
Approaching Curie temperature from above
As the Curie–Weiss law is an approximation, a more accurate model is needed when the temperature T approaches the material's Curie temperature TC.
Magnetic susceptibility is observed above the Curie temperature.
An accurate model of the critical behaviour of the magnetic susceptibility, with critical exponent γ, is:
χ ∝ 1 / (T − TC)^γ
As T approaches TC from above, the denominator tends to zero and the magnetic susceptibility approaches infinity, allowing magnetism to occur. This spontaneous magnetism is a property of ferromagnetic and ferrimagnetic materials.
Approaching Curie temperature from below
Magnetism depends on temperature, and spontaneous magnetism occurs below the Curie temperature. An accurate model of the critical behaviour of the spontaneous magnetisation, with critical exponent β, is:
M ∝ (TC − T)^β
The critical exponent differs between materials; in the mean-field model it is taken as β = 1/2 for T ≪ TC.
The spontaneous magnetisation approaches zero as the temperature increases towards the material's Curie temperature.
Approaching absolute zero (0 kelvins)
The spontaneous magnetism, occurring in ferromagnetic, ferrimagnetic and antiferromagnetic materials, approaches zero as the temperature increases towards the material's Curie temperature. Spontaneous magnetism is at its maximum as the temperature approaches 0 K. That is, the magnetic moments are completely aligned and at their strongest magnitude of magnetism due to no thermal disturbance.
In paramagnetic materials temperature is sufficient to overcome the ordered alignments. As the temperature approaches 0 K, the entropy decreases to zero, that is, the disorder decreases and becomes ordered. This occurs without the presence of an applied magnetic field and obeys the third law of thermodynamics.
Both Curie's law and the Curie–Weiss law fail as the temperature approaches 0 K. This is because they depend on the magnetic susceptibility which only applies when the state is disordered.
Ising model of phase transitions
The Ising model is mathematically based and can analyse the critical points of phase transitions in ferromagnetic order; the electron spins have magnitudes of ±1/2. The spins interact with their neighbouring dipole electrons in the structure, and here the Ising model can predict their behaviour with each other.
This model is important for solving and understanding the concepts of phase transitions and hence solving the Curie temperature. As a result, many different dependencies that affect the Curie temperature can be analysed.
For example, the surface and bulk properties depend on the alignment and magnitude of spins and the Ising model can determine the effects of magnetism in this system.
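The sketch below is not from the article; it is a compact, illustrative 2D Ising model with Metropolis updates (spins of ±1 standing in for the ±1/2 electron spins, temperature in units of J/kB). Near the model's critical temperature the magnetisation collapses, mimicking the loss of ferromagnetic order at the Curie point.

```python
# Illustrative 2D Ising model with Metropolis dynamics on an N x N lattice
# with periodic boundaries. Magnetisation |m| is high (ordered) well below
# the critical temperature and small (disordered) above it.
import math
import random

N = 20     # lattice size
J = 1.0    # ferromagnetic exchange coupling (illustrative units)

def neighbours_sum(s, i, j):
    return (s[(i + 1) % N][j] + s[(i - 1) % N][j]
            + s[i][(j + 1) % N] + s[i][(j - 1) % N])

def magnetisation_at(T, sweeps=400):
    s = [[1] * N for _ in range(N)]                     # start fully ordered
    for _ in range(sweeps * N * N):
        i, j = random.randrange(N), random.randrange(N)
        dE = 2 * J * s[i][j] * neighbours_sum(s, i, j)  # energy cost of a flip
        if dE <= 0 or random.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]                          # Metropolis rule
    return abs(sum(map(sum, s))) / (N * N)

# The exact 2D critical temperature is Tc = 2J / ln(1 + sqrt(2)) ~ 2.27.
for T in (1.5, 2.0, 2.27, 3.0):
    print(f"T = {T:4.2f}   |m| = {magnetisation_at(T):.2f}")
```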
Weiss domains and surface and bulk Curie temperatures
A material's structure consists of intrinsic magnetic moments which are separated into domains called Weiss domains. This can result in ferromagnetic materials having no spontaneous magnetism, as the domains can balance each other out. The position of particles can therefore have different orientations around the surface than in the main part (bulk) of the material. This property directly affects the Curie temperature, as there can be a bulk Curie temperature TB and a different surface Curie temperature TS for a material.
This allows the surface to remain ferromagnetic above the bulk Curie temperature, when the bulk state is disordered; i.e. ordered and disordered states occur simultaneously.
The surface and bulk properties can be predicted by the Ising model, and electron capture spectroscopy can be used to detect the electron spins, and hence the magnetic moments, on the surface of the material. An average total magnetism is taken from the bulk and surface temperatures to calculate the Curie temperature of the material, noting that the bulk contributes more.
The angular momentum of an electron is either +ħ/2 or −ħ/2, because it has a spin of 1/2, and this gives the electron a specific size of magnetic moment: the Bohr magneton. Electrons orbiting around the nucleus in a current loop create a magnetic field which depends on the Bohr magneton and the magnetic quantum number. Therefore, the spin and orbital contributions to the magnetic moments are related and affect each other. Spin angular momentum contributes twice as much to the magnetic moment as orbital angular momentum.
For terbium, a rare-earth metal with a high orbital angular momentum, the magnetic moment is strong enough to affect the order above its bulk transition temperatures. It is said to have a high anisotropy on the surface; that is, it is highly directed in one orientation. Terbium remains ferromagnetic on its surface above its Curie temperature, while its bulk becomes ferrimagnetic; at higher temperatures its surface remains ferrimagnetic above its bulk Néel temperature, before becoming completely disordered and paramagnetic with increasing temperature. The anisotropy in the bulk differs from the surface anisotropy just above these phase changes, as the magnetic moments will be ordered differently, or ordered as in paramagnetic materials.
Changing a material's Curie temperature
Composite materials, that is, materials composed of other materials with different properties, can change the Curie temperature. For example, a composite which contains silver can create spaces for oxygen molecules in bonding, which decreases the Curie temperature, as the crystal lattice will not be as compact.
The alignment of magnetic moments in the composite material affects the Curie temperature. If the materials' moments are parallel with each other, the Curie temperature will increase; if perpendicular, it will decrease, as either more or less thermal energy will be needed to destroy the alignments.
Preparing composite materials through different temperatures can result in different final compositions which will have different Curie temperatures. Doping a material can also affect its Curie temperature.
The density of nanocomposite materials changes the Curie temperature. Nanocomposites are compact structures on a nano-scale. A structure may be built from regions with high and low bulk Curie temperatures, yet it will have only one mean-field Curie temperature. A higher density of low-Curie-temperature regions lowers the mean-field Curie temperature, and a higher density of high-Curie-temperature regions significantly raises it. In more than one dimension the Curie temperature begins to increase, as the magnetic moments need more thermal energy to overcome the ordered structure.
The size of particles in a material's crystal lattice changes the Curie temperature. Owing to the small size of nanoparticles, the fluctuations of electron spins become more prominent; this results in the Curie temperature drastically decreasing as the particle size decreases, because the fluctuations cause disorder. The size of a particle also affects the anisotropy, causing the alignment to become less stable and thus leading to disorder in the magnetic moments.
The extreme of this is superparamagnetism, which only occurs in small ferromagnetic particles; there the fluctuations are very influential, causing magnetic moments to change direction randomly and thus create disorder.
The Curie temperature of nanoparticles is also affected by the crystal lattice structure: body-centred cubic (bcc), face-centred cubic (fcc) and hexagonal close-packed (hcp) lattices all have different Curie temperatures, because the magnetic moments react to their neighbouring electron spins. fcc and hcp have tighter structures and, as a result, have higher Curie temperatures than bcc, as the magnetic moments have stronger effects when closer together. This is described by the coordination number, which is the number of nearest neighbouring particles in a structure. The coordination number is lower at the surface of a material than in the bulk, which makes the surface less significant as the temperature approaches the Curie temperature. In smaller systems the coordination number of the surface is more significant, and the magnetic moments have a stronger effect on the system.
Although fluctuations in particles can be minuscule, they depend heavily on the structure of the crystal lattice, as the particles react with their nearest neighbours. Fluctuations are also affected by the exchange interaction, as parallel-facing magnetic moments are favoured and therefore experience less disturbance and disorder; a tighter structure thus produces stronger magnetism and a higher Curie temperature.
Pressure changes a material's Curie temperature. Increasing pressure on the crystal lattice decreases the volume of the system. Pressure directly affects the kinetic energy of the particles, as movement increases, and the resulting vibrations disrupt the order of the magnetic moments. This is similar to temperature, which also increases the kinetic energy of particles and destroys the order of magnetic moments and magnetism.
Pressure also affects the density of states (DOS). Here the DOS decreases, causing the number of electrons available to the system to decrease. This leads to the number of magnetic moments decreasing, as they depend on electron spins. One would therefore expect the Curie temperature to decrease; however, it increases. This is the result of the exchange interaction, which favours aligned parallel magnetic moments, since electrons cannot occupy the same space at the same time; as this effect strengthens with the decreasing volume, the Curie temperature increases with pressure. The Curie temperature is thus made up of a combination of dependencies on the kinetic energy and the DOS.
The concentration of particles also affects the Curie temperature when pressure is being applied and can result in a decrease in Curie temperature when the concentration is above a certain percent.
Orbital ordering changes the Curie temperature of a material. Orbital ordering can be controlled through applied strains and determines the wavefunction of a single electron, or of paired electrons, inside the material. Having control over the probability of where the electron will be allows the Curie temperature to be altered. For example, the delocalised electrons can be moved onto the same plane by strains applied within the crystal lattice.
The Curie temperature is seen to increase greatly because the electrons are packed together in the same plane; they are forced to align by the exchange interaction, which increases the strength of the magnetic moments and prevents thermal disorder at lower temperatures.
Curie temperature in ferroelectric materials
In analogy to ferromagnetic and paramagnetic materials, the term Curie temperature (TC) is also applied to the temperature at which a ferroelectric material transitions to being paraelectric. Hence, TC is the temperature where ferroelectric materials lose their spontaneous polarisation as a first- or second-order phase change occurs. In the case of a second-order transition, the Curie–Weiss temperature T0, which defines the maximum of the dielectric constant, is equal to the Curie temperature. However, the Curie temperature can be 10 K higher than T0 in the case of a first-order transition.
|Below TC||Above TC|
|Ferroelectric||↔ Dielectric (paraelectric)|
|Antiferroelectric||↔ Dielectric (paraelectric)|
|Ferrielectric||↔ Dielectric (paraelectric)|
|Helielectric||↔ Dielectric (paraelectric)|
Ferroelectric and dielectric
Materials are only ferroelectric below their corresponding transition temperature T0. Ferroelectric materials are all pyroelectric and therefore have a spontaneous electric polarisation as the structures are unsymmetrical.
Ferroelectric materials' polarization is subject to hysteresis (Figure 4); that is, it depends on the material's past state as well as its current state. As an electric field is applied, the dipoles are forced to align and polarisation is created; when the electric field is removed, polarisation remains. The hysteresis loop depends on temperature; as a result, as the temperature is increased and reaches T0, the two curves become one curve, as shown in the dielectric polarisation (Figure 5).
A heat-induced ferromagnetic-paramagnetic transition is used in magneto-optical storage media for erasing and writing new data. Famous examples include the Sony MiniDisc format, as well as the now-obsolete CD-MO format. Curie-point electromagnets have been proposed and tested for actuation mechanisms in passive safety systems of fast breeder reactors, where control rods are dropped into the reactor core if the actuation mechanism heats up beyond the material's Curie point. Other uses include temperature control in soldering irons and stabilizing the magnetic field of tachometer generators against temperature variation.
|
Here you will explore linear functions and graphs. You will begin with functions. You will learn how to identify, describe and write functions. This will involve function rules, function notation and function tables. Next, you will learn about slope and about how to use intercepts. Then, you will explore linear equations and slope together and will learn new ways to write linear equations. Next, you will recognize, solve and graph linear systems of equations. Finally, you will solve and graph linear inequalities.
First, you learned all about functions. You learned how to identify functions, how to evaluate function rules and how to use input/output tables. You learned how to find solutions for equations in two variables, and you learned how to graph functions.
Then you learned about slope. You learned to identify the slope in an equation as well as on a graph. You learned how to write equations in different forms including standard form and slope-intercept form. You saw how to identify the slope and the y-intercept in both an equation and on a graph.
Finally, you learned about linear systems of equations and inequalities. With systems of linear equations, you learned how to identify them and to solve them through substitution and through graphing. Then you learned how to solve linear inequalities and how to graph them on the coordinate plane.
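A minimal sketch (not part of the course text) of solving a system of two linear equations in slope-intercept form by substitution, the method described above; the two equations are made up for the example.

```python
# Illustrative: solve y = 2x + 1 and y = -x + 7 by substitution.
# Both equations are in slope-intercept form y = m*x + b.

m1, b1 = 2.0, 1.0    # y = 2x + 1
m2, b2 = -1.0, 7.0   # y = -x + 7

# Substitution: set m1*x + b1 = m2*x + b2, solve for x, then back-substitute.
x = (b2 - b1) / (m1 - m2)
y = m1 * x + b1
print(x, y)  # 2.0 5.0 -> the lines intersect at the point (2, 5)
```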
|
Free Kindergarten Counting and Number Sense Worksheets
Welcome to TLSBooks.com! This page features numerous math worksheets to help the kindergarten child improve their understanding of numbers and the counting sequence. As always, you are encouraged to review materials to ensure they challenge and enrich the learning process.
We hope you enjoy the worksheets on this page. In addition to the number sense and counting worksheets found here we have numerous kindergarten addition and subtraction worksheets available. Our preschool shapes page has additional worksheets suitable for the kindergarten child.
In order to view and print worksheets from this site you will need Adobe Reader version 6 or later. You may download the latest version of the free Adobe Reader here.
Printing Tip: If a worksheet page does not appear properly, reload or refresh the .pdf file.
Kindergarten Counting and Number Sense Worksheets
Kindergarten Number Sense Worksheets
Show That Number Worksheet 2 - Draw objects to match the number in each row.
Pre-Math worksheet 1 - Students will follow the directions and are required to use correct colors, count to 5, and recognize first, last, and middle.
Add One, Add Two - Students will practice pencil control when they follow the directions and add one or two shapes to the picture.
Count and Color Series II, Worksheets 3-4 - Students will practice counting to ten and improve fine motor skills when they color items to match each number shown.
See, Say, Write, and Count - Look at the numerals 1-10, say each number, trace the number word, write the number word, and count the items.
Largest and Smallest Numbers - This fun robot theme worksheet makes identifying large and small numbers a lot of fun!
Recognizing Numbers 1-10 - Find and color the numbers 1-10 on this worksheet.
Which Number is Bigger? - Students will circle the largest number in each row and write it on the line.
Number Match - Draw a line to match objects to numbers.
My First Number Word Search - Find and circle the number words one through ten in this math word search puzzle.
Ten! A Halloween Worksheet - Draw additional items in each row to equal ten.
Larger and Smaller Numbers - Identify the larger or smaller numbers up to 15.
Missing Numbers to Twenty - Students will write in the missing numbers to complete each sequence of four numbers.
Number Fun Worksheets 3 and 4 - Identifying greatest and least numbers as well as ordering numbers to 20.
Fill in the Missing Numbers - This packet includes ten, half-page worksheets in which children fill in the missing numbers to 20.
Number Words Worksheet A and B - Students will match the number on the left with the number word on the right.
Dino Numbers 1-9 and 10-18 - These two worksheets reinforce number word recognition to eighteen. Students will read each number word and write the number on the dinosaur.
Let's Count - Students will practice printing the numbers in order from 0-9 and 10-19 on two worksheets.
Dot-to-Dot Puppy - Connect the dots from 1 to 30 and color the picture.
Number Charts Set 1 - This set of seven number charts includes counting to 100, count by 2's, 3's, 4's, odd numbers, even numbers, and random missing numbers.
Missing Numbers to 10 - The first worksheet in this set instructs students to fill in the missing numbers to 5. The second worksheet requires the student to fill in missing numbers up to 10.
Counting Worksheets
Count and Circle Worksheet 1 - Read the numeral at the beginning of each row and circle that many items (up to 5).
Following Directions Worksheet 2 - This worksheet emphasizes color recognition and counting skills.
Counting with Kittens - Look at the picture then answer the questions about numbers to 5.
Counting with Puppies - Look at the picture then answer the questions about numbers to 5.
Fun With Fruit - Students will count the pieces of fruit in each row and then follow specific directions to circle, draw rectangles around, and count the various items.
One Two Three Worksheets 1-2 - The worksheets presented in this file require students to count, draw, add, and subtract within 5.
St. Patrick's Day Ordinal Numbers - Students will follow the directions when coloring the first through fifth items in each row.
Count and Color - Students will follow the directions and color the correct number of boxes within 5.
Happy Shapes - This fun worksheet reinforces shape and color recognition while counting to 7.
Count and Draw Carrots and Strawberries - Draw the matching number of carrots or strawberries in each row.
Circle Ten - Circle ten objects from each set.
Halloween Count and Color Worksheets 1 and 2 - Students will count to 8 and build color recognition with these fun Halloween math worksheets.
Counting Dinosaur Dots to 10 - Students will count up to 10 dots on each dinosaur and write the number on the line.
Farm Tally Mark Worksheet - Students will make a tally mark representing the number of farm animals with 2 or 4 legs. They will then add the tally marks to show the total number of tally marks made.
Count and Color Dogs - Students will count to 15, write the number, and color the dogs.
Count and Color Dragon Spots - Students will count to 20 and make two pictures look alike.
Counting to Thirty - Count the objects in each group, and write the number on the line.
All worksheets created by Tracey Smith
Did you know? This page features over 55 kindergarten math worksheets.
|
If you have used a network file server of any kind, you know the convenience of being able to access files that reside at a shared location. Using a word-processing application that runs on your computer, you can easily open a document that physically resides on the file server.
Now, imagine a word processor that enables you to open and view a document that resides on any computer on the Internet. You can view the document in its full glory, with formatted text and graphics. If the document makes a reference to another document (possibly one that resides on yet another computer), you can open that linked document by clicking the reference. That kind of easy access to distributed documents is essentially what the Web provides.
Of course, the documents have to be in a standard format, so that any computer (with appropriate Web software) can access and interpret them. In addition, a standard protocol is necessary for transferring Web documents from one system to another.
The standard Web document format is HyperText Markup Language (HTML), and the standard protocol for exchanging Web documents is HyperText Transfer Protocol (HTTP).
A Web server is the software that sends HTML documents (and other files as well) to any client that makes the appropriate HTTP requests. Typically, a Web browser is the client software that actually downloads an HTML document from a Web server and displays the contents graphically.
The World Wide Web is a combination of Web servers and HTML documents that contain a variety of information. Imagine the Web as a giant book whose pages are scattered throughout the Internet. You use a Web browser running on your computer to view the pages.
The Web pages-HTML documents-are linked by network connections that resemble a giant spider web, so you can see how the Web got its name. The 'World Wide' part refers to the fact that the Web pages are linked around the world.
Like the pages of real books, Web pages contain text and graphics. Unlike the pages of real books, however, Web pages can contain multimedia information such as images, video clips, digitized sound, and cross-references, called links, that can actually take the user to the page referred to.
The links in a Web page are references to other Web pages. You follow (click) the links to go from one page to another. The Web browser typically displays these links as underlined text (in a different color) or as images. Each link is like an instruction to the reader-such as 'For more information, please consult Chapter 14'-that you might find in a real book. In a Web page, all you have to do is click the link, and the Web browser brings up the page referred to, even if it's on a different computer.
The term hypertext refers to nonlinear organization of text (as opposed to the sequential, linear arrangement of text in most books or magazines). The links in a Web page are referred to as hypertext links; by clicking a link, you can jump to a different Web page, which is an example of nonlinear organization of text.
This arrangement raises a question. In a real book, you might ask the reader to go to a specific chapter or page. How does a hypertext link indicate the location of the Web page in question? Each Web page has a special name, called a Uniform Resource Locator (URL). A URL uniquely specifies the location of a file on a computer, as shown in Figure 14-1.
The directory path in Figure 14-1 can contain several subdirectories as indicated by the slash.
As Figure 14-1 illustrates, a URL has this sequence of components:
Protocol-This is the name of the protocol the Web browser uses to access the data that reside in the file the URL specifies. In Figure 14-1, the protocol is http://, which means that the URL specifies the location of a Web page. Following are the common protocol types and their meanings:
file:// specifies the name of a local file that is to be opened and displayed. You can use this URL to view HTML files without having to connect to the Internet. For example, file:///usr/share/doc/HTML/index.html opens the file /usr/share/doc/HTML/index.html from your Red Hat Linux system.
ftp:// specifies a file that is accessible through File Transfer Protocol (FTP). For example, ftp://ftp.purdue.edu/pub/uns/NASA/nasa.jpg refers to the image file nasa.jpg from the /pub/uns/NASA/ directory of the FTP server ftp.purdue.edu. (If you want to access a specific user account by FTP, use a URL of the form ftp://username:password@ftp.somesite.com/ with the user name and password embedded in the URL.)
http:// specifies a file that is accessible through the HyperText Transfer Protocol (HTTP). This is the well-known format of URLs for all websites, such as http://www.redhat.com/ for Red Hat's home page.
https:// specifies a file that is to be accessed through a Secure Sockets Layer (SSL) connection, which is a protocol designed by Netscape Communications for encrypted data transfers across the Internet. This form of URL is typically used when the Web browser sends sensitive information such as credit card number, user name, and password to a Web server. For example, a URL such as https://some.site.com/secure/takeorder.html might display an HTML form that requests credit card information and other personal information such as name, address, and phone number.
mailto: specifies an email address you can use to send an email message. For example, mailto:webmaster@someplace.com refers to the Webmaster at the host someplace.com.
news:// specifies a newsgroup you can read by means of the Network News Transfer Protocol (NNTP). For example, news://news.psn.net/comp.infosystems.www.authoring.html accesses the comp.infosystems.www.authoring.html newsgroup at the news server news.psn.net. If you have a default news server configured for the Web browser, you can omit the news server's name and use the URL news:comp.infosystems.www.authoring.html to access the newsgroup.
telnet:// specifies a user name and a system name for remote login. For example, the URL telnet://guest:bemyguest@someplace.com/ logs in to the host someplace.com with the user name guest and password bemyguest.
Domain name-This contains the fully qualified domain name of the computer on which the file specified by the URL resides. You can also specify an IP address in this field (see Chapter 6 for more information on IP addresses). The domain name is not case sensitive.
Port address-This is the port address of the server that implements the protocol listed in the first part of the URL (see Chapter 6 for a discussion of port addresses). This part of the URL is optional; there are default ports for all protocols. The default port for HTTP, for example, is 80. Some sites, however, may configure the Web server to listen to a different port. In such a case, the URL must include the port address.
Directory path-This is the directory path of the file being referred to in the URL. For Web pages, this field is the directory path of the HTML file. The directory path is case sensitive.
Filename-This is the name of the file. For Web pages, the filename typically ends with .htm or .html. If you omit the filename, the Web server returns a default file (often named index.html). The filename is case sensitive.
HTML anchor-This optional part of the URL makes the Web browser jump to a specific location in the file. If this part starts with a question mark (?) instead of a hash mark (#), the browser takes the text following the question mark to be a query. The Web server returns information based on such queries.
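If you want to pick a URL apart programmatically, Python's standard urllib.parse module mirrors the components just listed. The URL below is made up for illustration:

```python
from urllib.parse import urlparse

# A made-up URL containing every component described above.
url = "https://www.example.com:8080/docs/guide/index.html#section2"

parts = urlparse(url)
print(parts.scheme)    # protocol: 'https'
print(parts.hostname)  # domain name: 'www.example.com'
print(parts.port)      # port address: 8080
print(parts.path)      # directory path and filename: '/docs/guide/index.html'
print(parts.fragment)  # HTML anchor: 'section2'
```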
The HyperText Transfer Protocol-the protocol that underlies the Web-is called HyperText because Web pages include hypertext links. The Transfer Protocol part refers to the standard conventions for transferring a Web page across the network from one computer to another. Although you really do not have to understand HTTP to set up a Web server or use a Web browser, I think you'll find it instructive to know how the Web works.
Before I explain anything about HTTP, you should get some firsthand experience of it. On most systems, the Web server listens to port 80 and responds to any HTTP requests sent to that port. Therefore, you can use the Telnet program to connect to port 80 of a system (if it has a Web server) to try some HTTP commands.
To see an example of HTTP at work, follow these steps:
Make sure your Linux PC's connection to the Internet is up and running. (If you use SLIP or PPP, for example, make sure that you have established a connection.)
Type the following command:
telnet www.gao.gov 80
After you see the Connected... message, type the following HTTP command:
GET / HTTP/1.0
and press Enter twice. In response to this HTTP command, the Web server returns some useful information, followed by the contents of the default HTML file (usually called index.html).
The following is what I get when I try the GET command on the U.S. General Accounting Office's website:
telnet www.gao.gov 80
Trying 22.214.171.124...
Connected to www.gao.gov.
Escape character is '^]'.
HTTP/1.1 200 OK
Date: Sat, 15 Feb 2003 21:35:20 GMT
Server: Apache/1.3.27 (Unix) PHP/4.2.3 mod_ssl/2.8.12 OpenSSL/0.9.6g
X-Powered-By: PHP/4.2.3
Connection: close
Content-Type: text/html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<HTML>
<HEAD>
<TITLE>The United States General Accounting Office</TITLE>
... (lines deleted) ...
</HEAD>
... (lines deleted) ...
</BODY>
</html>
Connection closed by foreign host.
When you try this example with Telnet, you see exactly what the Web server sends back to the Web browser. The first few lines are administrative information for the browser. The server returns this information:
A line that shows that the server uses HTTP protocol version 1.1 and a status code of 200 indicating success: HTTP/1.1 200 OK
The current date and time. A sample date and time string looks like this:
Date: Sat, 15 Feb 2003 21:35:20 GMT
The name and version of the Web-server software. For example, for a site running the Apache Web server version 1.3.27 with the PHP hypertext processor version 4.2.3, the server returns the following string:
Server: Apache/1.3.27 (Unix) PHP/4.2.3 mod_ssl/2.8.12 OpenSSL/0.9.6g
The mod_ssl phrase refers to the fact that the Apache server has loaded the mod_ssl module that supports secure data transfers through the Secure Sockets Layer (SSL) protocol.
The type of document the Web server returns. For HTML documents, the content type is reported as follows:
Content-Type: text/html
The document itself follows the administrative information. An HTML document has the following general layout:
<html>
<head>
<title>Document's title goes here</title>
</head>
<body optional attributes go here >
... The rest of the document goes here.
</body>
</html>
You can identify this layout by looking through the listing that shows what the Web server returns in response to the GET command. Because the example uses a Telnet command to get the document, you see the HTML content as lines of text. If you were to access the same URL (http://www.gao.gov) with a Web browser (such as Mozilla), you would see the page in its graphical form, as shown in Figure 14-2.
The example of HTTP commands shows the result of the GET command. GET is the most common HTTP command; it causes the server to return a specified HTML document.
The other two HTTP commands are HEAD and POST. The HEAD command is almost like GET: it causes the server to return everything in the document except the body. The POST command sends information to the server; it's up to the server to decide how to act on the information.
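As a rough, modern counterpart to the telnet exercise, this Python sketch (using the standard http.client module) sends a HEAD and then a GET request to the same host used above. Note that today's servers often answer with a redirect rather than 200 OK, so treat the output as illustrative:

```python
import http.client

# Open a plain HTTP connection on port 80, as in the telnet example.
conn = http.client.HTTPConnection("www.gao.gov", 80)

# HEAD: the server returns only the administrative headers, no body.
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)        # e.g., 200 OK, or a 301 redirect
print(resp.getheader("Content-Type"))  # e.g., text/html
resp.read()  # drain the (empty) body so the connection can be reused

# GET: the server returns the headers plus the document itself.
conn.request("GET", "/")
resp = conn.getresponse()
body = resp.read()
print(len(body), "bytes returned")
conn.close()
```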
|
To understand how the absolute value can be applied to the real world, we’ll look at two topics:
- What is considered normal with regard to the temperature of the human body and
- Fuel economy of two vehicles: a Honda Odyssey and a Nissan Altima.
The absolute value of a number can be thought of as the value of the number without regard to its sign. For example |3| = 3 and |-5| = 5. When plotted on a number line, it’s the distance from zero.
We use the absolute value when combining a positive number and a negative number: subtract the smaller absolute value from the larger and give the result the sign of the number that has the greater absolute value.
For example, 2 – 9 = -7 because 2 – 9 can be rewritten as 2 + (-9); the difference between 9 and 2 is 7, and -9 has the larger absolute value, making the result negative.
What would the graph of y = |x| look like?
What is Normal?
An absolute value function can be used to show how much a value deviates from the norm. The average internal body temperature of humans is 98.6° F. The temperature can vary by as much as .5° and still be considered normal. On a number line, the normal temperature range for a healthy human appears below.
As a function, the equation is y = |x – 98.6|.
The x-axis corresponds to the temperature of the person in question. What does the y-axis represent?
To graph the equation y = |x – 98.6|, substitute various values in for x and plot the points.
Your resulting graph should look like the following:
However, to represent a normal temperature, the range (or y-values) of the graph must be limited to 0 ≤ y ≤ 0.5.
Therefore, the final graph would be a “V” from the point (98.1, .5) down to (98.6, 0) and back up to (99.1, .5).
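The same idea is easy to check numerically. Here is a short, hypothetical Python sketch that applies y = |x - 98.6| and the 0.5-degree tolerance; the sample readings are made up:

```python
def deviation_from_normal(temp_f):
    """How far a reading is from 98.6 degrees F, i.e., y = |x - 98.6|."""
    return abs(temp_f - 98.6)

TOLERANCE = 0.5 + 1e-9  # tiny slack for floating-point rounding

for temp in (98.1, 98.6, 99.1, 101.2):  # made-up sample readings
    y = deviation_from_normal(temp)
    status = "normal" if y <= TOLERANCE else "outside the normal range"
    print(f"{temp:5.1f} F -> deviation {y:.1f} -> {status}")
```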
Minivans, due to their size and purpose, are not typically very fuel efficient vehicles.
The EPA’s estimates for fuel efficiency for this van are shown below.
Based on what you now know about the absolute value function, can you write an equation that describes the van’s estimated city miles per gallon fuel efficiency?
What would the graph look like?
To be within the government’s estimated fuel efficiency range for city driving for this van, what would the range of the function be? (Note the small print below the 17 MPG rating showing expected variation in economy.)
Now write an equation for the estimated highway fuel efficiency. What would the range of the function be for highway driving? What would the graph look like?
On a recent outing, the van was filled with gas. Each time the tank is filled, Trip B is reset. Based on the images below, calculate the gas mileage for the previous tank of gas.
Was the MPG within the expected range?
Was the van driven primarily in the city or on the highway?
What was the cost of gas per gallon at the time it was filled?
To learn more or extend your knowledge on these topics see:
EPA Investigation Prompts Carmaker to Correct Inflated Mileage Claims
|
An angle is defined as the amount of rotation between two rays. There are two systems for measuring angles in mathematics: degrees and radians. A full rotation in degrees is 360°. A full rotation in radians is τ (tau) radians, or approximately 6.283 radians. For higher level math, radians are the preferred system to measure angles.
An acute angle is an angle that is smaller than 90 degrees, or τ/4 radians.
An obtuse angle is an angle that is larger than 90 degrees (τ/4 radians) but smaller than 180 degrees (τ/2 radians).
A perpendicular angle, sometimes also referred to as a square or right angle, is exactly 90 degrees, or τ/4 radians.
Complementary angles can visually be denoted as two angles whose measures sum to a perpendicular or square angle.
Supplementary angles can visually be denoted as two angles whose measures sum to 180 degrees, or τ/2 (π) radians.
The degree angle system divides a full rotation into 360 degrees.
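Because both systems describe fractions of one full rotation, converting between them is a single proportion. A minimal Python sketch using the built-in tau constant:

```python
import math

TAU = math.tau  # one full rotation in radians, about 6.283

def degrees_to_radians(deg):
    # deg/360 of a rotation equals rad/tau of a rotation
    return deg * TAU / 360.0

def radians_to_degrees(rad):
    return rad * 360.0 / TAU

print(degrees_to_radians(90))       # 1.5707... (tau/4)
print(radians_to_degrees(TAU / 2))  # 180.0
```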
|
All Maths Worksheets by Subject
Use these word problems to help your child practise ratio and proportion. Answers included at the bottom of the sheet.
Use this worksheet for children to work out different coordinates according to their colour on the grid.
This worksheet shows a range of numbers with one decimal place that need to be put into order from largest to smallest.
This worksheet gives a range of decimal numbers for children to order, along with instructions on how to do this.
A worksheet to help your child understand what happens to a decimal number when it is multiplied by 10, 100 and 1000.
A worksheet to show your child how to multiply a number with two decimal places by a whole number. Answers sheet included.
Your child can use this grid to test themselves on their multiplication facts. They then need to relate their multiplication facts to their division facts.
A worksheet encouraging children to use co-ordinates in the four quadrants by drawing up a treasure map.
In KS1 maths your child will learn to calculate using a number square. Print out our versions (plain, jungle-themed and very colourful!) to help them practise addition and subtraction at home.
|
The different types of problems for each operation are not subtle variations: They are major differences. Also, for each operation, the conventional algorithm for that operation reinforces only one of the types at the expense of the other: The standard method for addition reflects “combining,” that for subtraction, “take away,” that for multiplication, “grouping/repeated addition” and that for division, “grouping/repeated subtraction.” This is one of the reasons many elementary school students have difficulty with arithmetic word problems that involve the other types: Unwittingly, they’ve been “taught” not to understand them. Another reason, and perhaps the most compelling, is that eight different types of problems is too many to keep straight in one’s mind.
The Combiners and its companion book, The Separaters, are about a new way of teaching arithmetic word problems, a way that reduces the eight types of problems to two broad categories, "combining" or "separating," and then how it's done, "just" or "neatly," where "just" means not necessarily in equal amounts and "neatly" means by or into 2s, 3s, 4s .... The Combiners does so for addition and multiplication featuring Motley Crab Adder and Sir Crab Multiplier, respectively. The Separaters does so for subtraction and division spotlighting Scruffy Scrap and "Neat" Nic, respectively. The actions of the characters are then examined using the following 2-step heuristic, in particular, the 2nd step, since, as the titles of the books suggest, we already know what's happening in The Combiners, combining, and we already know what's happening in The Separaters, separating.
1. What’s happening, combining or separating?
2. How’s it happening, “just” or “neatly” (by or into 2s, 3s, 4s…)?
As the diagram on the back cover shows, the heuristic transforms combining/separating actions into operations: If the action is combining and “just” combining, it’s addition. If combining and “neatly” (by 2s, 3s, 4s…), multiplication. If separating and “just” separating, subtraction. If separating and “neatly” (into 2s, 3s, 4s…), division. ALL arithmetic word problems are combining or separating problems and then “just” or “neatly” problems. ALL may be solved with the 2-step heuristic.
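Restated as code, the heuristic is a two-key lookup. This sketch is only an illustration of the mapping just described:

```python
# (what is happening, how it is happening) -> arithmetic operation
OPERATION = {
    ("combining",  "just"):   "addition",
    ("combining",  "neatly"): "multiplication",
    ("separating", "just"):   "subtraction",
    ("separating", "neatly"): "division",
}

# The tent-pole problem below joins unequal lengths: "just" combining.
print(OPERATION[("combining", "just")])  # addition
```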
The following illustrate how each of the two types of addition, subtraction, multiplication and division problems are combining or separating problems and may be reduced to “just” or “neatly” problems.
Addition as “Just” Combining
A tent pole is stored in three sections. The bottom section is 2 feet long, the middle one 4 feet long, and the top one 3 feet long. How tall is the pole?
Using snap cubes for feet, a 2-stick, 4-stick, and 3-stick would be joined or "just" combined to show the height of the pole: 2 + 4 + 3 = 9, so the pole is 9 feet tall.
|
Bilingualism is the ability of an individual to speak in two languages and to utilize them for different purposes. The degree of bilingualism is defined as the levels of linguistic proficiency that a bilingual must attain in both languages (Ng & Wigglesworth, 2007). There are various factors that may affect the acquisition of the degree of bilingualism in home, school and work settings, including the age at which the language is acquired, to whom the language is utilized, the manner in which the language is used, and the frequency of usage of the language (Ng & Wigglesworth, 2007).
There are two contexts in which bilinguals acquire their skills in using two languages: primary and secondary. Primary contexts pertain to a child’s acquisition of both languages in a naturalistic way in the absence of any structured instruction, while secondary contexts pertain to a child’s acquisition of one of the languages in a formal setting, usually school (Ng & Wigglesworth, 2007). Children, who are able to acquire two languages in a primary context during their infanthood, adopt the languages due to natural input in the environment, usually provided by the parents, siblings, caregivers (Ng & Wigglesworth, 2007).
However, when the child enters his or her early childhood, the input may be provided by other sources, like the wider community or the extended family (Ng & Wigglesworth, 2007). According to Ng and Wigglesworth (2007), age plays a key role in the development of bilingualism because there is a strong relationship between the age of acquisition and the ultimate achievement of language proficiency at different linguistic levels.
The authors add that attitudes, motivation, and contextual factors such as exposure have been found to strongly affect the final attainment of the learners' language proficiency level. Bilingualism has a psychosocial dimension that can greatly affect a child (Bialystok, 2001). The language a person speaks has a role in the formation of his or her identity, and speaking a language that is not completely natural may interfere with the child's construction of self (Bialystok, 2001).
A child who is bilingual due to relocation, especially unwanted relocation, may dislike the new community language he or she has learned despite his or her proficiency with it (Bialystok, 2001). Factors that affect bilingual children must take into account attitudes to the language and the role of language in forming ethnic and cultural affiliations (Bialystok, 2001). The reasons why children become bilingual include education, immigration, extended family, dislocation, temporary residence in another country, or being born in a place where bilingualism is normal (Bialystok, 2001).
Social factors that affect the child's development of bilingualism include parents' educational level and their expectations for children's education; the degree and role of literacy in the home and the community; language proficiency in the main language used; objectives for using the second language; support of the community for the second language; and identity with the group who speaks the second language (Bialystok, 2001). The quality and quantity of the interaction also affect the child's acquisition of two languages.
Attitude has been associated with language proficiency, with bilinguals' usage of their two languages, and with bilinguals' perception of other communities and of themselves (Ng & Wigglesworth, 2007). Attitude has also been linked to the strength of bilingual communities and to the loss of language within the community. Furthermore, it is a powerful force that emphasizes the experience of being bilingual and the willingness of members of a minority group to contribute to the maintenance of a minority language (Ng & Wigglesworth, 2007).
Language attitudes comprise three major components: cognition, affect, and readiness for action. The affective component may not be similar to the cognitive component, while the readiness-for-action component analyzes whether feelings or thoughts in the cognitive and affective components translate into action (Ng & Wigglesworth, 2007). There are different types of bilingual acquisition in childhood.
In the ‘one person, one language’ type of acquisition, parents have different native languages with each having some degree of competence in the other’s language, the language of one of the parents is the dominant language in the community, and the parents can speak their own language to the child from birth (Romaine, 1995). In the ‘non-dominant home language’ type, the parents have different native languages, the language of one of the parents is the dominant language in the community, and both parents speak the non-dominant language to the child who is completely exposed to the main language only when outside the home (Romaine, 1995).
In the ‘non-dominant home language without community support’ type, the parents use the same mother tongue, the dominant language is not utilized by the parents, and the parents speak their own language to the child (Romaine, 1995). In the ‘double non-dominant home language without community support’ type of acquisition, the parents are using different native languages, the dominant language is different from either of the languages of the parents, and the parents each use their own language when speaking to the child from birth (Romaine, 1995).
In the ‘non-native parents’ type of acquisition, the parents use the same native language, the dominant language is similar with that of the parents, and one of the parents always speak to the child in a language which is not his or her mother tongue (Romaine, 1995). In the ‘mixed language’ type of acquisition, the parents are both bilingual, the community may also be bilingual, and parents may code-switch and mix two different languages (Romaine, 1995).
Romaine (1995) explains that various individual factors may affect the outcome in each type of bilingual acquisition in childhood, including the amount and kind of exposure to the minor language, the consistency of parents in their language choice, attitudes of children and parents towards bilingualism, and the individual personalities of children and parents.
Types of Bilingualism
A child learns his or her first language during the first five years of life. He or she spends many hours listening, repeating, and learning the first language by trial and error.
A child can learn a second language with the help of various clues that assist him or her in understanding the message, such as intonation, and by memorizing grammar rules or lists of words. The desire of a child to communicate using the second language is not powerful, particularly in a school environment. A child can learn a second language more easily when he or she is involved in or lives in a community where the second language is spoken, because this provides a chance to use it.
The three types of bilingualism are compound, coordinate and sub-coordinate bilingualism. Both coordinate and compound bilingualism are categorized as forms of early bilingualism because they are developed in early childhood. The sub-coordinate bilingualism is developed when a second language is acquired by a child after age 12. In coordinate bilingualism, an individual learns the languages in different environments and the words of the two languages are separated with each word having its own specific meaning (Romaine, 1995).
A child may acquire coordinate bilingualism when his or her parents have different native languages and each parent speak to the child using his or her own native language. He or she develops two different linguistic systems that he or she can handle them at ease. Another situation wherein a child can adopt coordinate bilingualism is when the mother tongue mastered by a child is adopted by parents who use a different language. The languages in the coordinate bilingualism are independent. A coordinate bilingual has two linguistic systems and two sets of meanings linked to them (Romaine, 1995).
In compound bilingualism, an individual acquires the two languages in the same circumstances, where they are utilized at the same time in order to have a mixed representation of the languages in the brain (Romaine, 1995). A child may acquire compound bilingualism when both parents are bilingual and use two languages when speaking to the child indiscriminately. He or she will learn to speak both languages without making an effort and accent but will never master all the difficulties of using either of the two languages.
A child who acquires compound bilingualism will not have a mother tongue. The languages in compound bilingualism are interdependent. A compound bilingual consists of one set of meanings and two linguistic systems linked to them (Romaine, 1995). In sub-coordinate bilingualism, an individual interprets words of his or her weaker language through the words of the stronger language (Romaine, 1995). The dominant or main language utilized by a sub-coordinate bilingual plays a role as a filter for the weaker language (Romaine, 1995).
The sub-coordinate bilingualism consists of a primary set of meanings formed through the first language and another linguistic system tied to them (Romaine, 1995).
The Positive Aspects of Bilingualism
According to Cummins, bilingualism has positive benefits for a child's educational and linguistic development. The author adds that a child attains a deeper understanding of language and how to utilize it effectively when he or she continues to develop his or her ability in two or more languages during his or her entire years in primary school.
A child gets more practice in processing language, particularly when he or she develops literacy in both languages and is capable of comparing and contrasting the ways his or her two languages create reality (Cummins). Research indicates that a bilingual child may also develop more flexibility in his or her thinking as a result of processing information through two different languages (Cummins).
Other positive effects of bilingualism include increased mental alertness, a broadened horizon, and an improved understanding of the relativity of all things (Appel & Muysken, 2006). A research study of 15-year-old Spanish/English bilingual children suggested that bilingualism encouraged creative thinking because of the greater flexibility in cognition demonstrated by bilinguals, who are better able to differentiate form and content (Romaine, 1995).
Another research study also mentioned that bilingual children have a better understanding of concept formation, which is a major part of intellectual development, because they were exposed to a more complex environment and a greater amount of social interaction compared to children who were acquiring only one language (Romaine, 1995). The superiority of bilingual children to monolingual children in various tasks depends on their high levels of selective attention, which is the main mechanism of their cognitive performance (Romaine, 1995).
One source of the bilingual children's improved flexibility and creativity may be the variety of semantic networks associated with words in each language (Romaine, 1995). The relation between bilingualism and the social context of language acquisition indicates a positive benefit to bilingualism.
The Negative Effects of Bilingualism
Child bilingualism has negative effects on linguistic skills because a bilingual child tends to have a verbal deficit with respect to active and passive vocabulary, sentence length, and the use of complex and compound sentences (Appel & Muysken, 2006).
Research studies have also claimed that a bilingual child demonstrates more deviant forms in his or her speech, like unusual word order and morphological errors (Appel & Muysken, 2006). Bilingualism could also endanger the intelligence of a whole ethnic community and result in split personalities (Romaine, 1995). A bilingual child has a deficit in his or her language growth and a delay in his or her mother tongue development. Some psychologists have also stated that a bilingual child is more inclined to stuttering because of the syntactic overload brought by processing and producing two languages (Romaine, 1995).
According to Appel and Muysken (2006), speaking two languages has been claimed to be a negative factor in personality or identity development because bilingual persons are expected to experience a conflict of values, identities, and world views due to their strong relation to two different languages. The authors add that research studies have indicated that bilingualism may have negative effects on personality development, but only when social conditions are not favorable.
The emotional and social difficulties of certain bilingual persons are due not to bilingualism as a cognitive phenomenon but to the social context (Appel & Muysken, 2006). In order to reduce language loss in children, Cummins suggests that parents should form a strong home language policy and offer opportunities for children to broaden the functions for which they use the mother tongue, particularly reading and writing, and the circumstances in which they can use it, like visits to the country of origin.
Teachers have an important role in helping bilingual children maintain and develop their mother tongues by communicating to them strong positive messages about the value of acquiring additional languages and about bilingualism being a key linguistic and intellectual achievement (Cummins). They must also create an instructional environment where the cultural and linguistic experience of a child is actively accepted (Cummins).
References
Appel, R. & Muysken, P. (2006). Language Contact and Bilingualism. Netherlands: Amsterdam University Press.
Bialystok, E. (2001). Bilingualism in Development: Language, Literacy, and Cognition. England: Cambridge University Press.
Cummins, J. Bilingual Children's Mother Tongue: Why Is It Important for Education? Retrieved June 7, 2009, from http://74.125.153.132/search?q=cache:f490N3_lOpAJ:www.iteachilearn.com/cummins/mother.htm+positive+effects+of+bilingualism&cd=5&hl=en&ct=clnk&gl=ph
Ng, B. C. & Wigglesworth, G. (2007). Bilingualism: An Advanced Resource Book. U.S.: Routledge.
Romaine, S. (1995). Bilingualism (2nd ed.). Malden, MA: Wiley-Blackwell.
|
In mathematics (particularly in differential calculus), the derivative is a way to show an instantaneous rate of change: that is, the amount by which a function is changing at one given point. For functions that act on the real numbers, it is the slope of the tangent line at a point on a graph. The derivative is often written as $\frac{dy}{dx}$ ("dy over dx", meaning the difference in y divided by the difference in x). The d is not a variable, and therefore cannot be cancelled out. Another common notation is $f'(x)$, the derivative of the function $f$ at the point $x$.
Definition of a derivative
The derivative of $f$ at $x$ is defined as the limit of the slope of the secant line through two nearby points:
$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$
That is, as the distance between the two x points (h) becomes closer to zero, the slope of the line between them comes closer to resembling a tangent line.
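The limit definition also suggests a way to estimate a derivative numerically: pick a small h and compute the difference quotient. This short Python sketch is illustrative and not part of the original article:

```python
def difference_quotient(f, x, h):
    """Slope of the secant line through (x, f(x)) and (x+h, f(x+h))."""
    return (f(x + h) - f(x)) / h

f = lambda x: x**2  # f'(x) = 2x, so f'(2) = 4

# As h shrinks toward zero, the secant slope approaches the tangent slope.
for h in (0.1, 0.01, 1e-6):
    print(h, difference_quotient(f, 2.0, h))  # tends to 4
```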
Derivatives of functions
Linear functions
Derivatives of linear functions (functions of the form $ax + b$, with no quadratic or higher terms) are constant. That is, the derivative in one spot on the graph will remain the same on another.
When the dependent variable directly takes $x$'s value ($y = x$), the slope of the line is 1 in all places, so $\frac{dy}{dx} = 1$ regardless of where the position is.
When $y$ modifies $x$'s value by adding or subtracting a constant value ($y = x + c$ or $y = x - c$), the slope is still 1, because the change in $y$ and the change in $x$ do not change if the graph is shifted up or down. That is, the slope is still 1 throughout the entire graph and its derivative is also 1.
Power functions
Power functions (in the form of $x^n$) behave differently from linear functions, because their exponent and slope vary.
Power functions, in general, follow the rule that $\frac{d}{dx}x^n = nx^{n-1}$. That is, if we give $n$ the number 6, then $\frac{d}{dx}x^6 = 6x^5$.
Another example, which is less obvious, is the function $\frac{1}{x}$. This is essentially the same, because $\frac{1}{x}$ can be simplified to use exponents: $\frac{1}{x} = x^{-1}$, so $\frac{d}{dx}\frac{1}{x} = -x^{-2} = -\frac{1}{x^2}$.
In addition, roots can be changed to use fractional exponents, where their derivative can be found: $\sqrt{x} = x^{1/2}$, so $\frac{d}{dx}\sqrt{x} = \frac{1}{2}x^{-1/2} = \frac{1}{2\sqrt{x}}$.
Exponential functions
An exponential is of the form $ab^{f(x)}$, where $a$ and $b$ are constants and $f(x)$ is a function of $x$. The difference between an exponential and a polynomial is that in a polynomial $x$ is raised to some power, whereas in an exponential $x$ is in the power. The general rule is $\frac{d}{dx}ab^{f(x)} = ab^{f(x)} \cdot f'(x) \cdot \ln(b)$.
Example 1
$\frac{d}{dx}e^x = e^x \cdot 1 \cdot \ln(e) = e^x$, since $\ln(e) = 1$.
Example 2
$\frac{d}{dx}10^{2x} = 10^{2x} \cdot 2 \cdot \ln(10)$, applying the rule with $b = 10$ and $f(x) = 2x$.
Logarithmic functions
The derivative of the natural logarithm is the reciprocal: $\frac{d}{dx}\ln(x) = \frac{1}{x}$.
Take, for example, $\ln(5x)$. This can be reduced to (by the properties of logarithms): $\ln(5) + \ln(x)$.
The logarithm of 5 is a constant, so its derivative is 0. The derivative of $\ln(x)$ is $\frac{1}{x}$. So, $\frac{d}{dx}\ln(5x) = 0 + \frac{1}{x} = \frac{1}{x}$.
For derivatives of logarithms not in base $e$, such as $\log_{10}(x)$, this can be reduced to: $\log_{10}(x) = \frac{\ln(x)}{\ln(10)}$, so $\frac{d}{dx}\log_{10}(x) = \frac{1}{x\ln(10)}$.
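These rules can be verified with a computer algebra system. The sketch below uses the third-party SymPy library (an assumption, since the article itself does not mention it):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

print(sp.diff(x**6, x))           # 6*x**5, the power rule
print(sp.diff(1/x, x))            # -1/x**2
print(sp.diff(sp.log(5*x), x))    # 1/x
print(sp.diff(sp.log(x, 10), x))  # 1/(x*log(10))
```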
Trigonometric functions
The derivatives of the basic trigonometric functions are $\frac{d}{dx}\sin(x) = \cos(x)$, $\frac{d}{dx}\cos(x) = -\sin(x)$, and $\frac{d}{dx}\tan(x) = \sec^2(x)$.
Properties of derivatives
Derivatives can be broken up into smaller parts where they are manageable (as they have only one of the above function characteristics), because the derivative of a sum is the sum of the derivatives, and constant factors can be pulled out. For example, $\frac{d}{dx}(3x^6 + x^2 - 6)$ can be broken up as: $\frac{d}{dx}3x^6 + \frac{d}{dx}x^2 - \frac{d}{dx}6 = 18x^5 + 2x - 0 = 18x^5 + 2x$.
Uses of derivatives
A function's derivative can be used to search for the maxima and minima of the function, by searching for places where its slope is zero.
Derivatives are used in Newton's method, which helps one find the zeros (roots) of a function. One can also use derivatives to determine the concavity of a function, and whether the function is increasing or decreasing.
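As a sketch of how Newton's method uses the derivative, here is a minimal Python implementation; the function, its derivative, and the starting point are illustrative choices:

```python
def newtons_method(f, f_prime, x0, tol=1e-10, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the step is smaller than tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Find the positive root of f(x) = x^2 - 2, i.e., the square root of 2.
root = newtons_method(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
print(root)  # 1.4142135623...
```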
|
Measuring the Diameter of the Earth's Core with Seismic Waves Around the Globe
Abstract
When an earthquake occurs, seismic shock waves travel out through the earth from the source of the event. The shock waves travel through the earth or along the Earth's surface, and can be recorded at remote monitoring stations. Some of the waves that travel through the earth are blocked or refracted by the Earth's liquid core, which means that monitoring stations located certain distances from the earthquake do not detect these waves. This creates a "seismic shadow" that you can use to estimate the diameter of the Earth's core. This geology science project shows you how.
Estimate the diameter of the Earth's core by measuring seismic waves around the globe.
Andrew Olson, Ph.D., Science Buddies
Teisha Rowland, Ph.D., Science Buddies
Deana Nickerson, U.S. Air Force
The shock waves spreading out from an earthquake are called seismic waves (from the Greek word for earthquake). There are two general types of seismic waves: body waves and surface waves.
- Body waves travel through the Earth's interior.
- Surface waves, which are analogous to water waves, travel just beneath the Earth's surface.
There are two types of body waves, P-waves and S-waves. P-waves (also called primary waves) are compression waves. Like sound waves, they consist of compressions and rarefactions of the material through which they travel. The compressions and rarefactions are in the same direction that the wave is traveling. S-waves (also called secondary waves) are transverse (or shear) waves, meaning that the ground moves perpendicularly to the direction of travel. S-waves have much higher amplitude than P-waves, but travel more slowly. S-waves carry more destructive force than P-waves.
Another difference between P- and S-waves is that S-waves cannot travel through the Earth's core, while P-waves can. S-waves can therefore be detected by seismometers near the epicenter of an earthquake, but not by more distant seismometers. P-waves can be detected by both local and some distant seismometers. Figure 1 below is a cross-section of the Earth, showing how P-waves and S-waves travel through the various layers. Because of the varying density of the layers, the waves are refracted as they pass through the different layers. This is analogous to the refraction of light waves when they pass from air to water, for example.
Figure 1. "Cross section of the whole Earth, showing the complexity of paths of earthquake waves. The paths curve because the different rock types found at different depths change the speed at which the waves travel. Solid lines marked P are compressional waves; dashed lines marked S are shear waves. S waves do not travel through the core but may be converted to compressional waves (marked K) on entering the core (PKP, SKS). Waves may be reflected at the surface (PP, PPP, SS)." (Robertson, date unknown).
Looking at how P- and S-waves travel through the Earth has helped seismologists determine the composition of Earth's core. To understand this, look at Figure 2 below, which is another cross-section of the Earth, this time showing how Earth's core causes a seismic shadow, which is an area where seismic waves from a given earthquake cannot be detected. Seismologists found that if an earthquake is detected more than a certain distance away from its epicenter, the seismic waves detected are radically different. Specifically, after a certain distance, the primary P- and S-waves disappear almost completely. This is because the primary S-waves cannot travel through the Earth's core, and the primary P-waves that go through the core are refracted, as shown in Figure 2 below. However, surface waves are still present (as the surface waves travel just below Earth's surface, not through the core). Even further away from the epicenter, some P waves are detected, as these are the P-waves that traveled through the Earth's core and were refracted (shown as path "K" in Figure 2). However, there are still no primary S-waves. Seismologists determined that Earth must have a molten, fluid outer core to explain these observations—this explains the different seismic shadows observed for the P- and S-waves.
Figure 2. The Earth's liquid outer core creates a seismic shadow distant from the location of a quake. See text above for details. (Louie, 1996b)
The location of the seismic shadow can actually be used to estimate the diameter of the Earth's core. To understand how this works, take a look at Figure 3 below. If you know that the Earth's radius is about 6370 kilometers (km), assume that the S- and P-waves travel in a straight line through the Earth (when the P-waves do not go through the core), and know the angle between the earthquake's epicenter and the beginning of the seismic shadow (as measured at the center of the Earth), you can use some trigonometry to calculate the radius of Earth's core! (You can see in Figure 2 above that the S- and P-waves do not travel in a straight line, but here we are using a straight line to give an approximation.) In Figure 3, you can see that 105° is assumed to be the angle where the seismic shadow begins to take effect, but you will be determining the actual angle where this happens. In this geology science project, you will use global seismometer data from earthquakes to measure the actual angle between earthquakes and their seismic shadows, and use the angle you find to calculate the diameter of the Earth's core.
Figure 3. Using the angle between a given earthquake and its seismic shadow to estimate the diameter of the Earth's core. In the diagram, the question mark labels the radius of the Earth's core, which is what you will be solving for. The angle between the earthquake and its seismic shadow is assumed to be 105°, but you will be determining the actual angle in this science project. "S" is the maximum distance that P- or S-waves travel before reaching the seismic shadow. (These waves do not travel in a straight line, as shown in Figure 2 above, but we are using a straight line here to give an approximation.) Once you determine the maximum angle between earthquakes and their seismic shadows, you can use this information along with a little trigonometry and the fact that the Earth's radius is about 6370 km to figure out the radius of the Earth's core. (You do not need to know "S.") (Louie, 1996b)
Take a look at Figure 4 below to see how you can use global seismometer data to determine where the seismic shadow is for a given earthquake. In Figure 4, seismic wave data are shown from three different seismometer stations that are progressively more distant from the earthquake. The distance of each seismometer station from the earthquake source is shown on the y-axis, and elapsed time since the earthquake is plotted on the x-axis. By connecting the seismogram components (e.g., P-wave, S-wave, surface wave [also called the L-wave]) in each seismogram, you can construct a travel-time curve, showing the speed of each component as it travels through the Earth. You will be using this type of diagram in this science project to see when the S- and P-waves disappear from their expected position because of the core's seismic shadow.
Figure 4. Travel time curves for P-waves, S-waves and surface waves. The y-axis shows the distance from the earthquake event, and the x-axis shows the elapsed time since the event. (USGS, 2006)
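One classic use of travel-time curves is estimating how far away an earthquake was from the gap between the P and S arrivals at a single station. The Python sketch below is only a back-of-the-envelope illustration: the wave speeds are assumed, constant values, whereas real speeds vary with depth, as Figure 1 shows.

```python
VP = 6.0  # assumed average P-wave speed in the crust, km/s
VS = 3.5  # assumed average S-wave speed in the crust, km/s

def distance_from_sp_gap(gap_seconds):
    # gap = d/VS - d/VP, so d = gap / (1/VS - 1/VP)
    return gap_seconds / (1.0 / VS - 1.0 / VP)

print(round(distance_from_sp_gap(30.0)))  # about 252 km for a 30 s gap
```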
In this geology science project, you will use the Global Earthquake Explorer program to examine worldwide earthquake data. You will use this data to estimate the starting point for the "seismic shadow" of the primary S- and P-waves. Finally, you will use the seismic shadow measurements and some trigonometry to estimate the diameter of the Earth's core.
Terms and Concepts
- Seismic waves
- Body waves, including P-waves and S-waves (also called P and S phases)
- Surface waves, also called L-waves
- Earth's core
- Epicenter of an earthquake
- Seismic shadow
- Seismogram or seismographs
- Travel-time curve
- Can you explain the differences between P-waves and S-waves?
- How do P-waves and S-waves look different on a seismogram, such as shown in Figure 4 above?
- What do you expect to see on a seismogram taken in the seismic shadow of a given earthquake?
- What causes earthquakes?
This science project uses Global Earthquake Explorer (GEE) software, a Java-based program with versions for all three major flavors of personal computer (Windows, Mac, Linux). You can download the program and user manual from:
- The Global Earthquake Explorer (GEE). (2006). About GEE. Department of Geological Sciences, University of South Carolina and the IRIS Consortium. Retrieved January 8, 2013, from http://www.seis.sc.edu/gee/about.html.
To help you get started on your background research, here are some useful articles on the passage of seismic waves through Earth's interior, seismograms, and historic earthquake information:
- Robertson, E.C. (2011, January 14). The Interior of the Earth. United States Geological Survey, (USGS) General Interest Report. Retrieved January 8, 2013, from http://pubs.usgs.gov/gip/interior/.
- Louie, J. (1996, Oct. 10). Earth's Interior. The Nevada Seismological Laboratory, University of Nevada, Reno. Retrieved January 8, 2013, from http://crack.seismo.unr.edu/ftp/pub/louie/class/100/interior.html.
- United States Geological Survey (USGS). (2012, July 14). Seismographs - Keeping Track of Earthquakes. U.S. Department of the Interior. Earthquake Hazards Program. Retrieved January 8, 2013, from https://earthquake.usgs.gov/learn/topics/keeping_track.php
- United States Geological Survey (USGS). (n.d.). Search Earthquake Catalog. U.S. Department of the Interior. Earthquake Hazards Program. Retrieved April 9, 2018, from https://earthquake.usgs.gov/earthquakes/search/
Materials and Equipment
- Computer with high-speed Internet access and printer
- Pencil or pen
- Lab notebook
Measuring the Diameter of the Earth's Core with Seismic Waves Around the Globe
Downloading and Installing the Software
This project uses Global Earthquake Explorer (GEE) software, a Java-based program with versions for all three major flavors of personal computer (Windows, Mac, Linux). You can download the program and user manual from:
Global Earthquake Explorer (GEE). (2006). About GEE. Department of Geological Sciences, University of South Carolina and the IRIS Consortium. Retrieved January 8, 2013, from http://www.seis.sc.edu/gee/about.html
- Click the appropriate link for your type of computer, and follow the instructions to download the program and install it on your computer.
- Note that the program requires high-speed Internet access in order to work properly.
- The program requires Java, but the installer should automatically take care of this for you if the Java Runtime Environment is not already installed on your computer.
- If you have problems with the installation, you can find complete documentation for the program (in pdf format, requires Adobe Acrobat) at: http://www.seis.sc.edu/gee/docu.html.
- Before running the program, take some time to read through the user's manual that comes with it so that you are familiar with how the program works.
Obtaining and Analyzing Earthquake Data
- Start the Global Earthquake Explorer program on your computer.
- On the startup screen, click on the option, "Explore Recent Earthquakes," as shown in Figure 5 below.
Figure 5. Global Earthquake Explorer startup screen (GEE, 2006).
You will see a world map (similar to the one shown in Figure 6 below) displaying earthquake activity from the past 7 days.
- Noteworthy earthquakes are displayed as circles.
- The map is automatically centered on the largest earthquake in the time period.
- The size of the circles is related to the magnitude of each quake.
- The color of the circles is related to the depth of each quake.
- When you 'mouse over' the circle for each earthquake, the status line at the bottom of the map displays information about the quake.
- The blue triangles show seismic stations with available data. (Be patient, it takes awhile for the program to check on what data is available.)
Figure 6. Global Earthquake Explorer World Map tab screen shot (GEE, 2006).
- The controls at the top of the map, as shown in Figure 7 below, are fairly self-explanatory.
Figure 7. The controls at the top of the Global Earthquake Explorer World Map.
From left to right in Figure 7, the controls allow you to:
- Select earthquakes and seismometer stations
- Zoom in and out on the map
- Pan (i.e., click-and-drag) the map
- Restore the map to normal size
- De-select all seismometer stations
- Get help
- Load seismometer data
For more details, see the Help menu in the program.
By default, the World Map shows noteworthy earthquakes from the previous week. So that you have more earthquakes to investigate, select data from previous time periods by choosing Edit/Earthquakes/Noteworthy Earthquakes from the program menu.
- You will see a dialog box like the one shown in Figure 8 below which you can use for selecting archived earthquake data.
- Select earthquakes of magnitude 7 or greater for the previous year.
- You can also use the USGS Historic Worldwide Earthquakes webpage (USGS, 2012), which is listed in the Background tab in the Bibliography section, to find the dates, times, and locations of specific earthquakes of interest.
Figure 8. Noteworthy Earthquakes dialog box screenshot, showing how to select earthquakes of magnitude 7 or greater over a year.
Get seismometer data from a particular earthquake by doing the following:
Select an earthquake of interest by clicking on its colored circle, as shown in Figure 9 below. The map will re-center (east-west) on the selected earthquake, and the program will check to see which seismometer stations have data available for the selected quake. (Be patient, it takes awhile for the program to check on what data is available.)
- In your lab notebook, record the location, date, and magnitude of the earthquake you selected.
Click to select several (about seven to ten) blue seismometer stations at increasing distances from the earthquake. Include stations at a progression of distances that are less than 105° from the epicenter as well as stations that are more than 105° from the epicenter.
- Tip: When the mouse hovers over a seismometer station, the status line at the bottom displays how far away from the event the station is (in degrees).
Figure 9. Select an earthquake of interest by clicking on its circle.
Click on the "Load Selected Stations" button on the top to load seismometer data from the selected stations. The program will retrieve the data and switch to the Seismogram Display tab, as shown in Figure 10 below.
Figure 10. Clicking on "Load Selected Stations" will load seismometer data from the selected stations, as shown here.
- Each station record is shown in Figure 10 above. The distances along the y-axis reflect the distance of the station from the earthquake source, in degrees.
Change the display by clicking on the "Display" button and selecting the following settings, as shown in Figure 11 below:
- Plot as Record Section
- Maximize Amplitude
- Align on Event Origin Time
Figure 11. In the "Display" control, select the three settings shown here.
Spend some time looking at the seismometer data from each station. How do the seismograms change as the distance from the earthquake increases?
The tools at the top of the Seismogram Display tab are as follows:
- The "+" button allows you to delete a station's seismogram (by hovering over and clicking on the station's name on the left).
- The "+" and "-" magnifying glass icons allow you to zoom in and out of the data, respectively.
- The hand icon allows you to pan (i.e., click-and-drag) through the data.
- The "PDF" icon allows you to save the seismograms (requires Adobe Acrobat Reader).
For each seismogram from each station, try to locate the primary P-wave, S-wave, and L-wave, as shown in Figure 4 in the Background tab.
- Tip: The S-wave comes after the P-wave (the S-wave travels slower) and the S-wave should have a larger amplitude than the P-wave (the S-wave carries more destructive force). The L-wave should arrive after the P- and S-waves and the L-wave should have a larger amplitude than the S-wave.
- It can be challenging to identify the different waves in a seismogram. If you are unfamiliar with analyzing seismograms, you may want to look for additional images that have the P-wave, S-wave, and L-wave labeled.
- Keep in mind that waves are refracted as they pass through Earth's different layers, as shown in Figure 1 in the Background tab. For this science project, you only want to analyze the primary P- and S-waves, not the refracted waves.
- Note: Not every seismogram will have a primary P-wave, S-wave, and L-wave. If you are unsure why this is, reread the Introduction in the Background tab.
Save the seismometer data by clicking the "PDF" icon and print the seismograms out. Label the different waves in each seismogram that you identified in step 8b. Then draw travel-time curves connecting the P-wave, S-wave, and surface-wave arrivals across stations, as shown in Figure 4 in the Background tab.
- If you had trouble locating the expected primary P- and S-waves in step 8b, focus on identifying overall changes in the seismograms as the distance from the earthquake increases. At what distance (in degrees) does it look like there are waves missing from the beginning of the seismograms?
At what distance from the earthquake (in degrees) do the primary S- and P-waves no longer appear?
- In your lab notebook, record the maximum distance of the seismogram (in degrees) where you still see primary P- and S-waves and record the distance of the next seismogram (in degrees), the one in which you first no longer see P- and S-waves.
- Does it look like the P-waves return at a later distance?
- Repeat steps 7 to 8 for at least a total of ten different large earthquakes.
From the ten or more earthquakes you analyzed, what is the maximum distance (in degrees) at which the S- and P-waves disappear? (Why do you think it is important to use the maximum distance?) Record this angle in your lab notebook and use it to estimate the diameter of the Earth's core by following the method shown in Figure 12 below.
- The triangle at the top of Figure 12 is an isosceles triangle because two sides are equal (both are the radius of the earth).
Since it is an isosceles triangle, the angle can be cut in half and we can work with one of these right triangle halves, as shown in the bottom panel.
- "Your angle" is the maximum distance (in degrees) at which the S- and P-waves disappear.
Use the cosine ratio in a right triangle to solve for the radius of the core (the line in red, next to the red question mark).
- In a right triangle, the cosine of an angle equals the length of the adjacent side of the triangle (here, the radius of the Earth's core) divided by the hypotenuse (the Earth's radius, 6370 km). Note that this is the basic right-triangle cosine ratio, not the law of cosines.
- We can rearrange this formula to solve for the radius of the Earth's core, as shown at the bottom of Figure 12. Specifically, the radius of the Earth's core equals 6370 km multiplied by the cosine of half your angle.
- This process should give you the approximate radius of the Earth's core. Do not forget to multiply by two to get the diameter of the Earth's core.
Figure 12. This diagram shows you how you can calculate the radius of the Earth's core (in red, next to the question mark). For "Your angle," use the maximum distance (in degrees) at which the S- and P-waves disappear. Solve for the radius of the Earth's core using the bottom triangle, where you should use your angle divided by two.
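To make the arithmetic concrete, here is a minimal sketch of the calculation in JavaScript (our own illustration; the 105-degree input is only a placeholder, so substitute the maximum shadow-zone angle you actually measured):

    // Estimate the radius and diameter of Earth's core from the angle at
    // which the primary S- and P-waves disappear (the shadow-zone angle).
    // Assumes straight-line wave paths, as the project instructions do.
    var EARTH_RADIUS_KM = 6370;

    function coreRadiusKm(shadowAngleDegrees) {
        // Half the shadow-zone angle is the angle in the right triangle of
        // Figure 12; convert degrees to radians for Math.cos.
        var halfAngleRadians = (shadowAngleDegrees / 2) * (Math.PI / 180);
        return EARTH_RADIUS_KM * Math.cos(halfAngleRadians);
    }

    var radius = coreRadiusKm(105);   // about 3878 km for a 105-degree angle
    var diameter = 2 * radius;        // about 7756 km
    console.log(radius.toFixed(0) + " km radius, " + diameter.toFixed(0) + " km diameter");

Because real seismic waves refract rather than travel in straight lines, this straight-line estimate overstates the accepted core radius of roughly 3,480 km; the first variation near the end of this section addresses exactly that assumption.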
If you like this project, you might enjoy exploring these related careers:
Geoscientist: Just as a doctor uses tools and techniques, like X-rays and stethoscopes, to look inside the human body, geoscientists explore deep inside a much bigger patient: planet Earth. Geoscientists seek to better understand our planet, and to discover natural resources, like water, minerals, and petroleum oil, which are used in everything from shoes, fabrics, roads, roofs, and lotions to fertilizers, food packaging, ink, and CDs. The work of geoscientists affects everyone and everything.
Remote Sensing Scientist or Technologist: Have you ever climbed up high in a tree and then looked at your surroundings? You can learn a lot about your neighborhood by looking down on it. You can see who has a garden, who has a pool, who needs to water their plants, and how your neighbors live. Remote sensing scientists or technologists do a similar thing, except on a larger scale. These professionals apply the principles and methods of remote sensing (using sensors) to analyze data and solve regional, national, and global problems in areas such as natural resource management, urban planning, and climate and weather prediction. Because remote sensing scientists or technologists use a variety of tools, including radio detection and ranging (radar) and light detection and ranging (lidar), to collect data and then store the data in databases, they must be familiar with several different kinds of technologies.
- In this geology science project, you assumed the S- and P-waves were straight lines to more easily solve for the radius of the earth's core. However, you know that these waves are not actually straight lines, as shown in the Background tab in Figures 1 and 2. Devise a mathematical way to more accurately determine the diameter of the earth's core by accounting for the fact that the waves are curved and not straight.
- For a more basic project on the speed of seismic waves, see the Science Buddies Project Idea How Fast Do Seismic Waves Travel?
Ask an Expert: The Ask an Expert Forum is intended to be a place where students can go to find answers to science questions that they have been unable to find using other resources. If you have specific questions about your science fair project or science fair, our team of volunteer scientists can help. Our Experts won't do the work for you, but they will make suggestions, offer guidance, and help you troubleshoot.
Looking for more science fun?
Try one of our science activities for quick, anytime science explorations. The perfect thing to liven up a rainy day, school vacation, or moment of boredom.
This project from eduScapes includes two additional companion locations, so don't miss visiting (1) Nuclear Events, Incidents & Disasters and (2) Biographies of the Nuclear Age. Those supplementary webpages house hundreds of informational resource sites that are directly related to the topic. If you don't find what you are looking for here, then explore some more at the above locations.
- Easier - The nuclear age began with the identification of the nucleus of the atom and the discovery of the large amounts of energy released by the splitting of atoms. Nuclear science has led to the atomic bomb, nuclear power, x-rays, and radiation.
- Harder - The nuclear age began around 1900 with the discovery of radioactivity and the nucleus. It continued with examinations of the properties, structure, and reactions of atomic nuclei. The nucleus contains two kinds of particles, neutrons and protons, which make up over 99.9 percent of an atom's mass. Protons have a positive electrical charge, and neutrons have none. The number of protons determines the chemical element of an atom, while the number of neutrons determines the isotope of that element. Neutrons and protons are bound and packed together in the nucleus at extremely high density; the nuclei of all elements have the same high density. The force, or strong interaction, holding nuclei together is called the nuclear force.
- Most of the information about atomic nuclei has been gained by studying nuclear reactions. Particle accelerators are used to create a tiny beam of protons, electrons, or other particles and accelerate them to near the speed of light. The particles are then directed to strike a target nucleus, causing a reaction. Scientists then use high-precision tools to analyze the emitted radiation. Nuclear reactions can involve the fission (splitting) of very heavy nuclei or the fusion (combining) of two very light nuclei. Both fission and fusion reactions release large amounts of energy. For most purposes, the energy is controlled so that it is released in a slow, safe pattern.
- Nuclear reactions have been utilized in nuclear weapons and power generation. Research in nuclear physics has also led to the use of radioisotopes and new techniques for diagnosing and treating disease, sterilizing and preserving food, and exploring for oil.
- ABC's of Nuclear Science from Lawrence Berkeley National Laboratory
- Here you can learn about basic nuclear science and
radioactivity. The site includes experiments, a glossary
of terms, and safety information.
- Other Related Websites:
- 2) Lessons on Nuclear Physics from Physics
- 3) Nuclear Chemistry by A. Carpi from
- 4) Nuclear Energy - Fission and Fusion from The
Energy Story http://www.energyquest.ca.gov/story/chapter13.html
- 5) Nuclear Physics by C.R. Nave, Georgia State
University, at HyperPhysics
- 6) Nuclear Chemistry Lessons from Chem Zone
- Fifty Years From Trinity from The Seattle Times
- Here is a chronicle of the nuclear age. The focus is nuclear weapons, not power, beginning with the first atom bomb test at the Trinity site in New Mexico.
- Related Websites:
- 2) Atomic Archive from AJ Software &
- 3) Race to Build the Atomic Bomb by D. Prouty,
Contra Costa County Office of Education
- 4) Trinity: 50 Years Later - The Nuclear Age's
Blinding Dawn from Albuquerque Journal
- How Nuclear Radiation Works (Part 1 of 4) by M. Brain
- Nuclear radiation can be both extremely beneficial and extremely dangerous. It just depends on how we use it.
- Related Website from HowStuffWorks:
- 2) How Nuclear Power Works (Part 1 of 3) by M. Brain
- Nuclear Reaction: Why Do Americans Fear Nuclear Power? from PBS Frontline
- Here you find readings on the issue of nuclear power, such as how it works and why Americans fear it, plus lots more.
- Related Websites:
- 2) Economics of Nuclear Power http://www.uic.com.au/nip08.htm
- 3) Frequently Asked Questions About Nuclear Energy by
- 4) Nuclear Energy from Tennessee Valley
- 5) Nuclear Now http://www-formal.stanford.edu/jmc/progress/nuclearnow.html
- 6) Nuclear Power http://www.geocities.com/nigson0690/
- 7) Nuclear Power: A Clean, Safe Alternative by J.
- 8) Nuclear Power Industry: A Brief Review http://www.btinternet.com/~mike.ferris/nuclear.htm
- 9) Return of Nuclear Power By H. Rizvi from Tierra
- 10) Safety of Nuclear Power Reactors from Uranium
Information Centre http://www.uic.com.au/nip14.htm
- 11) Questions and Answers about Nuclear Energy from
Univ. of Missouri-Rolla American Nuclear Society
- After visiting several of the websites, complete one or more of the following activities:
- Stop A Meltdown! The control-room
operators of the Kärnobyl nuclear power
plant are telecommuting and are running the
plant through the Web. Go to the online
simulation at Control
The Nuclear Power Plant (Demonstration)
by H. Eriksson. Start by reading the
instructions, then try to keep the reactor
stable when component failures occur!
- Take A Nuclear Quiz. Test your
knowledge of nuclear energy at Quiz:
About the Nuclear Industry from the
World Nuclear Association.
- Complete A Nuclear WebQuest. Adapt or follow the instructions for one of the following:
- 1) Debate Over the Atomic Bomb by T.
- 2) Nuclear Chemistry WebQuest by B.
- 3) Nuclear Power WebQuest http://www.bonduel.k12.wi.us/sdob_pages/instruction_res/webfolios/nuclear_power/
- Drop the Bomb Alternatives. Half a century later, people still debate the U.S.'s bombing of Hiroshima and Nagasaki at the end of World War II. Research the events connected to the A-bomb use. Then imagine the best alternative scenarios. What would probably have occurred if the atomic bomb had not been detonated? Present your alternative history and summarize its effects on the cold war, international relations, and nuclear proliferation.
- Debate Nuclear Power. Examine the
ongoing debate over nuclear power. Consider
issues including the demand for energy,
nuclear waste, safety and operation,
sustainable resources, viable alternative
power sources, and the proliferation of
nuclear materials. Identify the strengths and
weaknesses of both sides - for and against
continued and expanded nuclear power. Then
decide which side you support and detail the
arguments and support for your decision.
- Create A Nuclear Poster. First identify the message content or aim for the poster: disarmament, safety, nuclear energy, nuclear defense, or other nuclear concerns. Then create an eye-catching poster that drives its message home for viewers. Display your finished poster.
- Campaign For A Nuclear Hero. Select a person in nuclear history whom you admire. Research your choice and then create a multimedia presentation that nominates them as "Nuclear Person of the Year." You may find some help at the companion 42eXplore webpage, Biographies of the Nuclear Age.
- Write About A Nuclear Event. Pick
a specific time or short time-period in
nuclear history and imagine that you are a
key figure in the events. This is your chance
to be a nuclear scientist, a historic person,
and to imagine the experiences of a different
time and place. Write a journal that details
your feelings about what your character is
involved in and experiencing.
- Create A Nuclear History Mural. Use artwork to depict the major events and contributions of nuclear history. Make it a visual timeline. Make your mural colorful and attractive, but be sure that it is accurate and illustrates the relationships between events, discoveries, and people.
- Websites By Kids For Kids
- Incidents from Father Ryan High School
- This project was created to promote awareness about nuclear power, weapons, and the terrifying aftermath of their use.
- More Websites
- American Nuclear Society
- This organization focuses on nuclear science and
technology including medicine, nuclear energy, food
irradiation, and nuclear techniques used in manufacturing
and processing industries.
- Related Websites:
- 2) Canadian Nuclear Society http://www.cns-snc.ca/
- 3) OECD Nuclear Energy Agency (France) http://www.nea.fr/
- 4) University of Missouri-Rolla Student Chapter of
the American Nuclear Society
- Bulletin of the Atomic Scientists from Educational Foundation for Nuclear Science (EFNS)
- Well-known for its "doomsday clock", the mission of
the EFNS is to educate citizens about global security
issues, especially the continuing dangers posed by
nuclear and other weapons of mass destruction, and about
the appropriate roles of nuclear technology.
- Not-To-Be-Missed Section:
- 2) Early Years of the Bomb http://www.thebulletin.org/research/collections/erlyearsofbmb.html
- Bureau of Atomic Tourism
- This site links to locations of atomic explosions and to exhibits on the development of atomic devices or on vehicles that were designed to deliver them.
Table of Nuclear Weapon from Tokyo Physicians
for Elimination of Nuclear Weapons
- Here charts chronicle the development and history of
the nuclear bomb from the 1700s to 1997.
- Fast Attacks & Boomers from National Museum of American History, Smithsonian Institution
- Discover how nuclear powered submarines were built,
operated and used during the Cold War.
- This site promotes the potential of an almost limitless source of energy for future generations, but it also presents some formidable scientific and engineering challenges.
- How Nuclear Medicine Works (Part 1 of 8) by C.C. Freudenrich from HowStuffWorks
- This site explains some of the techniques and terms used in nuclear medicine. You'll learn how radiation helps doctors see deeper inside the human body than they otherwise could.
- Related Websites:
- 2) Radioactivity, Isotopes and Radioisotopes from
Nature, Nuclear Reactors and Cyclotrons for Use in
Nuclear Medicine from Australian Nuclear Science and
Technology Organisation (ANSTO) http://www.ansto.gov.au/info/reports/radboyd.html
- 3) Society of Nuclear Medicine http://www.snm.org/
- How Radon Works (Part 1 of 5) by M. Brain and C. Freudenrich from HowStuffWorks
- Radon gas is completely natural. It forms during the decay of the element uranium-238. Radon gas is radioactive, and in tightly insulated houses it can accumulate to concentrations that pose a health hazard.
- International Atomic Energy Agency
- The IAEA is an independent intergovernmental, science and technology-based organization in the United Nations family that serves as the global focal point for nuclear cooperation.
- Nuclear Age Timeline from U.S. Department of Energy, Office of Environmental Management
- This historical timeline traces the nuclear age (1895-1993) from the discovery of x-rays and radioactivity to the explosion of the first atomic bomb, through the cold war and its thaw, to the cleanup of the nuclear weapons complex.
- Nuclear Control Institute
- The NCI is an anti-proliferation group formed by scholars. The site contributes to the debate over reprocessing and whether it really increases the risk of spreading plutonium and proliferation.
- Nuclear Energy Institute
- This site provides nuclear facts and quotes,
environmental preservation information and details about
careers and education in nuclear energy.
- Nuclear Files from the Nuclear Age Peace Foundation
- Explore the political and ethical dilemmas of the Nuclear Age.
- Nuclear Information
and Resource Service (NIRS)
- This information and networking center is for
citizens and environmental organizations concerned about
nuclear power, radioactive waste, radiation, and
sustainable energy issues.
- Other Antinuclear Websites:
- 2) Background Briefing on Radioactive Pollution
- 3) Citizens Alert (Nevada) http://www.citizenalert.org/
- 4) Nuclear Campaign Overview from Greenpeace
- 5) Opponents of Nuclear Power (Links-site) http://pw1.netcom.com/~res95/energy/nuclear/opposed.html
- 6) Pathways to Destruction from Greenpeace
- 7) Sustainable Energy and Anti-Uranium Service
- Nuclear Regulatory Commission
- This government group regulates U.S. commercial nuclear power plants and the civilian use of nuclear materials.
- Related Websites:
- 2) Nuclear Safety Directorate (United Kingdom)
- 3) Office of Nuclear Energy, Science &
Technology, U.S. Department of Energy
- Nuclear Weapons from Union of Concerned Scientists
- This organization employs analysis, policy initiatives, and public education to help bring about a world free of nuclear arms.
- Office of
Civilian Radioactive Waste Management from
U.S. Department of Energy
- This government program is assigned to develop and
manage a federal system for disposing of spent nuclear
fuel from commercial nuclear reactors and high-level
radioactive waste from national defense activities.
- Related Websites:
- 2) Depleted Uranium Information from Defense
Technical Information Ctr., U.S. Dept. of
- Defense http://www.deploymentlink.osd.mil/du_library/
- 3) Nuclear Waste: No Way Out? by M. Llanos from
- 4) Nuclear Waste Transportation Routes http://www.state.nv.us/nucwaste/states/us.htm
- 5) Storing Nuclear Waste from Learners Online
- 6) USDOE Addresses Environmental Legacy of Nuclear
Weapons Production by J.L. Roeder
- 7) Waste Isolation Pilot Plant (WIPP) from U.S.
Department of Energy
- This site focuses on plutonium proliferation in Europe, Japan, and the USA. It includes maps of facilities by country, graphs on electricity generation by fuel source, back issues of their newsletter, current news articles, and more.
- Secrets, Lies, and Atomic Spies from PBS NOVA
- This program chronicles the lives and covert activities of the so-called "atom spies" in the 1940's, including the big one that got away, Theodore Alvin Hall.
- Related Websites:
- 2) Bombshell Atomic Espionage Website http://www.bombshell-1.com/index.html
- 3) Historians, Physicists Mobilize to Refute Spy
Stories from American Institute of Physics
- Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water (1963)
- Read this landmark 1963 agreement between the United States and the Soviet Union that prohibited nuclear testing in the atmosphere, in outer space, and under water.
- Uranium Information Centre (Melbourne, Australia)
- This site focuses on information about nuclear energy for electricity and the uranium needed for it.
- The Virtual Nuclear Tourist by J. Gonyeau
- This website provides basic information about the different types of nuclear power plants and their principles of operation.
- What You Need to Know About Radiation by L.S.
- This is a good overview of radiation and what you should know to protect yourself and your family, and to make reasonable social and political choices.
- Related Sections:
- 2) Radiation and Life http://www.uic.com.au/ral.htm
- 3) Radiation Information Network http://www.physics.isu.edu/radinf/
- 4) Radiation Related Frequently Asked Questions
- Related Websites:
- 5) Little Lesson on Radioactivity http://www.no-nukes.org/prairieisland/lesson.html
- 6) Radiation Leak from Learners Online
- 7) Radioactivity is 100 Years Old http://wwwlapp.in2p3.fr/neutrinos/centenaire/rada.html
- World Nuclear Association
- This is the website of the global industrial organization that seeks to promote the peaceful worldwide use of nuclear power as a sustainable energy resource for the coming centuries.
- Not-To-Be-Missed Sections:
- 2) Articles and Opinions http://www.world-nuclear.org/opinion/opinion.htm
- 3) Information and Issue Briefs http://www.world-nuclear.org/info/info.htm
- 4) Introduction to Nuclear Energy http://www.world-nuclear.org/education/education.htm
- The website is neither meant to condemn nor condone
the bombing, but is meant as a way for people to express
their views on how to achieve peace, on what peace is,
and other thoughts about peace.
- Bradbury Science Museum (Los Alamos, NM), operated by the Univ. of California for the National Nuclear Security Administration of the US Department of Energy
- The museum's primary mission is to interpret Laboratory research, activities, and history.
- National Atomic Museum Virtual Tour (Albuquerque, NM)
- The goal of the museum is to provide a readily accessible repository of educational materials and information reflecting the Atomic Age, and to preserve, interpret, and exhibit to the public memorabilia of this era.
- Websites For Teachers
a Historical Perspective of the Nuclear World
from Los Alamos National Laboratory
- Before students can make decisions regarding the
futures of nuclear things, they must be well versed in
what led to our present situation and confrontations.
During this semester course, students will develop a
historical perspective of how the world arrived at this
point in time regarding nuclear science.
- Other Curriculum Materials from LANL:
- 2) Future of a Nuclear World http://set.lanl.gov/programs/cif/Curriculum/Future/futrmain.htm
- 3) Nuclear Weapons: Proliferation vs. Nonproliferation from Los Alamos National Laboratory
- 4) Storage and Disposition of Radioactive Materials
from Chornobyl (Grades 9-12) from National Geographic
- Students will read and analyze several articles
describing consequences of the 1986 explosion and fire at
a nuclear power plant in Chornobyl, Ukraine. They will
then create a map showing which countries were affected
by this disaster and how they were affected.
Nuclear Waste: A Geographic Analysis (Grade 9-12)
from National Geographic
- Students will learn how to analyze the problems
surrounding nuclear waste and to make decisions
- The events surrounding the invention and use of two
atomic weapons by the United States on Japan during WWII
are among the most controversial and significant
developments in modern American history. For this reason,
the topic provides a superb lesson for exploring the role
of technology in society.
- This lesson (partial lesson at site) examines the application of the fission process to nuclear reactors. It focuses on light water reactors (LWRs), the type used in the United States for electrical power generation.
- This lesson has learners examine the reasons for and against nuclear arms escalation, describe the climate of fear surrounding nuclear confrontation, and examine the emotions elicited by the thought of nuclear war.
- Related Lesson Plan:
- 2) Cold War and Beyond (Grades 9-12) by J. Lamb from
- One Step Closer to a Treaty (Grades 6-12) by A. Zimbalist & L. Driggs from The New York Times
- This lesson plan is designed to allow students to speak objectively about the nuclear disarmament issue and to interpret sections of the Nuclear Nonproliferation Treaty.
- Related Lesson Plans from The New York Times:
- 2) Balance of (Nuclear) Power (Grades 6-12) by D.
Lerman & J. Khan
- 3) Defense Mechanisms (Grades 6-12) by A. Hambouz
& J. Khan
- 4) Explosive Knowledge (Grades 6-12) by A. Zimbalist
& L. Driggs
- 5) Nuclear Reactions (Grades 6-12) by M. Sale &
- 6) Surrounded by Radiation (Grades 6-12) by G.
Scurletis & A. Perelman
- 7) There Must Be Something in the Water (Grades 6-12)
by B. Holmes Scott
- Radiation Protection: How Much Is Enough? (Grades 10-12) by R. Trei & F. Brown
- The objective of this laboratory exercise is to study
the effects of shielding on the amount of detectable
radioactivity from a gamma source.
- Related Lessons:
- 2) Radioactivity (Grades 6-8) by K. Dugger & L.A.
- 3) Surviving a Cosmic Invasion (Grades 6-8) by J.
Underway on Nuclear Power (Grades 9-12) from
- This lesson introduces students to the role of
nuclear submarines during the Cold War. Students will
explore the uses of nuclear submarines, the dangers faced
by their crews, and the legacy left to their generation
by the Cold War buildup.
- What Is A "Dirty" Bomb? (Grades 6-10) from PBS
- In this lesson, students determine what identifies a
bomb as a "dirty" bomb, identify threats and responses
specific to "dirty" bombs, examine government and medical
preparedness for dealing with "dirty" bombs, and survey
members of the community for their understanding.
- What's Wrong With Nuclear Power, Anyway? (Grades 6-9) by M.C. Phelps-Borrowman
- For many years now, the production and use of nuclear
energy has been both praised and condemned as a source of
electrical power for our daily living. This lesson will
give students the opportunity to find out the reasons for
the conflict of opinions in our society.
Herd immunity is a form of indirect protection from infectious disease that occurs when a large percentage of a population has become immune to an infection, thereby providing a measure of protection for individuals who are not immune. In a population in which a large number of individuals are immune, chains of infection are likely to be disrupted, which stops or slows the spread of disease; the greater the proportion of individuals in a community who are immune, the smaller the probability that those who are not immune will come into contact with an infectious individual. Individual immunity can be gained by recovering from a natural infection or through vaccination; some individuals cannot become immune due to medical reasons, and for this group herd immunity is an important method of protection. Once a certain threshold has been reached, herd immunity gradually eliminates a disease from a population; this elimination, if achieved worldwide, may result in the permanent reduction in the number of infections to zero, called eradication. This method was used for the eradication of smallpox in 1977 and for the regional elimination of other diseases.
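As an aside not stated in the passage above: in the simplest epidemiological models, the herd immunity threshold (HIT) is derived from the basic reproduction number R0, the average number of people one infectious person infects in a fully susceptible population:

    HIT = 1 - 1/R0

For example, with a commonly cited R0 of about 12 for measles, the threshold is 1 - 1/12, or roughly 92%, which is why measles control requires very high levels of immunity.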
Herd immunity does not apply to all diseases, just those that are contagious, meaning that they can be transmitted from one individual to another. Tetanus, for example, is infectious but not contagious, so herd immunity does not apply to it. The term herd immunity was first used in 1923. It was recognized as a naturally occurring phenomenon in the 1930s, when it was observed that after a significant number of children had become immune to measles, the number of new infections temporarily decreased, including among susceptible children. Mass vaccination to induce herd immunity has since become common and has proved successful in preventing the spread of many infectious diseases. Opposition to vaccination has posed a challenge to herd immunity, allowing preventable diseases to persist in or return to communities that have inadequate vaccination rates. In addition, some individuals either cannot develop immunity after vaccination or for medical reasons cannot be vaccinated. Newborn infants, for instance, are too young to receive many vaccines, either for safety reasons or because passive immunity renders the vaccine ineffective.
Individuals who are immunodeficient due to HIV/AIDS, leukemia, bone marrow cancer, an impaired spleen, chemotherapy, or radiotherapy may have lost any immunity that they had and vaccines may not be of any use for them because of their immunodeficiency. Vaccines are imperfect as some individuals' immune systems may not generate an adequate immune response to vaccines to confer long-term immunity, so a portion of those who are vaccinated may lack immunity. Lastly, vaccine contraindications may prevent certain individuals from becoming immune. In addition to not being immune, individuals in one of these groups may be at a greater risk of developing complications from infection because of their medical status, but they may still be protected if a large enough percentage of the population is immune. High levels of immunity in one age group can create herd immunity for other age groups. Vaccinating adults against pertussis reduces pertussis incidence in infants too young to be vaccinated, who are at the greatest risk of complications from the disease.
This is important for close family members, who account for most of the transmissions to young infants. In the same manner, vaccinating children against pneumococcus reduces pneumococcal disease incidence among younger, unvaccinated siblings. Vaccinating children against pneumococcus and rotavirus has also had the effect of reducing pneumococcus- and rotavirus-attributable hospitalizations for older children and adults, who do not receive these vaccines. Influenza is more severe in the elderly than in younger age groups, but influenza vaccines lack effectiveness in this demographic due to a waning of the immune system with age; the prioritization of school-age children for seasonal flu immunization, which is more effective than vaccinating the elderly, has been shown to create a certain degree of protection for the elderly. For sexually transmitted infections (STIs), high levels of immunity in one sex induce herd immunity for both sexes. Vaccines against STIs that are targeted at one sex result in significant declines in STIs in both sexes if vaccine uptake in the target sex is high.
Herd immunity from female vaccination does not, however, extend to homosexual males. If vaccine uptake among the target sex is low, the other sex may need to be immunized so that it can be sufficiently protected. High-risk behaviors make eliminating STIs difficult since, although most infections occur among individuals with moderate risk, the majority of transmissions occur because of individuals who engage in high-risk behaviors. For these reasons, in certain populations it may be necessary to immunize high-risk persons or individuals of both sexes to establish herd immunity. Herd immunity itself acts as an evolutionary pressure on certain viruses, influencing viral evolution by encouraging the production of novel strains, referred to in this case as escape mutants, that are able to "escape" from herd immunity and spread more easily. At the molecular level, viruses escape from herd immunity through antigenic drift, in which mutations accumulate in the portion of the viral genome that encodes the virus's surface antigen, a protein of the virus capsid, producing a change in the viral epitope.
Alternatively, the reassortment of separate viral genome segments, or antigenic shift, which is more common when there are more strains in circulation, can produce new serotypes. When either of these occurs, memory T cells no longer recognize the virus, so people are not immune to the dominant circulating strain.
Nasha Russia (also romanized as Nasha Rasha) is a Russian sketch show based on the British comedy show Little Britain, created by Comedy Club Production. It was written by producers Semyon Slepakov and Garik Martirosyan. A 2010 film based on the characters in the show was made, titled Our Russia: The Balls of Fate. The name of the show references the fact that while the name of the country is pronounced "Rossiya" in Russian, foreigners pronounce it "Russia," or as the show emphasizes, "RASHA." Thus to native Russian speakers the name of the show is "OUR RASHA." The show offers political satire of everyday life in modern-day Russia. The premise is that although Russian people recognize that many aspects of their society are in a poor state, ripe for comedy, they are still proud to live in their country, hence the title of the show. Each episode features a unique introductory monologue. The following was used in the first episode: We live in the most wonderful country in the world, while all other countries are envious of us.
We were the first to fly to outer space, and we were the first to return. We invented the hydrogen bomb, the Zhiguli automobile, and many other horrifying things. We were the ones who cultivated the virgin lands of abandoned Georgian mineral water. We were the ones who reversed the flow of women. We proudly call our country Rossiya, while envious foreigners call it Russia! But still, it is ours, it is NASHA RUSSIA! Ravshan and Jumshud, the most popular characters of the sketch show, are gastarbeiters from Central Asia; they work for their boss Leonid in Moscow and call him "nasyalnika". Ravshan does all the talking in bad Russian, while Jumshud is silent for the most part because he does not understand Russian. The workers' job is to make building repairs in typical Russian apartments, but something goes wrong every time because they're terrible at their jobs, satirizing the current state of general disrepair in many Soviet-era buildings throughout Russia. They act like buffoons in front of their boss, who always gets so frustrated that he calls them idiots before storming out.
As soon as their boss is gone, they discuss serious philosophical questions in their invented "native language". Ivan Dulin is the first homosexual milling-machine operator in Russia; he works at the Chelyabinsk steel factory number 69. Ivan Dulin is in love with his boss Mikhalych, who is heterosexual and refuses to sleep with Dulin. The gay miller comes up with clever plots to seduce Mikhalych, but fails. At the end of most sketches, some factory workers walk in on Dulin and Mikhalych's quarrels and assume they are having sexual intercourse. Sergey Yurievich Belyakov likes to argue with his TV every evening; he is critical of the shows he watches and always makes fun of celebrities and news reports. On nights when his wife is not at home, Belyakov sometimes watches pornography, which he also makes fun of. In the third season, he watches TV with his son, to whom he attempts to explain the shows they watch. Football club "Gazmyas" from Omsk plays in the Fourth Division and badly loses every one of their games.
Their coach Evgeny Mikhailovich Kishelsky is a sadist. He enjoys beating the players up after every game, coming close to killing their goalkeeper Gatalsky and forward Prokopenko. "Gazmyas" is a parody of Russian football, but after the Russia national football team's EURO-2008 success, the "Gazmyas" part of the show was removed and jokes about Russian football in general became less common. In the third season, the team gets kicked out of the Fourth Division and becomes "Omskaya Gazmyasochka" when Kishelsky dresses his players up like women in the hopes of playing against women's teams and thus winning. However, this plan does not work either, and the team still loses all of their games. The concierge Ludwig Aristarkhovich lives in Saint Petersburg. Unlike other Petersburgians, who are considered polite and civilized, Ludwig Aristarkhovich is an elderly security guard at an apartment building who plays nasty tricks on the tenants; these tricks include leaving excrement at people's doorsteps and writing inappropriate graffiti on the walls.
It's his way of avenging petty injustices, such as people not wiping their shoes before entering the building. Teenagers Slavik and Dimon live in Krasnodar, where they make several unsuccessful attempts at hooking up with hot girls. Slavik is always coming up with not-so-clever plans to lure the girls in, but he is too scared to act these plans out himself and instead encourages Dimon to put them into action; when Dimon fails, Slavik calls him "loshara," insisting that the plan was perfect and was just poorly executed. Some of their memorable sketches include the time when the boys were too shy to buy condoms, their attempt to buy pornography at a local video store, and, in the fourth season, their trying their luck with the girls vacationing in Anapa. Politicians Yuri Venediktovich Pronin and Victor Kharitonovich Mamonov live in the fictional city of Nefteskvazhinsk (Нефтескважинск, o
Stephen Grossberg is a cognitive scientist, computational psychologist, mathematician, biomedical engineer, and neuromorphic technologist. He is the Wang Professor of Cognitive and Neural Systems and a Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering at Boston University. Grossberg first lived in Queens, in New York City; his father died from Hodgkin's lymphoma. He moved with his older brother, Mitchell, to Jackson Heights, Queens. He attended Stuyvesant High School in lower Manhattan after passing its competitive entrance exam and graduated first in his class from Stuyvesant in 1957. He began undergraduate studies at Dartmouth College in 1957, where he first conceived of the paradigm of using nonlinear differential equations to describe neural networks that model brain dynamics, as well as the basic equations that many scientists use for this purpose today. He continued to study both psychology and neuroscience, and he received a B.A. in 1961 from Dartmouth as its first joint major in mathematics and psychology.
Grossberg went to Stanford University, from which he graduated in 1964 with an MS in mathematics, and transferred to The Rockefeller Institute for Medical Research in Manhattan. Grossberg received a PhD in mathematics from Rockefeller in 1967 for a thesis that proved the first global content addressable memory theorems about the neural learning models that he had discovered at Dartmouth; his PhD thesis advisor was Gian-Carlo Rota. Grossberg was hired as an assistant professor of applied mathematics at MIT following strong recommendations from Kac and Rota. In 1969, Grossberg was promoted to associate professor after publishing a stream of conceptual and mathematical results about many aspects of neural networks. Grossberg was hired as a full professor at Boston University in 1975, where he is still on the faculty today. While at Boston University, he founded the Department of Cognitive and Neural Systems, several interdisciplinary research centers, and various international institutions. Grossberg is a founder of the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology.
His work focuses upon the design principles and mechanisms that enable the behavior of individuals, or machines, to adapt autonomously in real time to unexpected environmental challenges. This research has included neural models of image processing. Grossberg collaborates with experimentalists to design experiments that test theoretical predictions and fill in conceptually important gaps in the experimental literature, carries out analyses of the mathematical dynamics of neural systems, and transfers biological neural models to applications in engineering and technology. He has published seventeen books or journal special issues and over 500 research articles, and he holds seven patents. Grossberg has studied how brains give rise to minds since he took the introductory psychology course as a freshman at Dartmouth College in 1957. At that time, Grossberg introduced the paradigm of using nonlinear systems of differential equations to show how brain mechanisms can give rise to behavioral functions. This paradigm is helping to solve the classical mind/body problem and is the basic mathematical formalism used in biological neural network research today.
In particular, in 1957-1958, Grossberg discovered widely used equations for short-term memory, or neuronal activation. One variant of these learning equations, called Instar Learning, was introduced by Grossberg in 1976 into Adaptive Resonance Theory and Self-Organizing Maps for the learning of adaptive filters in these models; this learning equation was also used by Kohonen in his applications of Self-Organizing Maps starting in 1984. Another variant of these learning equations, called Outstar Learning, was used by Grossberg starting in 1967 for spatial pattern learning. Outstar and Instar learning were combined by Grossberg in 1976 in a three-layer network for the learning of multi-dimensional maps from any m-dimensional input space to any n-dimensional output space; this application was called Counter-propagation by Hecht-Nielsen in 1987. Building on his 1964 Rockefeller PhD thesis, in the 1960s and 1970s Grossberg generalized the Additive and Shunting models to a class of dynamical systems that included these models as well as non-neural biological models, and proved content addressable memory theorems for this more general class of models.
As part of this analysis, he introduced a Liapunov functional method to help classify the limiting and oscillatory dynamics of competitive systems by keeping track of which population is winning through time. This Liapunov method led him and Michael Cohen to discover in 1981 and publish in 1982 and 1983 a Liapunov function that they used to prove that global limits exist in a class of dynamical systems with symmetric interaction coefficients that includes the Additive and Shunting models. John Hopfield p
Columbus Square Mall was an American indoor shopping mall in Columbus, Georgia, and one of the first indoor shopping malls to open in the state. Columbus Square Mall opened in 1965. The style of the mall was typical for that time period: a single level with anchor stores on each end of the primary corridor. The two original anchors were JCPenney and Sears. One interesting fact was that the Sears store was independently owned by Sears, Roebuck and Company; the store was attached to the separately owned mall. In the late 1970s, Columbus Square expanded, adding another wing which extended from the rear of the main corridor and terminated with a third anchor, local department store Kirven's, which continued to operate a large downtown store for many years after opening the Columbus Square location. Another famous local store at Columbus Square was a women's clothing store. In early 1993, with attendance declining, Kirven's went out of business, leaving one of the mall's three primary stores vacant and beginning a slow but steady process of store closings in the rear wing of the building.
The entire rear wing was closed off, and the few remaining tenants were relocated to the front of the mall. The mall soon began to fall into a state of disrepair, and the facility began to be perceived as a place of crime and violence among local residents, further reducing attendance. JCPenney relocated to Peachtree Mall the following year, leaving the mall with only a single anchor store. In 1999, the city exercised the option to buy the mall after voters approved the building of a new library at the location with a 1% sales tax; the Sears building was not included in the purchase. The mall was soon demolished. Sears remained open as a stand-alone store, its former mall entrance walled in, until the mid-2000s, when a new Sears store opened in Columbus Park Crossing in North Columbus. At that time, the school district bought the Sears property. The site is now home to the Columbus Public Library, which opened January 3, 2005. In January 2008, the Sears building was demolished to make room for a new Muscogee County School District administration building.
This event closed the final chapter of the mall's history.
Moodtapes are a series of nature/relaxation videos and audio collections produced and filmed by award-winning producer and director Ron Roy. They were some of the first of the New Age nature documentary/music genre in the 1980s and 1990s, along with Windham Hill and Narada Productions. They feature Roy's original cinematography of natural scenery edited in harmony with soothing original instrumental music by Ron Roy and various other professional composers. Sixteen videos (including Serenity, Ocean Reflections, Whispering Waters, Nature's Bouquet, Autumn Whispers... Winter Dreams, Pacific Surf, and Contemporary Christmas), eight audio CDs, six singles, and numerous specialty music videos were released by Moodtapes and Ron Roy from 1986 to 2019. Ron Roy served as the producer and cinematographer on all the video productions and as the producer/composer of original music on Pacific Surf, Whispering Waters, Contemporary Christmas, Sizzlin' Christmas, and others. He designed all the productions' album covers and artwork utilizing the original photos he took while on location for each project.
While filming one production, Ocean Reflections, Roy joined the San Diego State University Marine Mammal Research and Conservation team, capturing bottlenose dolphin images with R. H. Defran, the director of the Cetacean Behavior Laboratory. Moodtapes musical presentations have been played in heavy rotation on Musical Starstreams, Los Angeles smooth jazz station KTWV - The Wave, and numerous radio networks coast to coast. They charted Top Twenty on the national Adult Contemporary Music charts, and Moodtapes' iTunes podcast Relax with Moodtapes achieved top ten status in the Fitness & Nutrition category. Moodtapes' only solo CD release, Energy, was produced by Ray Colcord, an ASCAP, BMI, and Drama-Logue Award winner. Colcord produced Aerosmith's second album Get Your Wings as well as numerous TV themes, such as The Simpsons, Big Brother, The Facts of Life, Silver Spoons, and Boy Meets World. Most recently, Ron Roy has become involved in Americana music as a composer and singer/songwriter.
His recent releases Now It's All Just Stuff, Bible Belted, You Could Hear The Sound of Panties Drop, and You'll Never Ever Be Alone At Christmas have received worldwide airplay on numerous terrestrial and streaming stations, including Renegade Radio Nashville, Trucker Radio Nashville, The BandWagon Network Radio, Jango Radio, ReverbNation, and more. Roy's Yuletide song "You'll Never Ever Be Alone At Christmas" charted #1 on RadioAirplay's popularity charts and was named one of the Best New Holiday Songs four years in a row in their international Independent Songwriters Holiday Contest. It was featured as the "Premiere Christmas Song" on Nashville's Worldwide Trucker Radio Network, whose radio icon DJ Stan Campbell proclaimed: "The song is so relatable... I recommend it for radio stations for Christmas!" Roy's Americana songs landed him at #1 in his hometown on ReverbNation's Americana Regional Charts and are featured continuously on iTunes and more. The Moodtapes videos and music received national critical acclaim in Billboard, the Los Angeles Times, The New York Times, and numerous others.
They have been featured on leading national entertainment television shows such as Entertainment Tonight, The 700 Club, Live with Regis, and The Oprah Winfrey Show, as well as various other regional talk shows. Various authors have recommended Moodtapes in their publications for relaxation and for treating insomnia. Moodtapes reached their greatest commercial success during the 1980s, 1990s, and early 2000s by becoming bestsellers in thousands of specialty stores in the United States, most notably The Nature Company outlets and Natural Wonders specialty stores, and by reaching an audience of millions via the Reader's Digest Video Catalogs. Bloomingdale's department stores featured Moodtapes as A Best Bet Gift Idea in 1988.
Penny Pinchers is a 2011 South Korean romantic comedy film written and directed by Kim Jung-hwan, starring Han Ye-seul and Song Joong-ki. Kim received a Best New Director nomination at the 48th Baeksang Arts Awards in 2012. Chun Ji-woong is an unemployed college graduate who continually fails job interviews and lives off an allowance from his mother, who runs a small restaurant in the countryside. Ji-woong is an eternal optimist, but having no money is cramping his dating life (he can't even afford to buy a pack of condoms), and despite living in a tiny, dingy apartment in a low-income neighborhood, he's about to get evicted when his mother abruptly cuts him off and he can't pay the rent. Gu Hong-sil lives in the apartment opposite Ji-woong's. Hong-sil is frugal; she denounces all activities that involve wasting money, such as going to church and the hospital, and dating. Romance is the last thing on Hong-sil's mind; she considers it a luxury and an unnecessary frivolity. Hong-sil's favorite hobby is depositing her savings at the bank, but her plans are brought to a screeching halt when she learns that she needs a separate bank account under someone else's name to reach her goal of ₩200 million.
So she tells Ji-woong that she'll teach him the art of penny-pinching and include him in a short-term moneymaking scheme if he follows whatever she tells him to do for the next two months. The cast includes Han Ye-seul as Gu Hong-sil; Song Joong-ki as Chun Ji-woong; Shin So-yul as Ha Kyung-joo; Lee Sang-yeob as Yang Gwan-woo; Lee Jae-won as Tae-woo; Lee Yong-joo as Chang-geun; Kim Dong-hyun as Hong-sil's father; Moon Se-yoon as Sysop; and Ra Mi-ran as Ji-woong's landlady. Penny Pinchers was released in theaters on November 10, 2011. It was not a big commercial success, grossing US$2,707,761 on 424,002 admissions. It screened at the 4th Okinawa International Movie Festival in 2012.
Reciprocal functions are functions that contain a constant numerator and x as the denominator. To find the vertical asymptote of a rational function, equate the denominator to zero and solve for x; in the parent reciprocal function f(x) = 1/x, both the x- and y-axes are asymptotes. The parent function of linear functions is y = x, and it passes through the origin; the cubic parent function y = x^3 also passes through the origin. All quadratic functions belong to a family whose parent is y = x^2. The domain of a function refers to the set of possible input values (the set of all real values of x that give real values for y), while the values taken by the function are collectively referred to as its range. The domains of radical and rational functions can be restricted, because their ratios may have zeros in the denominator, but their ranges can still be infinite. The vertex of the absolute value parent function y = |x| is found at the origin. The parent function of square root functions is y = √x; its graph shows that neither its x- nor its y-values can ever be negative. Exponential functions form another family; among their most common applications are modelling population growth and compound interest.
Exponential Growth Parent Function. To find the range of a function, first find the x-value and y-value of the vertex using the formula x = -b/2a. Just as with other parent functions, we can apply the four types of transformations—shifts, stretches, compressions, and reflections—to the parent function without loss of shape. Learn vocabulary, terms, and more with flashcards, games, and other study tools. Graphing rational functions with holes. Its graph shows that both its x and y values can never be negative. Also a Step by Step Calculator to Find Domain of a Function and a Step by Step Calculator to Find Range of a Function are included in this website. Or the domain of the function f x = 1 x − 4 is the set of all real numbers except x = 4 . Created by. Its domain, however, can be all real numbers. The parent function of a square root function is y = √x. This definition perfectly summarizes what parent functions are. Which of the following functions do not belong to the given family of functions? In Functions and Function Notation, we were introduced to the concepts of domain and range. Cubic functions share a parent function of y = x3. Range is a set of all possible output values (the ‘y” variables) In this article, we will: Being able to identify and graph functions using their parent functions can help us understand functions more, so what are we waiting for? Review the first few sections of this article and your own notes, then let’s try out some questions to check our knowledge on parent functions. Mathematics. Why don’t we start with the ones that we might already have learned in the past? Range: y is greater than or equal to 0. x-intercept: 0. y-intercept: 0. Domain and range. 6. Just as with other parent functions, we can apply the four types of transformations—shifts, stretches, compressions, and reflections—to the parent function without loss of shape. -2X2 + 3x – 1 ) range: > 1,1 -2x2 + –! The y-axes when calculating for the tangent function to graph each function ’ s study! With their graphs, you may be inclined to think of… Recent Posts ∛x increasing. Is at rest is a group of functions is y = logb x, where b is good... To find the range of rational functions with 2 as its denominator by the function at! Website www.thsprecalculus.weebly.com 2. get graph composition journal ubiquitous, so they require general! And function Notation, inequality Notation, set Builder Notation, parent functions domain and range of parent functions range Continuous increasing decreasing constant End! The highest degree and share the same parent functions and their graphs tables and equations name of parent function a... Degree will follow a similar curve and share the same parent functions and transformations domain and range of parent functions and. The most common functions used in mathematics functions used in mathematics is positive and decreases while x greater. Functions that are linearly proportional to each other all quadratic functions and their graphs tables and name! Closer to but never touches the asymptotes more interesting functions, a ] linear f! Ask for the graph y = a + bx we can see that the parent function is as! Be y = x2 – 1 ), y=0 function! `` relates two things, when x greater... Like to review, I will heavily depend on the y -axis and a range of a quadratic ( )! Commonly used radical functions and their graphs as y = x + 1,!, y=0 a domain of a function, first find the vertical asymptote a... For each ) or all real numbers taken by the function ’ s because sharing... 
The same ideas identify a parent from a formula or a graph. The term with the highest degree determines the family: a polynomial whose highest-degree term is x² belongs to the quadratic family and produces the familiar U-shaped graph, while a highest degree of three places it with y = x³. Quadratic functions also model real situations such as projectile motion, where the quadratic formula locates the zeros of the height function. The cube root parent y = ∛x is defined for all real numbers and is increasing throughout its domain, and the tangent function has vertical asymptotes wherever its denominator, cos x, equals zero. Once the parent functions and their domains and ranges are familiar, graphing any member of these families reduces to applying transformations to a curve you already know.
|
Why is it helpful to break a program's logic into functions? As programs get bigger, breaking the logic into functions keeps the program easier to understand and maintain.
What are anonymous functions, and where would you use them? In computing, an anonymous function is a function (or a subroutine) defined, and possibly called, without being bound to an identifier. Anonymous functions are convenient to pass as arguments to higher-order functions and are ubiquitous in languages with first-class functions.
Why is it important to follow good code style in your code? As the article explains, good code style helps reduce the brittleness of programs.
The Week 3 homework for track 2 is the same as for track 1. Below are the answers, written in the style recommended by JSLint, for exercises 6.1 to 6.5; track 1 shows the original answers for the same exercises.
Ex. 6.1: Write a function countZeroes, which takes an array of numbers as its argument and returns the amount of zeroes that occur in it. Use reduce. Then, write the higher-order function count, which takes an array and a test function as arguments, and returns the amount of elements in the array for which the test function returned true. Re-implement countZeroes using this function.
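These exercises come from a JavaScript text, so the expected answers are in JavaScript, but the reduce-based logic is language-neutral. Here is a minimal Python sketch of the same idea, with the function names carried over:

from functools import reduce

def count_zeroes(numbers):
    # The accumulator starts at 0 and grows by 1 for every zero seen
    return reduce(lambda total, n: total + (1 if n == 0 else 0), numbers, 0)

def count(sequence, test):
    # Higher-order version: count elements for which test(element) is true
    return reduce(lambda total, item: total + (1 if test(item) else 0), sequence, 0)

def count_zeroes_v2(numbers):
    # countZeroes re-implemented on top of count
    return count(numbers, lambda n: n == 0)

print(count_zeroes([1, 0, 2, 0, 0]))     # 3
print(count_zeroes_v2([1, 0, 2, 0, 0]))  # 3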
Ex. 6.2: Write a function processParagraph that, when given a paragraph string as its argument, checks whether this paragraph is a header. If it is, it strips off the '%' characters and counts their number. Then, it returns an object with two properties: content, which contains the text inside the paragraph, and type, which contains the tag that this paragraph must be wrapped in, "p" for regular paragraphs, "h1" for headers with one '%', and "hX" for headers with X '%' characters. Remember that strings have a charAt method that can be used to look at a specific character inside them.
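Again, the target language is JavaScript; the following is a minimal Python sketch of the described behaviour:

def process_paragraph(paragraph):
    # Count the leading '%' characters to decide whether this is a header
    level = 0
    while level < len(paragraph) and paragraph[level] == '%':
        level += 1
    if level > 0:
        # Header: strip the '%' signs (and any space that follows them)
        return {"type": "h" + str(level), "content": paragraph[level:].lstrip()}
    return {"type": "p", "content": paragraph}

print(process_paragraph("%% A subheading"))  # {'type': 'h2', 'content': 'A subheading'}
print(process_paragraph("Plain text."))      # {'type': 'p', 'content': 'Plain text.'}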
Ex. 6.3: Build a function splitParagraph which, given a paragraph string, returns an array of paragraph fragments. Think of a good way to represent the fragments. The method indexOf, which searches for a character or sub-string in a string and returns its position, or -1 if not found, will probably be useful in some way here. This is a tricky algorithm, and there are many not-quite-correct or way-too-long ways to describe it. If you run into problems, just think about it for a minute. Try to write inner functions that perform the smaller actions that make up the algorithm.
Ex. 6.4: Looking back at the example HTML document if necessary, write an image function which, when given the location of an image file, will create an img HTML element.
Ex. 6.5: Write a function renderFragment, and use that to implement another function renderParagraph, which takes a paragraph object (with the footnotes already filtered out), and produces the correct HTML element (which might be a paragraph or a header, depending on the type property of the paragraph object). This function might come in useful for rendering the footnote references:
A sup tag will show its content as 'superscript', which means it will be smaller and a little higher than other text. The target of the link will be something like "#footnote1". Links that contain a '#' character refer to 'anchors' within a page, and in this case we will use them to make it so that clicking on the footnote link will take the reader to the bottom of the page, where the footnotes live. The tag to render emphasized fragments with is em, and normal text can be rendered without any extra tags.
|
It hasn't been found anyway. You are forcing the search for it.
Nonsense. There is nothing strange about the formation of the Earth.
How did the Earth Get Here?
Clues to the Formation of the Solar System:
Inner planets are small and dense
Outer planets are large and have low density
Satellites of the outer planets are made mostly of ices
Cratered surfaces are everywhere in the Solar System
Saturn has such a low density that it can't be solid anywhere
Formation of the Earth by accretion:
Initial solar nebula consists of mixtures of grains (rock) and ices. The initial ratio is about 90% ices and 10% grains
The sun is on so there is a temperature gradient in this mixture:
In the inner part of the solar system, only things which exist as a solid at high temperature are available (so how come there is so much water on the earth? -- answer later)
So in the inner part of the solar system you can only make a rocky planet via accretion of grains.
In the outer part of the solar system, ices can exist so you can make larger planets out of the more abundant ices
Jupiter (mostly H and He) formed in a manner similar to the Sun, that is, not by accretion. Note that Jupiter can never become a star. A star is a ball of gas sufficiently hot to excite nuclear reactions. The minimum mass required for this is about 8% of the mass of the Sun, and Jupiter's mass is nearly two orders of magnitude below this limit. Jupiter will never be a star.
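A quick arithmetic check of that claim, using standard mass values:

M_SUN = 1.989e30      # kg, mass of the Sun
M_JUPITER = 1.898e27  # kg, mass of Jupiter

fusion_minimum = 0.08 * M_SUN  # rough minimum mass for hydrogen fusion

print(M_JUPITER / M_SUN)           # ~0.00095: Jupiter is about 1/1000 of a solar mass
print(fusion_minimum / M_JUPITER)  # ~84: Jupiter would need roughly 80x more mass to ignite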
Jupiter has a large mass and perturbs the orbits of objects near it. There were lots of these objects scattered between Jupiter and Pluto.
Jupiter redirected some of this cometary material into the inner solar system and most of the earth's water was delivered through comet bombardment (therefore would we be here without Jupiter?)
Steps in the accretion process:
Step 1: accretion of cm sized particles
Step 2: Physical Collision on km scale
Step 3: Gravitational accretion on 10-100 km scale
Step 4: Molten protoplanet from the heat of accretion
Final step is differentiation of the earth: Light objects float; heavy objects sink.
Iron-Nickel Core (magnetic field) and oxygen-silicon crust
In the outer part of the solar system, the same 4-step process of accretion occurred, but it was accretion of ices (cometesimals) instead of grains.
Things to note about the formation of planets via accretion
There is a lot of heat dissipated in the final accretion process resulting in initially molten objects
Any molten object of size greater than about 500 km has sufficient gravity to cause gravitational separation of light and heavy elements thus producing a differentiated body
The accretion process is inefficient; there is lots of leftover debris.
In the inner part of the solar system, the leftover rocky debris cratered the surfaces of the newly formed planets.
In the outer part of the solar system, much of the leftover rocky debris was ejected from the solar system by the large masses of the planets that formed there. Some of this material was ejected into a large "Comet Cloud" at a distance of about 100,000 AU from the Sun, and some of the leftover debris (beyond Pluto) could not be ejected, as it was far away from Uranus and Neptune, and hence remained there. This material is known as the Kuiper Belt, and it was recently discovered by the Hubble Space Telescope.
More information on the Kuiper belt and the kinds of objects that are located there can be found here
The asteroid belt represents a relic of the accretion process. A planet tried to form in that location but the gravitational influence of the large mass planet Jupiter was sufficient to accelerate the material there to high velocity. High velocity collisions between chunks of rocks cause them to be shattered and indeed, over the history of the solar system, the sizes of the largest asteroids are decreasing. The asteroid belt is not the remains of a planet that was blown up by the Death Star.
A- Energy is eternal, for example; it has no beginning.
B- We don't know how life arose, but we are getting closer and closer to solving it.
The idea of taking revenge is absurd. Why doesn't God simply change their minds instead? What will taking revenge accomplish, given that it won't put an end to revenge itself?
The definition of intellect in the TDK dictionary is "the power of thinking and comprehension." Since I can think and comprehend, I have an intellect.
..? How do you know the universe was created?
|
Government of Australia
- Formation: 1 January 1901
- Founding document: Australian Constitution
- Country: Commonwealth of Australia
- Legislature: Parliament of Australia
- Meeting place: Parliament House, Canberra
- Leader: Prime Minister of Australia
- Appointer: Governor-General of Australia
- Main organ: Federal Executive Council (de jure); Cabinet of Australia (de facto)
- Court: High Court of Australia
The Government of Australia is the government of the Commonwealth of Australia, a federal parliamentary constitutional monarchy. It is also commonly referred to as the Australian Government, the Commonwealth Government, or the Federal Government.
The legislature, also known as the Parliament of Australia, or simply Parliament, is made up of democratically-elected representatives from around Australia.
These representatives meet at Parliament House in Canberra to discuss legislation and make laws for the benefit of the nation. The issues that they can make laws on are defined by sections 51 and 122 of the Constitution.
The Parliament of the Commonwealth comprises two separate chambers: the House of Representatives (or 'the lower house') and the Senate (or 'the upper house'). The House of Representatives chamber is decorated in green and the Senate chamber in red. Because Parliament has two houses, this arrangement is called a bicameral system.
The House of Representatives has 151 members, each representing a different area of the country ('electorate'). Each electorate has roughly the same number of registered voters within its boundary, meaning that states with larger populations have more electorates and therefore more representatives in the House.
The Senate is composed of 76 members. Unlike the House of Representatives, membership of the Senate is divided evenly between the states. Each state has 12 senators, and the Northern Territory and the Australian Capital Territory have 2 senators each. The Senate was established this way to ensure that the larger states could not use their majority in the House of Representatives to pass laws that disadvantaged the smaller states.
The Constitution is silent on the role of political parties in Parliament. It does not make any reference to a government party, an opposition party or minor parties, or to roles like Prime Minister and Leader of the Opposition. These are conventions that have been adopted to assist the smooth operation of the legislature.
The executive is the administrative arm of government. The Australian Government is formed by the party or coalition of parties with the support of a majority of members in the House of Representatives.
A government minister is a member of the legislature who has been chosen to also work as part of the executive, usually with responsibility for matters on a specific topic (his/her portfolio). The main roles of the Government are to make important national decisions, develop policy, introduce bills (proposed laws), implement laws and manage government departments.
The public service, working in departments and agencies, puts those laws into operation and upholds those laws once they have begun to operate. Canberra, located in the Australian Capital Territory, is Australia's national capital. The Parliament of Australia is located in Canberra, as is most of the Australian Government public service.
The judiciary is the legal arm of the government. Independent of the legislature and the executive, it is the role of the judiciary to enforce Australia's laws. It must also ensure that the other arms of government do not act beyond the powers granted to them by the Constitution or by Parliament.
The High Court of Australia is, as its name suggests, Australia's highest court. Underneath the High Court are a number of other federal courts. The Commonwealth of Australia was formed in 1901 as a result of an agreement among six self-governing British colonies, which became the six states. The terms of this contract are embodied in the Australian Constitution, which was drawn up at a Constitutional Convention and ratified by the people of the colonies at referendums. The Australian head of state is the Queen of Australia who is represented by the Governor-General of Australia, with executive powers delegated by constitutional convention to the Australian head of government, the Prime Minister of Australia.
The Government of the Commonwealth of Australia is divided into three branches: the executive branch, composed of the Federal Executive Council, presided over by the Governor-General, which delegates powers to the Cabinet of Australia, led by the Prime Minister; the legislative branch, composed of the Parliament of Australia's House of Representatives and Senate; and the judicial branch, composed of the High Court of Australia and the federal courts. Separation of powers is implied by the structure of the Constitution, the three branches of government being set out in separate chapters (chapters I to III). The Australian system of government combines elements derived from the political systems of the United Kingdom (fused executive, constitutional monarchy) and the United States (federalism, written constitution, elected upper house), along with distinctive indigenous features, and has therefore been characterised as a "Washminster mutation".
Section 1 of the Australian Constitution creates a democratic legislature, the bicameral Parliament of Australia which consists of the Queen of Australia, and two houses of parliament, the Senate and the House of Representatives. Section 51 of the Constitution provides for the Commonwealth Government's legislative powers and allocates certain powers and responsibilities (known as "heads of power") to the Commonwealth government. All remaining responsibilities are retained by the six States (previously separate colonies). Further, each State has its own constitution, so that Australia has seven sovereign Parliaments, none of which can encroach on the functions of any other. The High Court of Australia arbitrates on any disputes which arise between the Commonwealth and the States, or among the States, concerning their respective functions.
The Commonwealth Parliament can propose changes to the Constitution. To become effective, the proposals must be put to a referendum of all Australians of voting age, and must receive a "double majority": a majority of all votes, and a majority of votes in a majority of States.
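The double-majority test is mechanical enough to state as code. A minimal sketch, assuming hypothetical per-state yes/no tallies (territory votes, which also count toward the national total, are omitted for brevity):

def referendum_passes(state_tallies):
    # state_tallies: {state_name: (yes_votes, no_votes)} for the six states
    total_yes = sum(yes for yes, no in state_tallies.values())
    total_votes = sum(yes + no for yes, no in state_tallies.values())
    national_majority = total_yes > total_votes / 2

    # A majority of votes in a majority of states: at least 4 of the 6
    states_carried = sum(1 for yes, no in state_tallies.values() if yes > no)
    state_majority = states_carried > len(state_tallies) / 2

    return national_majority and state_majority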
The Commonwealth Constitution also provides that the States can agree to refer any of their powers to the Commonwealth. This may be achieved by way of an amendment to the Constitution via referendum (a vote on whether the proposed transfer of power from the States to the Commonwealth, or vice versa, should be implemented). More commonly powers may be transferred by passing other acts of legislation which authorise the transfer and such acts require the legislative agreement of all the state governments involved. This "transfer" legislation may have a "sunset clause", a legislative provision that nullifies the transfer of power after a specified period, at which point the original division of power is restored.
In addition, Australia has several "territories", two of which are self-governing: the Australian Capital Territory (ACT) and the Northern Territory (NT). These territories' legislatures, their Assemblies, exercise powers devolved to them by the Commonwealth; the Commonwealth Parliament remains able to override their legislation and to alter their powers. Australian citizens in these territories are represented by members of both houses of the Commonwealth Parliament. The territory of Norfolk Island was self-governing from 1979 until 2016, although it was never represented as such in the Commonwealth Parliament. The other territories that are regularly inhabited—Jervis Bay, Christmas Island and the Cocos (Keeling) Islands—have never been self-governing.
The federal nature of the Commonwealth and the structure of the Parliament of Australia were the subject of protracted negotiations among the colonies during the drafting of the Constitution. The House of Representatives is elected on a basis that reflects the differing populations of the States. Thus New South Wales has 48 members while Tasmania has only five. But the Senate is elected on a basis of equality among the States: all States elect 12 Senators, regardless of population. This was intended to allow the Senators of the smaller States to form a majority and thus be able to amend or reject bills originating in the House of Representatives. The ACT and the NT each elect two Senators.
The third level of government after Commonwealth and State/Territory is Local government, in the form of shires, towns and cities. The Councils of these areas are composed of elected representatives (known as either councillor or alderman, depending on the State), usually serving part-time. Their powers are devolved to them by the State or Territory in which they are located.
Government at the Commonwealth level and the State/Territory level is undertaken by three inter-connected arms of government:
- Legislature: The Commonwealth Parliament
- Executive: The Sovereign of Australia, whose executive power is exercisable by the Governor-General, the Prime Minister, Ministers and their Departments
- Judiciary: The High Court of Australia and subsidiary Federal courts.
Separation of powers is the principle whereby the three arms of government undertake their activities largely separately from each other:
- the Legislature proposes laws in the form of Bills, and provides a legislative framework for the operations of the other two arms; the sovereign is formally a part of the Parliament, but takes no active role in these matters, except that (representing the sovereign) Governors-General, State Governors and Territory Administrators sign enactments into law through providing Royal Assent
- the Executive administers the laws and carries out the tasks assigned to it by legislation
- the Judiciary hears cases arising from the administration of the law, applying both statute law and the common law; the Australian courts cannot give an advisory opinion on the constitutionality of a law, but the High Court of Australia can determine whether an existing law is constitutional
- the Judiciary is appointed by the sovereign's representatives, on the advice of the Commonwealth or State/Territory government; but the Legislature and the Executive should not try to influence its decisions.
The Legislature makes the laws, and supervises the activities of the other two arms with a view to changing the laws when appropriate. The Australian Parliament is bicameral, consisting of the Queen of Australia, a 76-member Senate and a 151-member House of Representatives.
Twelve Senators from each state are elected for six-year terms, using proportional representation and the single transferable vote (known in Australia as "quota-preferential voting": see Australian electoral system), with half elected every three years. In addition to the state Senators, two senators are elected by voters from the Northern Territory (which for this purpose includes the Indian Ocean Territories, Christmas Island and the Cocos (Keeling) Islands), while another two senators are elected by the voters of the Australian Capital Territory (which for this purpose includes the Jervis Bay Territory). Senators from the territories are also elected using preferential voting, but their term of office is not fixed; it starts on the day of a general election for the House of Representatives and ends on the day before the next such election.
The members of the House of Representatives are elected by majority-preferential voting using the non-proportional Instant-runoff voting system from single-member constituencies allocated among the states and territories. In ordinary legislation, the two chambers have co-ordinate powers, but all proposals for appropriating revenue or imposing taxes must be introduced in the House of Representatives. Under the prevailing Westminster system, the leader of the political party or coalition of parties that holds the support of a majority of the members in the House of Representatives is invited to form a government and is named Prime Minister.
The Prime Minister and the Cabinet are responsible to the Parliament, of which they must, in most circumstances, be members. General elections are held at least once every three years. The Prime Minister has a discretion to advise the Governor-General to call an election for the House of Representatives at any time, but Senate elections can only be held within certain periods prescribed in the Constitution. The most recent general election was on 18 May 2019.
The Commonwealth Parliament and all the state and territory legislatures operate within the conventions of the Westminster system, with a recognised Leader of the Opposition, usually the leader of the largest party outside the government, and a Shadow Cabinet of Opposition members who "shadow" each member of the Ministry, asking questions on matters within the Minister's portfolio. Although the Government, by virtue of commanding a majority of members in the lower house of the legislature, can usually pass its legislation and control the workings of the house, the Opposition can considerably delay the passage of legislation and obstruct government business if it chooses.
The day-to-day business of the House of Representatives is usually negotiated between the Leader of the House, appointed by the Prime Minister, and the Manager of Opposition Business in the House, appointed by the Leader of the Opposition in the Commonwealth parliament, currently Anthony Albanese.
Head of state
The Australian Constitution dates from 1901, when the Dominions of the British Empire were not sovereign states, and does not use the term "head of state". As Australia is a constitutional monarchy, government and academic sources describe the Queen as head of state. In practice, the role of head of state of Australia is divided between two people, the Queen of Australia and the Governor-General of Australia, who is appointed by the Queen on the advice of the Prime Minister of Australia. Though in many respects the Governor-General is the Queen's representative, and exercises various constitutional powers in her name, they independently exercise many important powers in their own right. The governor-general represents Australia internationally, making and receiving state visits.
The Sovereign of Australia, currently Queen Elizabeth II, is also the Sovereign of fifteen other Commonwealth realms including the United Kingdom. Like the other Dominions, Australia gained legislative independence from the Parliament of the United Kingdom by virtue of the Statute of Westminster 1931, which was adopted in Australia in 1942 with retrospective effect from 3 September 1939. By the Royal Style and Titles Act 1953, the Australian Parliament gave the Queen the title Queen of Australia, and in 1973 titles with any reference to her status as Queen of the United Kingdom and Defender of the Faith as well were removed, making her Queen of Australia.
Section 61 of the Constitution provides that 'The executive power of the Commonwealth is vested in the Queen and is exercisable by the Governor‑General as the Queen's representative, and extends to the execution and maintenance of this Constitution, and of the laws of the Commonwealth'. Section 2 of the Australian Constitution provides that a Governor-General shall represent the Queen in Australia. In practice, the Governor-General carries out all the functions usually performed by a head of state, without reference to the Queen.
Under the conventions of the Westminster system the Governor-General's powers are almost always exercised on the advice of the Prime Minister or other ministers. The Governor-General retains reserve powers similar to those possessed by the Queen in the United Kingdom. These are rarely exercised, but during the Australian constitutional crisis of 1975 Governor-General Sir John Kerr used them independently of the Queen and the Prime Minister.
Australia has periodically experienced movements seeking to end the monarchy. In a 1999 referendum, the Australian people voted on a proposal to change the Constitution. The proposal would have removed references to the Queen from the Constitution and replaced the Governor-General with a President nominated by the Prime Minister, but subject to the approval of a two-thirds majority of both Houses of the Parliament. The proposal was defeated. The Australian Republican Movement continues to campaign for an end to the monarchy in Australia, opposed by Australians for Constitutional Monarchy and Australian Monarchist League.
The Federal Executive Council is a formal body which exists and meets to give legal effect to decisions made by the Cabinet, and to carry out various other functions. All Ministers are members of the Executive Council and are entitled to be styled "The Honourable", a title which they retain for life. The Governor-General usually presides at Council meetings, but in his or her absence another Minister nominated as the Vice-President of the Executive Council presides at the meeting of the Council. Since 20 December 2017, the Vice-President of the Federal Executive Council has been Senator Mathias Cormann.
There are times when the government acts in a "caretaker" capacity, principally in the period prior to and immediately following a general election.
The Cabinet of Australia is the council of senior Ministers of the Crown, responsible to the Federal Parliament. The ministers are appointed by the Governor-General, on the advice of the Prime Minister, who serve at the former's pleasure. Cabinet meetings are strictly private and occur once a week where vital issues are discussed and policy formulated. Outside the cabinet there is an outer ministry and also a number of junior ministers, called Parliamentary secretaries, responsible for a specific policy area and reporting directly to a senior Cabinet minister.
The Constitution of Australia does not recognise the Cabinet as a legal entity; it exists solely by convention. Its decisions do not in and of themselves have legal force. However, it serves as the practical expression of the Federal Executive Council, which is Australia's highest formal governmental body. In practice, the Federal Executive Council meets solely to endorse and give legal force to decisions already made by the Cabinet. All members of the Cabinet are members of the Executive Council. While the Governor-General is nominal presiding officer, he almost never attends Executive Council meetings. A senior member of the Cabinet holds the office of Vice-President of the Executive Council and acts as presiding officer of the Executive Council in the absence of the Governor-General.
Until 1956 all members of the ministry were members of the Cabinet. The growth of the ministry in the 1940s and 1950s made this increasingly impractical, and in 1956 Robert Menzies created a two-tier ministry, with only senior ministers holding Cabinet rank, also known within parliament as the front bench. This practice has been continued by all governments except the Whitlam Government.
When the non-Labor parties are in power, the Prime Minister makes all Cabinet and ministerial appointments at their own discretion, although in practice they consult with senior colleagues in making appointments. When the Liberal Party and its predecessors (the Nationalist Party and the United Australia Party) have been in coalition with the National Party or its predecessor the Country Party, the leader of the junior Coalition party has had the right to nominate their party's members of the Coalition ministry, and to be consulted by the Prime Minister on the allocation of their portfolios.
When Labor first held office under Chris Watson, Watson assumed the right to choose the members of his Cabinet. In 1907, however, the party decided that future Labor Cabinets would be elected by the members of the Parliamentary Labor Party, the Caucus, and that the Prime Minister would retain the right to allocate portfolios. This practice was followed until 2007. Between 1907 and 2007, Labor Prime Ministers exercised a predominant influence over who was elected to Labor ministries, although the leaders of the party factions also exercised considerable influence. Prior to the 2007 general election, the then Leader of the Opposition, Kevin Rudd, said that he and he alone would choose the ministry should he become Prime Minister. His party won the election and he chose the ministry, as he said he would.
The cabinet meets not only in Canberra but also in state capitals, most frequently Sydney and Melbourne. Kevin Rudd was in favour of the Cabinet meeting in other places, such as major regional cities. There are Commonwealth Parliament Offices in each State Capital, with those in Sydney located in Phillip Street.
- Attorney-General's Department
- Department of Agriculture, Water and the Environment
- Department of Defence
- Department of Education, Skills and Employment
- Department of Finance
- Department of Foreign Affairs and Trade
- Department of Health
- Department of Home Affairs
- Department of Industry, Science, Energy and Resources
- Department of Infrastructure, Transport, Regional Development and Communications
- Department of the Prime Minister and Cabinet
- Department of Social Services
- Department of the Treasury
- Department of Veterans' Affairs
As a federation, in Australia judicial power is exercised by both federal and state courts.
Federal judicial power is vested in the High Court of Australia and such other federal courts created by the Federal Parliament, including the Federal Court of Australia, the Family Court of Australia, and the Federal Circuit Court of Australia. Additionally, unlike in the United States, the federal legislature has the power to enact laws which vest federal jurisdiction in State courts. Since the Australian Constitution requires a separation of powers at the federal level, only courts may exercise federal judicial power; and conversely, non-judicial functions cannot be vested in courts.
State judicial power is exercised by each State's Supreme Court, and such other courts and tribunals created by the State Parliaments of Australia.
The High Court is the final court of appeal in Australia and has the jurisdiction to hear appeals on matters of both federal and state law. It has both original and appellate jurisdiction, the power of judicial review over laws passed by federal and State parliaments, and has jurisdiction to interpret the Constitution of Australia. Unlike in the United States, there is only one common law of Australia, rather than separate common laws for each State.
Until the passage of the Australia Act 1986, and associated legislation in the Parliament of the United Kingdom of Great Britain and Northern Ireland, some Australian cases could be referred to the British Judicial Committee of the Privy Council for final appeal. With this act, Australian law was made unequivocally sovereign, and the High Court of Australia was confirmed as the highest court of appeal. The theoretical possibility of the British Parliament enacting laws to override the Australian Constitution was also removed.
Publicly owned entities
Corporations prescribed by acts of parliament
The following corporations are prescribed by Acts of Parliament:
- Australian Broadcasting Corporation (Australian Broadcasting Corporation Act 1983)
- Special Broadcasting Service (Special Broadcasting Service Act 1991)
Government Business Enterprises
The following corporate Commonwealth entities are prescribed as Government Business Enterprises (GBEs) by section 5(1) of the Public Governance, Performance and Accountability (PGPA) Rule:
The following Commonwealth companies are prescribed as GBEs by section 5(2) of the PGPA Rule:
- ASC Pty Ltd
- Australian Rail Track Corporation Limited
- Moorebank Intermodal Company Limited (ACN 161 635 105)
- NBN Co Limited (ACN 136 533 741)
Other public non-financial corporations
- Australian federal budget
- Australian Public Service
- Referendums in Australia (and non-binding plebiscites)
- Second Morrison Ministry
Note: Prior to 1931, the junior status of dominions was shown in the fact that it was British ministers who advised the King, with dominion ministers, if they met the King at all, escorted by the constitutionally superior British minister. After 1931 all dominion ministers met the King as His ministers as of right, equal in Commonwealth status to Britain's ministers, meaning that there was no longer either a requirement for, or an acceptance of, the presence of British ministers. The first state to exercise this both symbolic and real independence was the Irish Free State. Australia and other dominions soon followed.
|
About This Chapter
Below is a sample breakdown of the Stoichiometry chapter into a 5-day school week. Based on the pace of your course, you may need to adapt the lesson plan to fit your needs.
| Day | Topics | Key Terms and Concepts Covered |
| --- | --- | --- |
| Monday | Stoichiometry: Calculating Relative Quantities in a Gas or Solution; Limiting Reactants and Calculating Excess Reactants | Working with solution stoichiometry and stoichiometric calculations; defining and identifying limiting reactants |
| Tuesday | Mole-to-Mole Ratios and Calculations of a Chemical Equation; Calculating Percent Composition and Determining Empirical Formulas | Calculating units of measurement; distinguishing between chemical and empirical formulas |
| Wednesday | Mass-to-Mass Stoichiometric Calculations; Hydrates: Determining the Chemical Formula from Empirical Data | Making conversions of mass-to-mass and mass-to-moles; writing the formulas of hydrates and anhydrates |
| Thursday | Calculating Reaction Yield and Percentage Yield from a Limiting Reactant | Learning the differences between percent, theoretical and actual yield |
| Friday | Chemical Reactions and Balancing Chemical Equations | The process of writing and balancing formula and word equations |
1. Chemical Reactions and Balancing Chemical Equations
In this lesson, you'll learn how to balance a chemical reaction equation using the conservation of matter law. You'll also learn how to write both word and formula equations, what the subscripts after a letter mean and what the numbers in front of compounds mean.
2. Chemical Reactions Lesson Plan
Chemistry lessons often come from a textbook, but this lesson plan on dissolution and chemical reactions helps teachers illustrate these ideas in a fun, engaging way. This lesson is full of experiments using household items that demonstrate chemistry topics.
3. Balancing Chemical Equations Lesson Plan
This lesson plan will help students become proficient at balancing chemical equations. Students will watch a video lesson, discuss new information, create a step-by-step guide for reference, and practice balancing chemical equations.
4. Mole-to-Mole Ratios and Calculations of a Chemical Equation
Learn what a mole ratio is and how to determine and write the mole ratio relating two substances in a chemical equation in this video lesson. Also, learn to make mole-to-mole calculations and solve problems involving moles of substances.
5. Mass-to-Mass Stoichiometric Calculations
Learn how to set up and make mole to mass, mass to mole and mass to mass stoichiometric calculations. Learn how the ratios of moles helps you compare and make calculations. Learn how to relate mole ratios to molar mass.
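The mass-to-mass recipe in this lesson is always the same three steps. A minimal sketch for the illustrative reaction 2H2 + O2 → 2H2O, using approximate molar masses:

M_H2, M_H2O = 2.016, 18.015  # g/mol, approximate molar masses

def mass_h2o_from_h2(mass_h2):
    moles_h2 = mass_h2 / M_H2        # step 1: mass -> moles of the given substance
    moles_h2o = moles_h2 * (2 / 2)   # step 2: apply the mole ratio (2 H2O per 2 H2)
    return moles_h2o * M_H2O         # step 3: moles -> mass of the wanted substance

print(mass_h2o_from_h2(4.0))  # ~35.7 g of H2O from 4.0 g of H2 (excess O2 assumed)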
6. Stoichiometry: Calculating Relative Quantities in a Gas or Solution
In this lesson, learn about molar volume and how to set up and make stoichiometric calculations with gases. Then learn about solution stoichiometry and how to make stoichiometric calculations with solutions.
7. Limiting Reactants & Calculating Excess Reactants
In this lesson, you'll learn what limiting reactant and excess reactant mean and how to determine which reactant is limiting in a chemical reaction when given the amount of each reactant. You'll also discover how to calculate the amount of product produced.
8. Calculating Reaction Yield and Percentage Yield from a Limiting Reactant
Learn what the theoretical yield, actual yield and percent yield are. Given the limiting reactant, learn how to calculate the theoretical reaction yield, which is also known as the ideal reaction yield and percentage yield.
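A minimal sketch of the same bookkeeping, again for the illustrative reaction 2H2 + O2 → 2H2O:

def water_yield(actual_g, moles_h2, moles_o2):
    # Limiting reactant: divide each amount by its coefficient and take the minimum
    runs = min(moles_h2 / 2, moles_o2 / 1)
    theoretical_g = runs * 2 * 18.015  # 2 mol H2O per 'run', 18.015 g/mol
    return theoretical_g, 100 * actual_g / theoretical_g

theoretical, percent = water_yield(actual_g=30.0, moles_h2=4.0, moles_o2=1.5)
print(round(theoretical, 1), round(percent, 1))  # 54.0 g theoretical, ~55.5% yield (O2 limits)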
9. Calculating Percent Composition and Determining Empirical Formulas
Learn the difference between the empirical formula and chemical formula. Learn how to calculate the percent composition of an element in a compound. Learn how, if given a percent composition, to determine the empirical formula for a compound.
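Percent composition itself is a short calculation. A minimal sketch for water, with approximate atomic masses:

ATOMIC_MASS = {"H": 1.008, "O": 16.00}  # g/mol, approximate

def percent_composition(formula_counts):
    # formula_counts maps element -> number of atoms, e.g. {'H': 2, 'O': 1} for H2O
    molar_mass = sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())
    return {el: 100 * ATOMIC_MASS[el] * n / molar_mass
            for el, n in formula_counts.items()}

print(percent_composition({"H": 2, "O": 1}))  # H is ~11.2% and O ~88.8% of H2O by mass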
10. Hydrates: Determining the Chemical Formula From Empirical Data
Learn the definition of a hydrate and an anhydrate in this lesson. Discover how, when given experimental data, you can determine the formula of a hydrate by following simple steps that include finding the moles of hydrate and anhydrate and comparing the two to write the formula.
Earning College Credit
Did you know… We have over 200 college courses that prepare you to earn credit by exam that is accepted by over 1,500 colleges and universities. You can test out of the first two years of college and save thousands off your degree. Anyone can earn credit-by-exam regardless of age or education level.
To learn more, visit our Earning Credit Page
Transferring credit to the school of your choice
Not sure what college you want to attend yet? Study.com has thousands of articles about every imaginable degree, area of study and career path that can help you find the school that's right for you.
Other chapters within the High School Chemistry Syllabus Resource & Lesson Plans course
- Introduction to Chemistry Lesson Plans
- Unit Conversion Lesson Plans
- Experimental Laboratory Chemistry Lesson Plans
- Properties of Matter Lesson Plans
- Atomic Structure Lesson Plans
- The Periodic Table Lesson Plans
- The Representative Elements of the Periodic Table Lesson Plans
- Nuclear Chemistry Lesson Plans
- Chemical Bonding Lesson Plans
- Phase Changes for Liquids & Solids Lesson Plans
- Gases in Chemistry Lesson Plans
- Solutions in Chemistry Lesson Plans
- Acids & Bases Lesson Plans
- Equilibrium Lesson Plans
- Chemistry Kinetics Lesson Plans
- Thermodynamics in Chemistry Lesson Plans
- Organic Chemistry Basics Lesson Plans
- Nucleic Acids Lesson Plans
- DNA Replication Lesson Plans
|
Read Time: 5 min 29 sec
Python is a high-level programming language that is simple to learn and widely used for web development, data analysis, artificial intelligence, and many other applications. Its simplicity is a big part of why it is so widely used. This tutorial covers the basic syntax of Python and how to write your first Python program.
Basic Syntax in Python:
# This is a comment
print("Hello, Let's begin your Journey for becoming a Data Scientist!")
mathlete = 5  # variable
deepdiver = "Andrew Ng"  # variable
print(mathlete)
print(deepdiver)
There are several built-in data types in Python, including:
mathmogul = 5  # integer
numeralninja = 2.5  # float
brainbooster = "AlmaBetter"  # string
boolean = True  # boolean
print(type(mathmogul))
print(type(numeralninja))
print(type(brainbooster))
print(type(boolean))
print(5 + 6)   # addition
print(4 - 3)   # subtraction
print(6 * 5)   # multiplication
print(6 / 5)   # division
print(7 % 5)   # modulus
print(5 ** 3)  # exponentiation
print(4 == 2)  # equal to
print(4 != 2)  # not equal to
print(6 > 4)   # greater than
print(5 < 2)   # less than
print(6 >= 6)  # greater than or equal to
print(3 <= 5)  # less than or equal to
print(3 > 2 and 8 < 10)  # logical and
print(4 > 6 or 9 < 10)   # logical or
print(not(3 > 4))        # logical not
Steps for Python Programming
👉 Step 1️⃣: Setting up Google Colab To get started, go to the Google Colab website at 🔗 https://colab.research.google.com/. You will need to sign in with your Google account. Once you are signed in, you can create a new notebook by clicking "New Notebook", or open an existing notebook from your Google Drive in which you want to write code.
👉 Step 2️⃣: Writing your first Python program When the new notebook opens, one empty code cell is already present. In that first cell, type the following code:
print("Hello, Future Data Scientist!") coursework=input("Write text which you want to show to the user in output")
This is a very simple program that prints the phrase "Hello, Future Data Scientist!" to the console and then reads a line typed by the user. Python has a built-in function called print() that lets you output text to the console, while input() lets you read input from the console. These are the two functions used for standard input and output in Python.
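Putting the two functions together, here is a short variation you can try in the next cell (the prompt and greeting text are just examples):

# Read a value from the user, then echo it back in a formatted message
name = input("What is your name? ")
print("Hello, " + name + "! Welcome to Python.")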
👉 Step 3️⃣: Running your program
To run your program, simply click on the "play" button located on the left side of the cell or press "Shift+Enter" on your keyboard. You should see the output "Hello, Future Data Scientist!" printed below the cell, followed by a prompt waiting for the input() call.
👉 Step 4️⃣:Saving your notebook
To save your notebook, click on "File" in the menu bar and then select "Save" or use the shortcut "Ctrl+S" on Windows or "Cmd+S" on Mac. You can also save your notebook to your Google Drive by clicking on "File" and then "Save a copy in Drive".
👉 Step 5️⃣: Sharing your notebook
If you want to share your notebook with others, you can click on "Share" located in the upper-right corner of the notebook. This will allow you to share your notebook via a link or add collaborators who can view or edit your notebook.
Writing your first Python program in Google Colab is a simple and easy process. With Google Colab, you can write, run, and share your Python code without having to install any software on your computer. The basic concepts covered here (comments, variables, data types, and input and output) are most of what you need to write your first program and get started.
Quiz recap:
1. What is Python? Answer: b: A high-level programming language
2. What are comments? Answer: b: Notes for yourself or other developers that are ignored during program execution
3. What are variables used for? Answer: a: Storing values
4. Which of these are built-in Python data types? Answer: a: Numbers, strings, lists, tuples, sets, dictionaries, booleans
5. What are operators? Answer: c: Used to perform operations on variables and values
|
So, this is the year when you have to switch over to the new Common Core Standards for high school geometry, but you don't quite know what that means. Or perhaps you know the new standards but haven't had time to re-align your course. I have your answer. I've created a set of PowerPoints and Word worksheets, aligned to the Common Core, that will take you from the first day of the course to the last.
This is Chapter 5 of my Common Core-aligned course in geometry. The topic is inequality. Students prove the triangle inequalities and apply those results. They also prove that the shortest path from a point to a line, and between parallel lines, is the length of the perpendicular which connects them. Below are descriptions of the chapter's six sections.
5.1 Inequalities in One Triangle. The section begins with a review of the Triangle Exterior Angle Inequality. It is the fundamental inequality, the one used to prove all the rest. Students then prove both the Triangle Angle-Side Inequality (the greater side lies opposite the greater angle) and the Triangle Side-Angle Inequality (the greater angle lies opposite the greater side).
5.2 Applications. Students apply the three inequalities proven in the previous section - the Triangle Exterior Angle Inequality, the Triangle Side-Angle Inequality and the Triangle Angle-Side Inequality.
5.3 The Triangle Inequality. Students are guided through a proof of the Triangle Inequality. The proof is followed by a set of applications. Given three positive quantities, students are asked to determine whether they could represent the side lengths of a triangle. Given two positive quantities, students are asked to find the range of possible values for the third side.
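For readers who want to check answers to these exercises quickly, here is a minimal Python sketch of both tests (it is not part of the chapter materials):

def can_form_triangle(a, b, c):
    # Triangle Inequality: each side must be less than the sum of the other two
    return a + b > c and a + c > b and b + c > a

def third_side_range(a, b):
    # Given two sides, the third side x must satisfy |a - b| < x < a + b
    return abs(a - b), a + b

print(can_form_triangle(3, 4, 8))  # False: 3 + 4 is not greater than 8
print(third_side_range(3, 4))      # (1, 7): the third side lies strictly between 1 and 7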
5.4 Distance. It is proven that the distance from point to line is the length of the perpendicular from one to the other. It is also proven that the distance between a pair of parallel lines is the length of the perpendicular between them. The section ends with a discussion of altitudes in a triangle. Three possibilities are distinguished: an altitude internal to the triangle, an altitude coincident with a side, and an altitude external to the triangle.
5.5 The Hinge Theorem and its Converse. Students draw upon the inequalities of 5.1 to prove the Hinge Theorem (or SAS Triangle Inequality) and its Converse (or SSS Triangle Inequality).
5.6 Applications. The chapter ends with a set of problems in which the Hinge and Converse Hinge are applied.
For each section there is both a PowerPoint and a worksheet. The worksheets give ample practice in the day's topic. Answers are included.
The worksheets are appropriate for both Honors and non-Honors classes. Questions marked H are intended for Honors only. Worksheets include answers to selected questions.
Included is a challenge problem set, appropriate for an Honors-level class. It is intended to be given out on the first day of the chapter and taken up on the last day.
The Preview is a selection of PowerPoints and worksheets from the chapter.
A description of the course can be found among my downloads. The title is "Course Contents with Commentary".
|