by Loretta Hall

When Europeans arrived on the North American continent, the Creek Indians occupied major portions of what are now the states of Alabama and Georgia. James Adair, a trader who dealt with the Creeks for three decades, described them in 1770 as the most powerful Indian nation known to the English. They were actually not so much a nation as a confederacy that welcomed new member tribes, even those of a different linguistic and cultural background. Those who joined blended their own traditions into the basic Creek governmental and social structure.

In the early 1830s, the Creek population was about 22,000. Forced relocation to Indian Territory in what is now Oklahoma took a terrible toll, and by 1839 the population had decreased to 13,500. The Civil War further decimated the Creek people, reducing the number to 10,000 by 1867. In 1990 their population of 43,550 placed them tenth among Native American tribes.

The Creek Indians called themselves Muscogees, or Muscogulges, names that in their language identified them as people living on land that was wet or prone to flooding. During the American colonial period, they received their modern name from English traders who noted that their towns always sat on the banks of picturesque creeks.

Tribal legend traces Creek ancestry to the sky, where the ancestors lived in spirit form before descending to earth as physical beings. They originally lived in the West; their oral tradition tells of a journey toward the sunrise, crossing mountains so large they were called the "backbone of the world," traversing a wide, muddy river, and conquering their new homeland. Settling in the East, the ancestral Creeks separated into two groups. Settlements along the Coosa, Tallapoosa, and Alabama Rivers became known as the Upper Towns, while communities along the Flint, Chattahoochee, and Ocmulgee Rivers were called the Lower Towns. This partition was merely geographical at first, but as interaction with European colonists developed, the Lower Towns proved more accessible to foreign influence. The Upper Towns tended to retain more traditional political and social characteristics.

Annual spring floods provided the Creeks with favorable agricultural conditions. They cultivated a variety of crops and gathered wild fruits, roots, and herbs. Grass and the inner bark of trees provided material for making the shawls with which the women clothed themselves. The Creeks were also skillful hunters, depending on animals for both meat and clothing. Although the Creeks had contact with non-Indians as early as 1540 as a result of Spanish explorer Hernando de Soto's expedition, regular interactions did not begin until the late 1600s, when the English moved into South Carolina and the Spanish settled in Florida.

RELATIONS WITH NON-INDIANS

"To other Indians the Creeks offered war or friendship with proud indifference," wrote Angie Debo in The Road to Disappearance (1940). "To the whites they showed a sturdy sense of equality and independence, tempered by a genuine appreciation of European goods." The Creeks were reputed to be a hospitable people skilled in diplomacy. They traded actively with all of the European colonies, though they generally preferred to deal with the English, who offered a greater variety and better quality of goods, as well as lower prices and better credit terms, than the Spanish or the French. In fact, the Creeks allied themselves with the English in 1702, fighting in Queen Anne's War against the French and Spanish.
In 1734 a Creek delegation led by Chief Tomochichi traveled to England to see King George II and sign a treaty. Intermarriage between Creek women and foreign trading partners was common. Creek wives acted as interpreters and taught their European husbands the language and customs of their people. Because they understood both the Indian and white cultures, many of the multiracial children of these marriages became tribal leaders as adults. One such Métis (mixed-blood) leader was Alexander McGillivray, the son of a Creek/French mother and a Scottish father. He became chief of the dominant Wind Clan in the late 1700s, and for two decades he worked to unify the Creek nation as an ally of the new United States of America.

The Creeks had traditionally welcomed all non-Indians in a spirit of equality, but they did come to accept the concept of black slavery as an economic practicality. Because captured enemy Indians had sometimes become Creek slaves, the practice was not without precedent. European colonials encouraged the Creeks to think of blacks as slaves in order to prevent runaways from seeking refuge within Creek towns. Furthermore, expert Creek hunters were often paid to track and capture runaway slaves.

Immediately after the Revolutionary War, the United States began trying to expand onto Indian homelands, and by 1840 virtually all of the Creeks had been relocated to Indian Territory in what is now east-central Oklahoma. In an attempt to maintain their traditional identity in their new surroundings, they reestablished their former towns: the Upper Creeks settled along the Deep Fork, North Canadian, and Canadian Rivers, while the Lower Creeks located their towns farther to the north along the Arkansas and Verdigris Rivers. The city of Tulsa evolved from a Creek relocation settlement built on sacred ashes brought from the old eastern town of Talsi.

In addition to the job availability and training issues that confront all Americans, Creeks face the problem of tribal economic independence and the struggle to retain their cultural identity. The Muscogee (Creek) Nation of Oklahoma actively seeks to assume and assert the rights and responsibilities of a sovereign nation through the retention of existing tribal lands, the acquisition of additional land, and improved access to significant places outside tribal lands.

Acculturation and Assimilation

Rebuilding their towns in Oklahoma meant much more to the Creeks than simply erecting buildings. The full meaning of the word idalwa is diluted when the English word "town" is substituted. An idalwa had the autonomy of a Greek city-state and was the primary cultural unit of Creek society. Each town had its own traditions and its own versions of ceremonies, and the Creeks drew more of their identity from the town than from familial relationships. A child was considered a member of the town of his or her mother.

The town square, the heart of the Creek community, was used for warm-weather council meetings, dances, and rituals. The square was an open space defined by four rectangular structures, each with one open side that faced the square. A ceremonial fire was kept burning in the center of the open space. Adjacent to the square were two other important facilities: the chokofa, or rotunda, and the chunkey yard. The chokofa was a circular structure about 40 feet in diameter that served as a meeting place for the town council during the winter. It was also used for social gatherings where the entire town could enjoy singing and dancing during inclement weather.
The chunkey yard was a field two to three hundred yards long that was recessed into the ground so that spectators could sit on the surrounding banks. On it was played a ball game that resembled lacrosse. The game was an important part of Creek culture, offering recreation when played among the town members or against a team from a friendly town. Known as "brother to war," it also provided a forum for settling disputes between unfriendly or enemy towns.

In addition to the partitioning of the nation into Upper and Lower communities, the confederacy's fifty towns were divided into two categories, based on descent. Each group was known as a moiety. Red, or War, towns took the lead in declaring and conducting war operations; councils addressing topics of diplomacy and foreign relations would meet in one of these towns. White, or Peace, towns were cities of refuge; councils seeking to establish peace or enact laws governing internal affairs of the Creek nation met in these towns. The moiety of each town was easily identifiable, as its color was painted on buildings and ceremonial articles and was used as body decoration by its people. There was an atmosphere of camaraderie among towns of the same moiety, and definite rivalry between towns of opposite natures.

TRANSFORMATION OF CULTURE

The Creeks were one of the so-called Five Civilized Tribes, along with the Seminoles (who were actually affiliated with the Creek Confederacy until they formed a separate government in 1856), Cherokees, Chickasaws, and Choctaws. The title derived from the fact that these tribes began to assimilate European ways from the earliest phases of contact. The Creeks eagerly traded deerskins for brightly colored cotton cloth. They used their hunting skills to obtain metal tools. Included among these tools were guns, which transformed their methods of hunting, making them increasingly reliant on continual trading. While the acquisition of new goods improved their lifestyle, it also eroded their traditional self-sufficiency.

The Creeks voluntarily modified their way of life in response to interaction with white traders, but the American government went one step further, undertaking an official effort to assimilate them completely into white culture. An early phase of this process involved the 1796 appointment of Benjamin Hawkins as principal agent to the tribe. Hawkins believed that the Creek people would benefit from being taught and equipped to adopt white culture. He devoted the last twenty years of his life to this effort, encouraging the women to become skilled at making cotton cloth and the men to adopt modern farming techniques. White Americans in the eighteenth century had little appreciation for Indian cultures, assuming that the Indians would prefer white culture if they could be induced to learn about it. Hawkins was uncomfortable with the idea that the Indians might not want to abandon their own traditions to embrace the white way of life. In fact, some Creeks did want to keep their culture intact, but others thought it would be better for them to adopt the culture of the European settlers. In a March 1992 Progressive essay, Creek author Joy Harjo recalled her great-grandfather, Marsie Harjo, a Creek Baptist minister: "He represents a counterforce to traditional Muscogee culture and embodies a side of the split in our tribe since Christianity, since the people were influenced by the values of European culture. The dividing lines are the same several hundred years later."
Like other Native American groups, the Creeks still encounter a mainstream culture that generally lacks understanding and appreciation for their values. For example, Creeks traditionally shared their possessions readily and relied mainly on current food supplies. These basic inclinations conflict with prevailing American values of acquisition and saving for the future. Such differences in values can cause difficulties when Indians attend white schools. In 1988, Native American students had a dropout rate of 35.5 percent, and they remain significantly overrepresented in special education programs. Among teenagers, Indians have the highest suicide rate of any minority group.

The Indians' attitude toward land ownership was another cultural difference that profoundly affected federal acculturation efforts. The Creeks viewed land as belonging to the community; the Dawes Act of 1887 stripped the tribe of all common land and apportioned it to individuals for private ownership. As Harjo wrote in the March 1992 issue of the Progressive, "This act undermined one of the principles that had always kept the people together."

With continued attacks on their lifestyle, many Creeks found ways to adapt their traditional ways to the new societal context. Christian missionaries had worked among them since 1735, and by the time the tribe moved to Oklahoma, many Creeks belonged to Baptist, Methodist, or Presbyterian churches. Under governmental pressure to abandon the tribal town structure, they simply shifted their community's center from the square to the church. Each congregation chose from among its members a preacher who would serve for life; it was a natural substitution for the town micco, or chief.

MISCONCEPTIONS AND STEREOTYPES

"From the minds of the earliest English colonists . . . who because of their own reverence for the institution of private property expected violent opposition to their intrusion, came the image of the Indian as an uncooperative, hostile, savage, treacherous, murderous creature," wrote historian Florette Henri in The Southern Indians and Benjamin Hawkins: 1796-1816, "and the Indians' disinclination to destroy the handful of colonists, but rather to shelter, feed, and aid them, was interpreted as proof of their guile." The Creeks have been victims of the general prejudices that are directed at all Native Americans. In contrast to the stereotype of the reserved, stoic Indian, however, Creeks respected impassioned public speakers, and lengthy oration was common at council meetings.

The Creeks' introduction to liquor caused both real and perceived damage to their society. Traditionally, they drank only water, even at feasts. What intoxicating drinks they did concoct were generally used only at rare ceremonial rites. Having no tradition of social sanctions against drunkenness, many Indians imbibed freely. The whites with whom they interacted also tended to get drunk. Henri discussed the different perceptions of this activity: "As a rule, it was Indian drinking that was stressed, and when both white and Indian drinking were mentioned, different terms were used for them. When Indians drank excessively, they were said to become noisy, rude, insolent, and violent; but when the garrison got drunk, gouging eyes and biting noses, Price [Hawkins' friend who managed a government trading post in Georgia] characterized the brawl as a 'drunken frolic'."

In 1937 University of Oklahoma professor Morris E. Opler wrote in an unpublished report that many people found it incongruous that Indians who belonged to one of the Five "Civilized" Tribes would want to retain any of their old ways. He further observed that "So far as the whites of Creek country (in Oklahoma) are concerned they have no intention of accepting the Creeks into the main stream of their social and political life." For their part, the Creeks kept to themselves, interacting with the whites only as necessary for trade.

CUISINE

Corn was the staple food of the Creeks. Two yearly crops of early corn were eaten as they ripened, and a harvest of late corn was dried and stored for winter use as hominy. Each family compound contained a large wooden mortar and pestle used to process corn into meal or grits after it had been hulled by cooking with lye or mixing with ashes. The cornmeal was then cooked with lye and water, and the gruel was left to sour for two or three days. The resulting soup was called sofkey, and it was such a basic part of the diet that each household kept a bowlful at the door so visitors could partake as they entered.

Corn was used in other ways as well. Burned shells of the field pea were mixed with cornmeal to make blue dumplings. Apuske, a drink, was made by sweetening a mixture of parched cornmeal and water. Sweet potatoes, pumpkins, peaches, and apples were eaten fresh or dried for storage. The Creeks also commonly ate vegetable stews, either with or without meat. After relocation to Oklahoma, salt was available from a natural creek-side deposit. Hickory nuts were used both as a cooking ingredient and as a source of oil. Bear fat was prized as a seasoning.

Creek diets included deer, wild hog, turkey, and smaller game such as opossum and squirrel. Beef, venison, and bison meat could be smoked for storage or cut into strips and dried. Meat and fish might be cooked by boiling or roasting. The Creeks employed several methods for catching fish, including nets, traps, and spears. During the summer, the population of an entire town gathered at a favorable spot where a stream could be dammed or fenced to trap fish. Appropriate roots were prepared and thrown into the water to drug the fish; as they floated to the surface, the men showed their marksmanship by shooting them with bows and arrows. The women then cooked sofkey and fried the fish for a feast.

Traditional clothing for men consisted of a breechcloth, deerskin leggings, a shirt, and, in winter, moccasins. Women wore shawls and deerskin skirts. Children generally went unclothed until puberty. During the winter, additional warmth was provided by bear skins and buffalo hides. Both men and women wore their hair long. The men plucked their facial hair and also removed hair around their heads, leaving a long central lock that they braided with decorative feathers, shells, and strings. Sometimes they made turbans from strips of deerskin or cloth. The women, whose hair might reach to their calves, wound it about their heads, fastening it with silver jewelry and adorning it with colorful streamers. The men used extensive tattooing to decorate their trunks, arms, and legs. The indigo designs included natural objects, animals, abstract scrollwork, and even hunting and battle scenes. Both men and women employed body paint and wore earrings and other jewelry.

Trade with Europeans brought colorful woven fabrics to the Creek people. They quickly incorporated these into their customary fashions and began to decorate clothing and moccasins with trade beads.
The women liked to wear clothing fashioned from calico and other printed cloth, and silk ribbons became popular hair ornaments. Creek women also bought the scrap threads of scarlet cloth that traders cleaned out of the bottoms of their packs; they boiled the threads to remove the dye, which they then added to berry juice and used to color other cloth.

THE GREEN CORN FESTIVAL

The major annual holiday was the Green Corn Festival, which celebrated the beginning of the corn harvest in late July or early August. Depending on the size of the town, the festival lasted from four to eight days. It involved a number of traditions, including dancing and moral lectures given by town leaders. To prepare for the festival, the entire town was cleaned, and the square was refurbished with fresh sand and new mats for its buildings. Women made new clothing for their families, as well as new pottery and other household furnishings. The town piled old clothing and furnishings together with the collected rubbish and burned them, along with all remaining food supplies that had been stored from the previous year. All fires in the town were extinguished, and a new fire was started in the town square by the ancient method of rubbing sticks together. Each family carried some of this new fire home to relight its household fire.

The festival was also called the busk, especially among whites; the name derived from the Creek word boosketah, meaning a fast. The men cleansed themselves with ceremonial bathing and by fasting and drinking a strong emetic potion they called "medicine." The beverage, which Europeans called the "black drink," was also used on other occasions, but it was a central element of the Green Corn Festival. As time passed, women were allowed to join in the festival dancing; by the late 1800s they occasionally partook of the "medicine." At the end of the festival, when spiritual appreciation had been given for the new crop, the people joined in a feast.

Inspired by the ripening of the new corn, the festival was a time of renewal and forgiveness. Drinking the "medicine" purged the body physically and purified it from sin. A general amnesty was conferred for all offenses committed in the past year, with the exception of murder. If a guilty person was able to hide between the time a crime was committed and the time of the Green Corn Festival, he or she would escape punishment entirely. The festival marked the beginning of the new year, and as such it became the official date for such events as marriages, divorces, and periods of mourning. It was also the occasion for young men's initiation rites.

According to traditional beliefs, illness was the result of an animal spirit or a conjurer placing some foreign substance in the victim's body. An owala, or shaman, would effect a cure by concocting an appropriate medicine out of roots, herbs, and other natural substances. While brewing the potion, he would sing appropriate songs and blow into the mixture through a tube. The afflicted person would take the medicine internally and also apply it externally.

After establishing contact with the Europeans, the Creeks were affected by periodic outbreaks of smallpox, measles, and other imported diseases; the number of fatalities went undocumented. During removal to Indian Territory, emigrating Creeks were subjected to difficult traveling conditions, including exposure to weather extremes.
Overcrowded conditions on boats during waterborne portions of the journey, coupled with dietary changes and unclean drinking water from the Mississippi River, left the travelers vulnerable to illness. Maladies such as dysentery, diarrhea, and cholera contributed to the many casualties en route.

Health problems did not end with arrival in Indian Territory. Streams behaved differently in the West than they did in the East; unexpected flooding destroyed new homes and crops, while in dry spells the streams turned into breeding grounds for mosquitos, and many Creeks fell victim to malaria. During western winters, periods of mild days alternated with sudden bouts of extremely cold weather; Creek shelters and clothing were inadequate for this climate, and many people perished from pneumonia. During the first year in Indian Territory, 3,500 Creeks died of disease or starvation.

Even in the 1990s, health care furnished through the Indian Health Service has often been inadequate. The Muscogee Nation manages its own hospital to better serve its people. Creeks experience a relatively high incidence of diabetes, which may be related to the poor economic conditions they have endured in modern times; alcoholism may also play a role.

Language

Most Creeks spoke dialects of the Muskogean language. In Deerskins & Duffels: The Creek Indian Trade with Anglo-America, 1685-1815, Kathryn E. Braund has asserted that "it was still the English who were forced to learn the melodious Muskogee tongue, for few Creeks expressed any willingness to adopt the harsh and strident tones of their new friends." Creeks who avoided relocation to Oklahoma tended to stop speaking the Muskogean language so they would not be recognized as Indians and thereby forced to leave their homes. In 1910, 72 percent of Creeks over the age of ten could speak English. By 1980, 99 percent of Creek adults could speak English well; 15 percent of them still spoke their native language at home.

With the help of a Creek student named James Perryman, Presbyterian minister John Fleming created a phonetic alphabet for Muskogee. In 1835 they published a book of hymns and a primer called I stutsi in Naktsokv (The Child's Book). Another missionary published a Creek dictionary and grammar book in 1890. The language's vowels and their sounds are: "v" (as the vowel sound in but), "a" (as in sod), "e" (as in tin), "o" (as in toad), "u" (as in put), and "i" (as in hate). Most consonants are pronounced as in English, except that "c" sounds like "ts" or "ch," while "r" sounds like "hl" (made by blowing while pronouncing an "l").

GREETINGS AND OTHER POPULAR EXPRESSIONS

Some of the basic words of the Creek language are Hes'ci ("hihs-jay")—hello; henk'a ("hihn-gah")—yes; hek'us ("hihg-oos")—no; and Mvto' ("muh-doh")—thank you.

Family and Community Dynamics

CLAN AND FAMILY STRUCTURE

Creek society was based on a clan system, with each person's identity determined by the clan of his or her mother. Clan membership governed social interactions, ranging from whom members could joke with to whom they could marry (marriage within one's clan was considered incest). Each town included members from about six clans.

The family home was actually a collection of several rectangular buildings constructed of a framework of wood poles, with walls of mud and straw plaster and a roof of cypress bark shingles. These buildings were arranged in a smaller version of the town square, with a courtyard in the center.
One building was used for cooking and eating, one for sleeping in winter (sleeping and eating were done outdoors in warm weather), and one for storing food supplies. Another building was provided for women's retreats, used during menstruation as well as for a four-month period at childbirth. Each homesite included a small garden plot where the women of the family raised some vegetables and tobacco.

The town maintained a large field of fertile land for farming, with a section reserved for each family. The townspeople worked together on the entire field, and at harvest time each family gathered the produce from its section. All were expected to contribute to a communal stockpile that would be used to feed visitors and needy families in the town.

Traditionally, Creeks buried the dead under the earthen floor of the home, though by the late 1800s it was more common to bury them in the churchyard or in a family cemetery near the home. A widower was expected to mourn his dead wife for four months, during which time he would not bathe, wash his clothes, or comb his hair. The same mourning practices were required of a widow; she, however, was obligated to mourn for four years. The period of mourning for a widow could be decreased by the dead husband's clan if they so chose. Often, after the mourning period, the widow would marry a brother of her deceased husband.

Although marriages could be arranged by clan leaders, they were usually initiated by the prospective husband, who solicited the permission of the woman's family. During courtship, the man might woo the woman by playing plaintive melodies on a flute made either of hardwood or a reed. Sexual activity before marriage was allowed, and it was not unusual for travelers to hire Creek women as bed companions. Once a marriage became final, however, adultery was not tolerated. Punishment was harsh, including severe beatings and cutting off the hair, ears, and sometimes noses of both offenders. A woman committing adultery was rejected by her husband and children, but she could marry her lover.

When a couple married, the husband went to live with his wife in the home of her parents. The marriage was finalized only after the husband had built his wife a home and proven his ability to support her by planting and harvesting a crop and successfully hunting game. During the trial period of the marriage, the couple could decide to separate, and infidelity would not be punished. With the permission of his wife, a man could take a second wife, for whom he provided a separate home. Divorce was allowed but rarely occurred in families with children; when it did, the woman retained the children and the family possessions.

The father fasted for four days after the birth of his child, and he maintained an interest in his family. Raising the child, however, was primarily the responsibility of the mother and the leader of her clan. Babies spent their first year secured to cradle boards; boys were wrapped in cougar skins, while girls were covered with deerskins or bison hides. A daughter was called by a kinship term or named after some object or natural occurrence associated with her birth. A son was called by the name of his totem, such as bird or snake; as he grew, he might be given a nickname based on some personality trait. At the age of puberty, a boy was initiated into adulthood in his town and was given an actual name.
His first name, which served as a surname, was that of his town or clan, while his second, or personal, name was descriptive of something about him.

Creek girls learned from their mothers and maternal aunts the skills they would need as adults. Boys were instructed primarily by their maternal uncles, though they also felt their father's influence. Christian missionary schools established in 1822 were the first to formally educate Creeks in American culture; a few earlier attempts at founding schools had been unsuccessful. By the late twentieth century, Creek students generally attended public schools, with a few attending boarding schools. The 1980 census found that 65 percent of Creek adults were high school graduates and 11 percent were college graduates. A branch of Oklahoma State University at Okmulgee serves the Creek community in Oklahoma. The Poarch Creek Tribe in Alabama has an education department and offers on-the-job training through a Job Training Partnership Act (JTPA) program.

Religion

The traditional Creek religion revered Esaugetu Emissee (Master of Breath) as the supreme being. He was believed to live in an upper realm that had the sky as its floor. The sun, moon, and planets were seen as messengers to this deity. The Creeks also worshiped animal spirits. The Green Corn Festival was the principal religious celebration.

Although many Creek myths have been lost to history, some were documented by Frank G. Speck in 1904 and 1905. He reported that the myths told of animal spirits in the sky world who were responsible for the earth's origin. Master of Breath then placed his own innovations on creation, making the earth as it is now. Speck wrote in Memoirs of the American Anthropological Association: "The Creeks assert that they were made from the red earth of the old Creek nation. The whites were made from the foam of the sea. That is why they think the Indian is firm, and the white man is restless and fickle."

Each Creek town kept certain sacred objects. The most famous were copper and brass plates held by the town of Tuckabatchee. The five copper plates were oblong, the largest being about 18 inches by seven inches. The two brass plates were circular, the larger being 18 inches in diameter, and one was stamped with the mark Æ. Although one legend indicated that the objects had been given to them by the Shawnee, who may have obtained them from the Spanish, the plates were widely believed to have been bestowed on the Creeks by the Master of Breath.

Contact with European cultures brought a succession of missionaries to the Creek people. Gradually, many of the people began to espouse Christianity. They continued to observe the Green Corn Festival, although those who had become Baptist or Methodist no longer participated in ceremonial dancing. With this decrease in participation, the festival began to lose its former significance, and it deteriorated into little more than a wild party.

Christianity became dominant among the Creeks after the removal to Oklahoma. Although some missionaries continued to work among them, most Creek churches were led by preachers who emerged from within the community. As Debo described it: "The Creeks had found in Christianity a means of expressing the strong community ties, the moral aspiration, the mystic communion with nature, the deep sense of reverence that had once been expressed by the native ceremonials."

Employment and Economic Traditions

The early Creeks enjoyed a comfortable living based on agriculture and hunting.
Their homeland was fertile and game was plentiful. With the emergence of European contacts, the Creek hunting industry changed from a subsistence operation to a commercial enterprise. Trade expanded, and they began to sell not only venison, hides, and furs, but also honey, beeswax, hickory nut oil, and other produce. They also found markets for manufactured goods, including baskets, pottery, and decorated deerskins.

As white settlers continued to move into Creek territory, the Indians were crowded into progressively smaller land areas. This process began in 1733, when the Creeks ceded two million acres of land to the new colony of Georgia so it could be sold to satisfy debts to British traders. In order to attract additional colonists, the land was sold at bargain prices. An extensive series of other land cessions followed, and eventually the Creek economy collapsed. According to Indians of the Lower South: Past and Present, in 1833 Lieutenant Colonel John Abert wrote to the United States Secretary of War that during the preceding three years the Creek people had gone "from a general state of comparative plenty to that of unqualified wretchedness and want."

The Removal Treaty of 1832 gave land to Creeks who chose to emigrate to Indian Territory in exchange for tribal lands in Alabama. To encourage the Indians to move westward, the government also promised a variety of benefits, including a cash payment of $210,000, to be distributed according to tribal laws over a fifteen-year period; two blacksmith shops in the new territory; an educational annuity; and another cash payment of $100,000 to help the Creeks settle their debts and ease their economic hardship. In addition, each warrior would receive a rifle, ammunition, and a blanket; families' expenses would be paid during the migration and throughout the first year in the West.

Some full-blooded Creeks still farm land in the area of Oklahoma that was settled by the Upper Creeks. The Muscogee Nation operates a bingo hall and stores that sell tobacco products. Broadening its economic development efforts is a high priority for the tribe. Many of the mixed-blood Creeks live in Tulsa, Eufaula, or other Oklahoma cities, working in a variety of occupations. Census data from 1980 indicate that about two-thirds of the Creek Indians were living in urban settings at that time.

At the time of Indian removal, a segment of the Creek people entered into an agreement with the government that enabled them to remain in the East. They were business people who operated ferries, served as guides and interpreters, and raised cattle. Their descendants are the Poarch Creeks, whose tribal headquarters are located in Atmore, Alabama. During the early 1900s, some Poarch Creeks began to work in the timber and turpentine industries. Some also became tenant farmers or worked as hired farm laborers. Beginning in the 1930s, the pulpwood industry became an important element in the Poarch Creek economy. Since the 1950s, Poarch Creeks have been working in other non-agricultural jobs.

According to 1980 statistics, 61 percent of Creeks over the age of 16 were in the labor force. Of those who were employed, 19 percent were in managerial or professional specialty occupations, and 26 percent were in technical, sales, and administrative support occupations.
Looking at major industry groups, approximately six percent worked in agriculture, forestry, fisheries, and mining; nine percent worked in public administration; 12 percent worked in retail trade; 19 percent were involved in manufacturing; and 22 percent worked in professional and related services, including health and education.

Politics and Government

Throughout their history, the Creeks governed themselves democratically. Each town elected a chief who served for life, though he could be recalled. Members of each town were informed about issues and participated actively in decision making. Town leaders met in daily council sessions, and when broader councils were called, each town sent several representatives to speak and vote on its behalf. Although there was no specific law fixing a penalty for misrepresenting constituents, leaders who did so faced severe consequences; for example, after signing a 1783 treaty that ceded good hunting grounds to Georgia, a chief returned home to find his house burned and his crops destroyed.

The society was matrilineal, but most positions of tribal leadership were filled by men. While women did not vote, they did enjoy full economic rights, including property ownership, and they exerted significant influence on decisions by discussing their opinions with the men of the town. Each town may also have appointed a Beloved Woman who communicated with her counterparts in other towns. The roles of the Beloved Woman and perhaps other female leaders have been lost to history, since European observers ignored them and omitted them from written accounts.

The Creeks supported the British in the American Revolutionary War. In 1790, a delegation of Creek leaders traveled to New York to negotiate a treaty with President Washington. It was the first in a long series of treaties that ceded tribal land to the United States; with each cession, the tribe was guaranteed unending ownership of its remaining land. In some cases, treaties were obtained by such fraudulent means as purposely negotiating with a non-representative group of minor chiefs after being refused by the official delegation, or forging the names of chiefs who refused to cooperate.

In 1812 the Shawnee chief Tecumseh, whose mother was Creek, organized a rebellion against the United States. The Creek nation split over whether to join the uprising; most of the Lower Creeks opposed it, while the Upper Creeks were rather evenly divided in their allegiance. This division resulted in the Red Stick War, a devastating civil war within the tribe. Under the terms of the peace treaty signed in 1814, the tribe relinquished to the United States 22 million acres of land, including the townsites of some of the Upper Creeks who had fought alongside Andrew Jackson's forces against the rebels.

In addition to gradually obtaining ownership of tens of millions of acres of Creek land, federal and state governments placed a succession of restrictions on the Indians. Alabama law, for example, prohibited an Indian from testifying against a white man. According to Grant Foreman in Indian Removal: The Emigration of the Five Civilized Tribes of Indians (1932), a Creek delegation to the United States Secretary of War complained in 1831, "We are made subject to laws we have no means of comprehending; we never know when we are doing right."
The Removal Treaty of 1832 guaranteed the Creeks political autonomy and perpetual ownership of new homelands in Indian Territory in return for their cession of remaining tribal lands in the East. It specified that each Creek could freely choose whether to remain on his homeland or move to the West. Those who decided to stay in the East could select homesteads on former tribal land. Land speculators eager to profit from the anticipated influx of white settlers devised a variety of ways to cheat the Indians out of their land, either by paying far less than its true value or by forging deeds. After an Indian attack on a mail stage—for which a white man was later convicted—a brief civil war pitted Creeks who wanted to remain in the East against those who accepted the concept of relocation. Finally the federal government ordered the forcible removal of all remaining Creeks in 1836.

Emigrants were subjected to horrible conditions during the government-subsidized trips to Indian Territory. One group began their journey in December 1834, barefoot and scantily clothed; 26 percent of them died during the four-month journey. Leaders pushed onward as quickly as they could, not allowing the Indians to conduct funeral services to ensure the dead an afterlife, and sometimes not even allowing the survivors to bury the dead. In July 1836, a party of 1,600 Creeks departed for the West with the warriors handcuffed and chained together for the entire journey.

Upon arrival in Indian Territory, the Five Civilized Tribes faced opposition from plains Indians who would have to share diminished hunting grounds with 60,000 new residents. Although the Creeks were capable of defending themselves against attack, they took the lead in conducting negotiations between the immigrant tribes and the indigenous people to establish peaceful coexistence.

GOVERNMENT AFTER RELOCATION

As they settled into their new homeland, the Creeks discovered that the United States' promises of assistance went largely unfulfilled. Tools and farm implements did not come in time to build homes and plant crops. Weapons and ammunition did not arrive, so the men had to relearn bow and arrow hunting techniques. In order to maximize profits from their government contracts, food suppliers delivered partial shipments and rancid provisions. Especially during the first few years after relocation, annuity payments guaranteed by the treaty were made primarily in goods rather than in cash, and most of the items to be delivered were either useless to the Indians or were lost in shipment.

By the 1850s the Creek people had begun to achieve a relatively prosperous life in their new territory. Then the Southern states seceded from the United States to form the Confederacy. The Creeks tried to remain neutral in the conflict but were drawn into hostilities by attacks on their people. Loyalties were once again divided: the Lower Towns generally favored retention of slavery and sided with the South, while the Upper Towns chose to abide by their treaties with the North. What ensued was another civil war within the Creek nation. In retribution for the failure of the entire tribe to support the Union, the post-war treaty required the cession of 3.2 million acres, about half of the Creek land in Indian Territory.

GOVERNMENT DISSOLUTION AND LAND ALLOTMENT

The Creeks attempted to formalize their government after arriving in the West.
Opothle Yahola, a chief who led Creeks loyal to the United States during the Red Stick War and the Civil War, oversaw an effort to record Creek law in written form. A written constitution providing for elected tribal officers was adopted about 1859; after the Civil War, it was replaced with a new one modeled closely after the U.S. Constitution.

Acting on recommendations of the Dawes Commission, Congress passed the Curtis Act in 1898. As a result, tribal lands were removed from common ownership and distributed among individual Indians for private ownership. In 1906, the U.S. government declared the Creek tribal government dissolved. These federal policies were reversed by the 1934 Wheeler-Howard Act, which encouraged tribal cultural and economic development. Two years later, Congress passed the Oklahoma Indian Welfare Act, providing Indian tribes with a mechanism for incorporating. It also provided benefits such as a student loan program and a revolving fund to be used for extending credit to Indians.

The 37,000 members of the Muscogee Nation are governed by an elected principal chief, a bicameral legislature, and a judicial branch. The 2,000 Poarch Creeks in Alabama are governed by an elected tribal council that selects a tribal chairman from among its nine members.

Individual and Group Contributions

Listed below are some of the Creek people who have made notable contributions to American society as a whole. It is difficult to arrange their names by area of contribution, since some individuals attained prominence in several fields.

Edwin Stanton Moore attended Chilocco Indian School and Oklahoma A & M College, where he played football from 1938 to 1940; he was awarded the Department of the Interior Meritorious Service Medal upon his retirement as Director of Indian Education in 1979.

FILM AND BROADCAST MEDIA

Will Sampson (1934-1987) was an actor who appeared in several motion pictures, including The Outlaw Josey Wales and One Flew Over the Cuckoo's Nest, which won the Academy Award for Best Picture in 1976. Gary Fife is the producer and host of "National Native News," which airs on over 160 public radio stations around the country.

GOVERNMENT AND PUBLIC INVOLVEMENT

Enoch Kelly Haney is an Oklahoma state senator who is nationally recognized for his political involvement and proactive stance on Native American rights; he is also an accomplished artist on canvas and in bronze. Gale Thrower (1943– ) received the Alabama Folk Life Heritage Award for her contributions toward preserving her tribe's traditions and culture.

Alexander (Alex) Lawrence Posey (1873-1908) was a poet and a writer of prose; he was elected to the House of Warriors, the lower chamber of the Creek National Council; at various times he served as superintendent of two boarding schools and the Creek Orphan Asylum, and as superintendent of public instruction for the Creek Nation of Oklahoma; he helped draft the constitution for the proposed State of Sequoyah, a document on which the constitution for the state of Oklahoma was later based. Louis (Littlecorn) Oliver (1904-1990) wrote Chasers of the Sun: Creek Indian Thoughts and two books of poetry, The Horned Snake and Caught in a Willow Net. Joy Harjo (1951– ), winner of the Academy of American Poetry Award, has published several books of poetry, including A Map to the Next World (2000).
Ernest Childers (1918– ) was awarded the Congressional Medal of Honor for "exceptional leadership, initiative, calmness under fire, and conspicuous gallantry" on September 22, 1943, at Oliveto, Italy. John N. Reese was awarded the Congressional Medal of Honor for "his gallant determination in the face of tremendous odds, aggressive fighting spirit, and extreme heroism at the cost of his life" on February 9, 1945, at Manila in the Philippine Islands.

Allie P. Reynolds was a baseball pitcher with the Cleveland Indians from 1942 to 1946 and the New York Yankees from 1947 to 1954; he had the best earned run average (ERA) in the American League in 1952 and 1954, and he led the league in strikeouts and shutouts for two seasons; he was named America's Professional Athlete of the Year in 1951. Jack Jacobs played football for the University of Oklahoma from 1939 to 1942; he also played professional football for 14 years with several teams, including the Cleveland Rams, the Washington Redskins, and the Green Bay Packers.

Acee Blue Eagle (1908-1959) was an acclaimed Creek painter. Fred Beaver (1911-1980) and Solomon McCombs (1913-1980) were painters who served with the U.S. Department of State as goodwill ambassadors, using their art as a means of bridging the communications gap around the world. Jerome Tiger (1941-1967), a painter and sculptor, was also a Golden Gloves boxer. His brother Johnny Tiger, Jr., is a master artist at the Five Civilized Tribes Museum. Joan Hill is a Creek/Cherokee painter who has received numerous recognition awards, grants, and fellowships in the art world. She has done a series of paintings depicting the various treaties of the Five Civilized Tribes, and another portraying the women of the tribes.

Media

Muscogee Nation News. The official publication of the Muscogee Nation. Distributed 12 times annually in English. Circulation is 8,100. Contact: Jim Wolfe. Address: Department of Communications, P.O. Box 580, Okmulgee, Oklahoma 74447. Telephone: (918) 756-8700, extension 327.

Poarch Creek News. A monthly English-language publication of the Creek tribe in Alabama. Contact: Daniel McGee. Address: HCR 69, Box 85-B, Atmore, Alabama 36502. Telephone: (205) 368-9136.

A radio station operated by the Poarch Creek Tribe. Programming is in English and features country music, local news, and community events. Contact: Nathan Martin. Address: 1318 South Main Street, Atmore, Alabama 36502-2899. Telephone: (205) 368-2511.

Organizations and Associations

Muscogee (Creek) Nation. Contact: Principal Chief Bill Fife. Address: Tribal Offices, P.O. Box 580, Okmulgee, Oklahoma 74447. Telephone: (918) 756-8700.

Poarch Creek Indians. Contact: Tribal Chairman Eddie Tullis. Address: Tribal Office, HCR 69, Box 85-B, Atmore, Alabama 36502. Telephone: (205) 368-9136.

Museums and Research Centers

Creek Council House Museum. A museum and library of tribal history. Contact: Debbie Martin. Address: P.O. Box 580, Okmulgee, Oklahoma 74447. Telephone: (918) 756-2324.

Calvin McGee Library. A cultural center and library for the eastern Creeks. Contact: Gale Thrower. Address: HCR 69, Box 85-B, Atmore, Alabama 36502. Telephone: (205) 368-9136.

Five Civilized Tribes Museum. Displays Indian artifacts and artwork, with separate sections devoted to each of the Five Civilized Tribes. Contact: Lynn Thornley. Address: Agency Hill on Honors Heights Drive, Muskogee, Oklahoma 74401. Telephone: (918) 683-1701.

Sources for Additional Study

Braund, Kathryn E. Holland. Deerskins & Duffels: The Creek Indian Trade with Anglo-America, 1685-1815.
Lincoln: University of Nebraska Press, 1993.

Corkran, David H. The Creek Frontier: 1540-1783. Norman: University of Oklahoma Press, 1967.

Green, Donald Edward. The Creek People. Phoenix: Indian Tribal Series, 1973.

Green, Michael D. The Creeks. New York: Chelsea House, 1990.

——. The Politics of Indian Removal. Lincoln: University of Nebraska Press, 1982.

Harjo, Joy. "Family Album." Progressive, March 1992, pp. 22-25.

Henri, Florette. The Southern Indians and Benjamin Hawkins: 1796-1816. Norman: University of Oklahoma Press, 1986.

Indians of the Lower South: Past and Present, edited by John K. Mahon. Pensacola, Florida: Gulf Coast History and Humanities Conference, 1975.

Saunt, Claudio. A New Order of Things: Property, Power, and the Transformation of the Creek Indians, 1733-1816. New York: Cambridge University Press, 1999.

Swanton, John Reed. Early History of the Creek Indians and Their Neighbors. Gainesville: University Press of Florida, 1998.

Winn, William W. The Old Beloved Path: Daily Life Among the Indians of the Chattahoochee River Valley. Columbus, Georgia: Columbus Museum, 1992.

Hall, Loretta. "Creeks." Gale Encyclopedia of Multicultural America. 2000. Encyclopedia.com (accessed August 24, 2016). http://www.encyclopedia.com/doc/1G2-3405800046.html
Astronomers Put Forward New Theory on Size of Black Holes

Astronomers have put forward a new theory about why black holes become so hugely massive, claiming some of them have no "table manners" and tip their "food" directly into their mouths, eating more than one course simultaneously. Researchers from the UK and Australia investigated how some black holes grow so fast that they are billions of times heavier than the sun. The team, from the University of Leicester (UK) and Monash University in Australia, sought to establish how black holes got so big so fast. Their research is due to be published in the Monthly Notices of the Royal Astronomical Society. The research was funded by the UK Science and Technology Facilities Council.

Professor Andrew King from the Department of Physics and Astronomy, University of Leicester, said: "Almost every galaxy has an enormously massive black hole in its centre. Our own galaxy, the Milky Way, has one about four million times heavier than the sun. But some galaxies have black holes a thousand times heavier still. We know they grew very quickly after the Big Bang. These hugely massive black holes were already full-grown when the universe was very young, less than a tenth of its present age."

Black holes grow by sucking in gas. This forms a disc around the hole and spirals in, but usually so slowly that the holes could not have grown to these huge masses in the entire age of the universe. "We needed a faster mechanism," says Chris Nixon, also at Leicester, "so we wondered what would happen if gas came in from different directions."

Nixon, King, and their colleague Daniel Price in Australia made a computer simulation of two gas discs orbiting a black hole at different angles. After a short time the discs spread and collide, and large amounts of gas fall into the hole. According to their calculations, black holes can grow 1,000 times faster when this happens.

"If two guys ride motorbikes on a Wall of Death and they collide, they lose the centrifugal force holding them to the walls and fall," says King. The same thing happens to the gas in these discs, and it falls in towards the hole. This may explain how these black holes got so big so fast. "We don't know exactly how gas flows inside galaxies in the early universe," said King, "but I think it is very promising that if the flows are chaotic it is very easy for the black hole to feed."

The two biggest black holes ever discovered are each about ten billion times more massive than the Sun.

Source: University of Leicester
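The mechanism lends itself to a back-of-the-envelope check. The Python sketch below is only an illustration of the angular-momentum cancellation the team describes, not their actual smoothed-particle simulation, and every number in it (black hole mass, ring radius) is an arbitrary placeholder: when two equal rings of gas inclined at an angle collide and mix, their angular momentum vectors partially cancel, and the mixed gas settles onto a much smaller orbit.

```python
import numpy as np

# Toy estimate of angular-momentum cancellation between two misaligned
# gas rings around a black hole. Illustration only, not the researchers'
# SPH simulation; all numbers are placeholders.

G = 6.674e-11           # gravitational constant (SI units)
M = 4.0e6 * 1.989e30    # black hole mass: ~4 million suns, like the Milky Way's
r = 1.0e15              # shared orbital radius of the two gas rings, metres

v_circ = np.sqrt(G * M / r)   # circular orbital speed at radius r
L = r * v_circ                # specific angular momentum of each ring

def mixed_radius(theta_deg):
    """Radius where gas settles after two equal rings, inclined by theta, mix.

    Each ring carries angular momentum of magnitude L; mixing keeps only
    the vector average, |L_mix| = L * cos(theta / 2). The gas circularises
    where that angular momentum matches a circular orbit,
    r' = L_mix**2 / (G * M), so nearly opposing rings fall almost straight in.
    """
    theta = np.radians(theta_deg)
    L_mix = L * np.cos(theta / 2.0)
    return L_mix**2 / (G * M)

for theta in (0, 60, 120, 170):
    print(f"rings inclined {theta:3d} deg -> mixed gas settles at "
          f"{mixed_radius(theta) / r:.3f} of its original radius")
```

For rings tipped 170 degrees apart, the mixed gas keeps under a tenth of its original angular momentum and sinks to less than one percent of its original radius; in the limit of exactly opposing flows it falls straight in, just like the colliding riders on the Wall of Death.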
To most amateur stargazers, Saturn, with its majestic belt of rings, is the most beautiful planet in our solar system. But to astronomers like Paul Dalba, Saturn is much more than good looks. Because it is a cold giant planet, a large planet more distant from the sun than Earth is, Saturn can offer insights into the study of exoplanets, those planets that orbit stars beyond our solar system. Scientists' big hope for exoplanetary research is that it will lead to the discovery of an Earth-like planet that can support life. "Although we have not yet discovered an exoplanet with properties exactly matching Saturn's," says Dalba (GRS'18), a PhD astronomy student at Boston University's Graduate School of Arts & Sciences, "a closer examination of Saturn holds important consequences for future studies of giant, cold exoplanets."

Astronomers have many ways to study these distant worlds. One of them, known as transmission spectroscopy, attempts to analyze their atmospheres by examining starlight that has passed through them. The differences between that light and light that has not passed through an exoplanet's atmosphere can tell researchers about the density and the chemical makeup of the planet's atmosphere. By studying Saturn as if it were an exoplanet, Dalba and his team hope to learn whether transmission spectroscopy could be applied in the study of cold-atmosphere exoplanets.

Working with data collected by NASA's Cassini spacecraft, which has been orbiting Saturn and its many moons since 2004, Dalba and his team used ray tracing—a complex process of calculating the properties and direction of a ray of light as it travels from one point to another—to develop a model of Saturn's atmosphere. Ray tracing works because in empty space light travels in a straight line, but in an atmosphere its path is curved by atmospheric refraction. To describe this deviation from a straight line, researchers must solve differential equations for each of the thousands of rays of light passing through Saturn's atmosphere.

Dalba set out to do that by running the algorithms on his MacBook Pro. "It wasn't practical," he says. "It was clear that it was going to take about a month." Dalba's advisor, Philip Muirhead, a BU College of Arts & Sciences assistant professor of astronomy, suggested that the researcher try running his algorithms on the Shared Computing Cluster (SCC) at the Massachusetts Green High Performance Computing Center in Holyoke, Mass., a joint venture among BU, Harvard, MIT, Northeastern University, and the University of Massachusetts. The SCC is operated remotely by the Boston University Research Computing Services group. The cluster, according to Wayne Gilmore, director of research computing services at BU's Information Services & Technology, currently supports nearly 500 projects headed by 300 investigators from 66 departments. It is available to all BU researchers and is now used by about 2,000 of them, including graduate students.

Dalba filled out what he says is a fairly simple application for use of the supercomputer, and he has since logged about 20,000 computer hours on the cluster. Instead of tracing one ray at a time, he found that he could use hundreds of computers to get his results hundreds of times faster. "Things that would have taken a month can now be done overnight," he says.
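As a concrete illustration of why the problem parallelizes so well, here is a minimal sketch of refractive ray tracing, assuming an idealized exponential atmosphere with rough, Saturn-like placeholder numbers; it is not Dalba's code or the published Cassini analysis. Each ray's bending is an independent integration, so the rays can be farmed out to as many workers as are available, with Python's multiprocessing pool standing in for the cluster.

```python
import numpy as np
from multiprocessing import Pool

# Toy refractive ray tracer: a sketch with assumed, Saturn-like numbers,
# not the published Cassini analysis. Each ray skims the planet at a given
# impact parameter and is deflected by the local refractivity gradient.

R = 6.0e7     # planetary radius in metres (roughly Saturn-sized; assumed)
H = 6.0e4     # atmospheric scale height in metres (assumed)
NU0 = 1.0e-4  # refractivity n - 1 at the reference radius R (assumed)

def refractivity(r):
    """Refractivity nu = n - 1 at distance r from the planet's centre."""
    return NU0 * np.exp(-(r - R) / H)

def bending_angle(b):
    """Total bending (radians) of one ray with impact parameter b.

    The ray travels nearly straight along x at height y = b; for weak
    refraction the deflection is the line integral of the transverse
    refractivity gradient along the path.
    """
    step = 1.0e3                             # integration step, metres
    xs = np.arange(-1.0e7, 1.0e7, step)      # path samples along the ray
    r = np.hypot(xs, b)                      # distance from planet centre
    dnu_dr = -refractivity(r) / H            # gradient of the exponential profile
    return -np.sum(dnu_dr * (b / r)) * step  # deflection toward the planet

if __name__ == "__main__":
    # One ray per atmospheric height, from grazing (b = R) to 10 scale heights up.
    impact_params = R + np.linspace(0.0, 10.0, 2001) * H
    with Pool() as pool:   # rays are independent, so the work is trivially parallel
        alphas = pool.map(bending_angle, impact_params)
    print(f"grazing ray bent by {alphas[0]:.2e} rad; "
          f"10 scale heights up, only {alphas[-1]:.2e} rad")
```

The real analysis replaces the toy profile with Cassini-derived atmospheric data and solves the full ray equations, but the computational shape is the same: thousands of independent differential-equation integrations, each cheap on its own, that finish hundreds of times faster when spread across many cores.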
The cluster enabled precise measurements of the degree to which light is bent as it passes through Saturn's atmosphere, which indicates the atmosphere's density and the amount of energy the light loses along the way. The findings of Dalba and his team were published in the December 1, 2015, issue of The Astrophysical Journal.

What did Dalba learn about Saturn's atmosphere? It's mainly methane, followed by acetylene, ethane, and other hydrocarbons. More important, he was able to answer the question of whether transmission spectroscopy could be applied in the study of cold-atmosphere exoplanets. The answer is yes.
Islamic Golden Age

The Islamic Golden Age was a period of cultural, economic, and scientific flourishing in the history of Islam, traditionally dated from the 8th century to the 14th century. This period is traditionally understood to have begun during the reign of the Abbasid caliph Harun al-Rashid (786 to 809) with the inauguration of the House of Wisdom in Baghdad, where scholars from various parts of the world with different cultural backgrounds were mandated to gather and translate all of the world's classical knowledge into the Arabic language. This period is traditionally said to have ended with the collapse of the Abbasid caliphate due to the Mongol invasions and the Siege of Baghdad in 1258 AD. A few contemporary scholars place the end of the Islamic Golden Age as late as the end of the 15th or the 16th century.

History of the concept

The metaphor of a golden age began to be applied in 19th-century literature about Islamic history, in the context of the western aesthetic fashion known as Orientalism. The author of a Handbook for Travelers in Syria and Palestine in 1868 observed that the most beautiful mosques of Damascus were "like Mohammedanism itself, now rapidly decaying" and relics of "the golden age of Islam."

There is no unambiguous definition of the term, and depending on whether it is used with a focus on cultural or on military achievement, it may be taken to refer to rather disparate time spans. Thus, one 19th-century author would have it extend to the duration of the caliphate, or to "six and a half centuries," while another would have it end after only a few decades of Rashidun conquests, with the death of Umar and the First Fitna. During the early 20th century, the term was used only occasionally, and it often referred to the early military successes of the Rashidun caliphs. It was only in the second half of the 20th century that the term came to be used with any frequency, now mostly referring to the cultural flourishing of science and mathematics under the caliphates during the 9th to 11th centuries (between the establishment of organised scholarship in the House of Wisdom and the beginning of the crusades), but often extended to include part of the late 8th or the 12th to early 13th centuries. Definitions may still vary considerably. Equating the end of the golden age with the end of the caliphates is a convenient cut-off point based on a historical landmark, but it can be argued that Islamic culture had entered a gradual decline much earlier; thus, Khan (2003) identifies the proper golden age as being the two centuries between 750 and 950, arguing that the loss of territories that began under Harun al-Rashid worsened after the death of al-Ma'mun in 833, and that the crusades in the 12th century resulted in a weakening of the Islamic empire from which it never recovered.

The various Quranic injunctions and hadith, which place value on education and emphasize the importance of acquiring knowledge, played a vital role in influencing the Muslims of this age in their search for knowledge and the development of the body of science. The Islamic Empire heavily patronized scholars. The money spent on the Translation Movement for some translations is estimated to be equivalent to about twice the annual research budget of the United Kingdom's Medical Research Council. The best scholars and notable translators, such as Hunayn ibn Ishaq, had salaries estimated to be the equivalent of those of professional athletes today.
The House of Wisdom was a library established in Abbasid-era Baghdad, Iraq, by the caliph al-Mansur. During this period, the Muslims showed a strong interest in assimilating the scientific knowledge of the civilizations that had been conquered. Many classic works of antiquity that might otherwise have been lost were translated from the Greek, Persian, Indian, Chinese, Egyptian, and Phoenician civilizations into Arabic and Persian, and later in turn translated into Turkish, Hebrew, and Latin.

Christians, especially the adherents of the Church of the East (Nestorians), contributed to Islamic civilization during the reigns of the Umayyads and the Abbasids by translating works of Greek philosophers and ancient science into Syriac and afterwards into Arabic. They also excelled in many fields, in particular philosophy, science (such as Hunayn ibn Ishaq, Thabit ibn Qurra, Yusuf al-Khuri, al-Himsi, Qusta ibn Luqa, Masawaiyh, Patriarch Eutychius, and Jabril ibn Bukhtishu), and theology. For a long period of time the personal physicians of the Abbasid caliphs were often Assyrian Christians. Among the most prominent Christian families to serve as physicians to the caliphs was the Bukhtishu dynasty.

Throughout the 4th to 7th centuries, Christian scholarly work in the Greek and Syriac languages was either newly translated or had been preserved since the Hellenistic period. Among the prominent centers of learning and transmission of classical wisdom were Christian colleges such as the School of Nisibis and the School of Edessa, the pagan University of Harran, and the renowned hospital and medical academy of Jundishapur, which was the intellectual, theological, and scientific center of the Church of the East. The House of Wisdom was founded in Baghdad in 825, modelled after the Academy of Gondishapur. It was led by the Christian physician Hunayn ibn Ishaq and drew on the tradition of Byzantine medicine. Many of the most important philosophical and scientific works of the ancient world were translated, including the work of Galen, Hippocrates, Plato, Aristotle, Ptolemy, and Archimedes. Many scholars of the House of Wisdom were of Christian background.

Among the various countries and cultures conquered through successive Islamic conquests, a remarkable number of scientists originated from Persia and contributed immensely to the scientific flourishing of the Islamic Golden Age. According to Bernard Lewis: "Culturally, politically, and most remarkable of all even religiously, the Persian contribution to this new Islamic civilization is of immense importance. The work of Iranians can be seen in every field of cultural endeavor, including Arabic poetry, to which poets of Iranian origin composing their poems in Arabic made a very significant contribution." Science, medicine, philosophy, and technology in the newly Islamized Iranian society were influenced by and based on the scientific model of the major pre-Islamic Iranian universities in the Sassanian Empire. During this period hundreds of scholars and scientists vastly contributed to technology, science, and medicine, later influencing the rise of European science during the Renaissance. As Ibn Khaldun observed in the Muqaddimah, most of the ḥadîth scholars who preserved traditions for the Muslims also were Persians, or Persian in language and upbringing, because the discipline was widely cultivated in the 'Irâq and the regions beyond. Furthermore, all the scholars who worked in the science of the principles of jurisprudence were Persians. The same applies to speculative theologians and to most Qur'ân commentators.
Only the Persians engaged in the task of preserving knowledge and writing systematic scholarly works. Thus, the truth of the following statement by the Prophet becomes apparent: 'If scholarship hung suspended in the highest parts of heaven, the Persians would attain it.'

With a new and easier writing system, and the introduction of paper, information was democratized to the extent that, for probably the first time in history, it became possible to make a living merely from writing and selling books. The use of paper spread from China into Muslim regions in the eighth century, arriving in Al-Andalus on the Iberian peninsula (modern Spain and Portugal) in the 10th century. It was easier to manufacture than parchment, less likely to crack than papyrus, and could absorb ink, making it difficult to erase and ideal for keeping records. Islamic paper makers devised assembly-line methods of hand-copying manuscripts to turn out editions far larger than any available in Europe for centuries. It was from these lands that the rest of the world learned to make paper from linen.

The centrality of scripture and its study in the Islamic tradition helped to make education a central pillar of the religion in virtually all times and places in the history of Islam. The importance of learning in the Islamic tradition is reflected in a number of hadiths attributed to Muhammad, including one that instructs the faithful to "seek knowledge, even in China". This injunction was seen to apply particularly to scholars, but also to some extent to the wider Muslim public, as exemplified by the dictum of Al-Zarnuji, "learning is prescribed for us all". While it is impossible to calculate literacy rates in pre-modern Islamic societies, it is almost certain that they were relatively high, at least in comparison to their European counterparts.

Education would begin at a young age with study of Arabic and the Quran, either at home or in a primary school, which was often attached to a mosque. Some students would then proceed to training in tafsir (Quranic exegesis) and fiqh (Islamic jurisprudence), which was seen as particularly important. Education focused on memorization, but also trained the more advanced students to participate as readers and writers in the tradition of commentary on the studied texts. It also involved a process of socialization of aspiring scholars, who came from virtually all social backgrounds, into the ranks of the ulema.

For the first few centuries of Islam, educational settings were entirely informal, but beginning in the 11th and 12th centuries, the ruling elites began to establish institutions of higher religious learning known as madrasas in an effort to secure the support and cooperation of the ulema. Madrasas soon multiplied throughout the Islamic world, which helped to spread Islamic learning beyond urban centers and to unite diverse Islamic communities in a shared cultural project. Nonetheless, instruction remained focused on individual relationships between students and their teacher. The formal attestation of educational attainment, the ijaza, was granted by a particular scholar rather than the institution, and it placed its holder within a genealogy of scholars, which was the only recognized hierarchy in the educational system. While formal studies in madrasas were open only to men, women of prominent urban families were commonly educated in private settings, and many of them received and later issued ijazas in hadith studies, calligraphy, and poetry recitation.
Working women learned religious texts and practical skills primarily from each other, though they also received some instruction together with men in mosques and private homes.

Madrasas were devoted principally to the study of law, but they also offered other subjects such as theology, medicine, and mathematics. The madrasa complex usually consisted of a mosque, a boarding house, and a library. It was maintained by a waqf (charitable endowment), which paid the salaries of professors and the stipends of students and defrayed the costs of construction and maintenance. The madrasa was unlike a modern college in that it lacked a standardized curriculum or an institutionalized system of certification.

Muslims distinguished disciplines inherited from pre-Islamic civilizations, such as philosophy and medicine, which they called "sciences of the ancients" or "rational sciences", from Islamic religious sciences. Sciences of the former type flourished for several centuries, and their transmission formed part of the educational framework in classical and medieval Islam. In some cases, they were supported by institutions such as the House of Wisdom in Baghdad, but more often they were transmitted informally from teacher to student.

The University of Al Karaouine, founded in 859 AD, is listed in The Guinness Book of Records as the world's oldest degree-granting university. The Al-Azhar University was another early university (madrasa). The madrasa is one of the relics of the Fatimid caliphate. The Fatimids traced their descent to Muhammad's daughter Fatimah and named the institution using a variant of her honorific title Al-Zahra (the brilliant). Organized instruction in the Al-Azhar Mosque began in 978.

Juristic thought gradually developed in study circles, where independent scholars met to learn from a local master and discuss religious topics. At first, these circles were fluid in their membership, but with time distinct regional legal schools crystallized around shared sets of methodological principles. As the boundaries of the schools became clearly delineated, the authority of their doctrinal tenets came to be vested in a master jurist from earlier times, who was henceforth identified as the school's founder.

In the course of the first three centuries of Islam, all legal schools came to accept the broad outlines of classical legal theory, according to which Islamic law had to be firmly rooted in the Quran and hadith. The classical theory of Islamic jurisprudence elaborates how scriptures should be interpreted from the standpoint of linguistics and rhetoric. It also comprises methods for establishing the authenticity of hadith and for determining when the legal force of a scriptural passage is abrogated by a passage revealed at a later date. In addition to the Quran and sunnah, the classical theory of Sunni fiqh recognizes two other sources of law: juristic consensus (ijmaʿ) and analogical reasoning (qiyas). It therefore studies the application and limits of analogy, as well as the value and limits of consensus, along with other methodological principles, some of which are accepted by only certain legal schools. This interpretive apparatus is brought together under the rubric of ijtihad, which refers to a jurist's exertion in an attempt to arrive at a ruling on a particular question.
The theory of Twelver Shia jurisprudence parallels that of the Sunni schools with some differences, such as recognition of reason (ʿaql) as a source of law in place of qiyas and extension of the notion of sunnah to include traditions of the imams. The body of substantive Islamic law was created by independent jurists (muftis). Their legal opinions (fatwas) were taken into account by ruler-appointed judges who presided over qāḍī's courts, and by maẓālim courts, which were controlled by the ruler's council and administered criminal law.

Classical Islamic theology emerged from an early doctrinal controversy which pitted the ahl al-hadith movement, led by Ahmad ibn Hanbal, who considered the Quran and authentic hadith to be the only acceptable authority in matters of faith, against Mu'tazilites and other theological currents, who developed theological doctrines using rationalistic methods. In 833 the caliph al-Ma'mun tried to impose Mu'tazilite theology on all religious scholars and instituted an inquisition (mihna), but the attempts to impose a caliphal writ in matters of religious orthodoxy ultimately failed. This controversy persisted until al-Ash'ari (874–936) found a middle ground between Mu'tazilite rationalism and Hanbalite literalism, using the rationalistic methods championed by the Mu'tazilites to defend most substantive tenets maintained by the ahl al-hadith. A rival compromise between rationalism and literalism emerged from the work of al-Maturidi (d. c. 944), and, although a minority of scholars remained faithful to the early ahl al-hadith creed, Ash'ari and Maturidi theology came to dominate Sunni Islam from the 10th century on.

Ibn Sina (Avicenna) and Ibn Rushd (Averroes) played a major role in interpreting the works of Aristotle, whose ideas came to dominate the non-religious thought of the Christian and Muslim worlds. According to the Stanford Encyclopedia of Philosophy, the translation of philosophical texts from Arabic to Latin in Western Europe "led to the transformation of almost all philosophical disciplines in the medieval Latin world". The influence of Islamic philosophers in Europe was particularly strong in natural philosophy, psychology, and metaphysics, though it also extended to the study of logic and ethics. Avicenna advanced his "floating man" thought experiment concerning self-awareness, in which a man deprived of sense experience by being blindfolded and free-falling would still be aware of his existence. In epistemology, Ibn Tufail wrote the novel Hayy ibn Yaqdhan, and in response Ibn al-Nafis wrote the novel Theologus Autodidactus. Both concern autodidacticism as illuminated through the life of a feral child spontaneously generated in a cave on a desert island.

The Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī played a significant role in the development of algebra, arithmetic, and Hindu-Arabic numerals. He has been described as the father or founder of algebra. Another Persian mathematician, Omar Khayyam, is credited with identifying the foundations of algebraic geometry and with finding the general geometric solution of the cubic equation. His book Treatise on Demonstrations of Problems of Algebra (1070), which laid down the principles of algebra, is part of the body of Persian mathematics that was eventually transmitted to Europe. Yet another Persian mathematician, Sharaf al-Dīn al-Tūsī, found algebraic and numerical solutions to various cases of cubic equations. He also developed the concept of a function.
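To make the algebraic method concrete, here is the worked problem traditionally associated with al-Khwārizmī's treatise on al-jabr, "a square and ten roots equal thirty-nine" (the symbolic notation below is a modern reconstruction; al-Khwārizmī stated the problem and its solution entirely in words, backed by a geometric completing-the-square argument):

$$x^2 + 10x = 39 \;\Longrightarrow\; x^2 + 10x + 25 = 64 \;\Longrightarrow\; (x+5)^2 = 64 \;\Longrightarrow\; x = 3.$$

Only the positive root was recognized, since negative quantities had no place in the geometric reading of the problem.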
Islamic art makes use of geometric patterns and symmetries in many of its art forms, notably in girih tilings. These are formed using a set of five tile shapes, namely a regular decagon, an elongated hexagon, a bow tie, a rhombus, and a regular pentagon. All the sides of these tiles have the same length, and all their angles are multiples of 36° (π/5 radians), offering fivefold and tenfold symmetries. The tiles are decorated with strapwork lines (girih), generally more visible than the tile boundaries. In 2007, the physicists Peter Lu and Paul Steinhardt argued that girih from the 15th century resembled quasicrystalline Penrose tilings. Elaborate geometric zellige tilework is a distinctive element in Moroccan architecture. Muqarnas vaults are three-dimensional but were designed in two dimensions with drawings of geometrical cells.

Ibn Muʿādh al-Jayyānī is one of several Islamic mathematicians to whom the law of sines is attributed; he wrote his The Book of Unknown Arcs of a Sphere in the 11th century. This formula relates the lengths of the sides of any triangle, rather than only right triangles, to the sines of its angles: where a, b, and c are the lengths of the sides of a triangle and A, B, and C are the opposite angles, $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$.

Alhazen discovered the sum formula for the fourth power, using a method that could be generally used to determine the sum for any integral power. He used this to find the volume of a paraboloid. He could find the integral formula for any polynomial without having developed a general formula.

Ibn al-Haytham (Alhazen) was a significant figure in the history of scientific method, particularly in his approach to experimentation, and has been described as the "world's first true scientist". Avicenna made rules for testing the effectiveness of drugs, including that the effect produced by the experimental drug should be seen constantly, or after many repetitions, to be counted. The physician Rhazes was an early proponent of experimental medicine and recommended using controls for clinical research. He said: "If you want to study the effect of bloodletting on a condition, divide the patients into two groups, perform bloodletting only on one group, watch both, and compare the results."

In about 964 AD, the Persian astronomer Abd al-Rahman al-Sufi, writing in his Book of Fixed Stars, described a "nebulous spot" in the Andromeda constellation, the first definitive reference to what we now know is the Andromeda Galaxy, the nearest spiral galaxy to our own. Nasir al-Din al-Tusi invented a geometrical technique called the Tusi couple, which generates linear motion from the sum of two circular motions, to replace Ptolemy's problematic equant. The Tusi couple was later employed in Ibn al-Shatir's geocentric model and in Nicolaus Copernicus' heliocentric model, although it is not known who the intermediary was or whether Copernicus rediscovered the technique independently.

Alhazen played a role in the development of optics. One of the prevailing theories of vision in his time and place was the emission theory supported by Euclid and Ptolemy, where sight worked by the eye emitting rays of light, and the other was the Aristotelean theory that sight worked when the essence of objects flows into the eyes. Alhazen correctly argued that vision occurred when light, traveling in straight lines, reflects off an object into the eyes. Al-Biruni wrote of his insights into light, stating that its velocity must be immense when compared with the speed of sound.
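Two of the results above can be stated compactly in modern notation (a reconstruction in today's symbols, not the form the medieval authors used). Alhazen's sum formula for fourth powers is

$$\sum_{k=1}^{n} k^4 = \frac{n(n+1)(2n+1)(3n^2+3n-1)}{30},$$

and the Tusi couple's conversion of circular into linear motion can be seen by tracing a marked point on a circle of radius r rolling inside a circle of radius 2r:

$$x(\theta) = r\cos\theta + r\cos\theta = 2r\cos\theta, \qquad y(\theta) = r\sin\theta - r\sin\theta = 0,$$

so as θ advances, the point simply oscillates back and forth along a straight diameter.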
In the cardiovascular system, Ibn al-Nafis, in his Commentary on Anatomy in Avicenna's Canon, was the first known scholar to contradict the contention of the Galen school that blood could pass between the ventricles of the heart through the inter-ventricular septum that separates them, saying that there is no passage between the ventricles at this point. Instead, he correctly argued that all the blood that reached the left ventricle did so after passing through the lung. He also stated that there must be small communications, or pores, between the pulmonary artery and pulmonary vein, a prediction that preceded the discovery of the pulmonary capillaries of Marcello Malpighi by 400 years. The Commentary was rediscovered in the twentieth century in the Prussian State Library in Berlin; whether its view of the pulmonary circulation influenced scientists such as Michael Servetus is unclear.

In the nervous system, Rhazes stated that nerves had motor or sensory functions, describing 7 cranial and 31 spinal cord nerves. He assigned a numerical order to the cranial nerves from the optic to the hypoglossal nerves. He classified the spinal nerves into 8 cervical, 12 thoracic, 5 lumbar, 3 sacral, and 3 coccygeal nerves. He used this to link clinical signs of injury to the corresponding location of lesions in the nervous system.

Modern commentators have likened medieval accounts of the "struggle for existence" in the animal kingdom to the framework of the theory of evolution. Thus, in his survey of the history of the ideas which led to the theory of natural selection, Conway Zirkle noted that al-Jahiz was one of those who discussed a "struggle for existence", in his Kitāb al-Hayawān (Book of Animals), written in the 9th century. In the 13th century, Nasir al-Din al-Tusi believed that humans were derived from advanced animals, saying, "Such humans [probably anthropoid apes] live in the Western Sudan and other distant corners of the world. They are close to animals by their habits, deeds and behavior." In 1377, Ibn Khaldun in his Muqaddimah stated, "The animal kingdom was developed, its species multiplied, and in the gradual process of Creation, it ended in man and arising from the world of the monkeys."

The Banū Mūsā brothers, in their Book of Ingenious Devices, describe an automatic flute player which may have been the first programmable machine. The flute sounds were produced through hot steam, and the user could adjust the device to various patterns to obtain various sounds from it.

The earliest known Islamic hospital was built in 805 in Baghdad by order of Harun al-Rashid, and the most important of Baghdad's hospitals was established in 982 by the Buyid ruler 'Adud al-Dawla. The best-documented early Islamic hospitals are the great Syro-Egyptian establishments of the 12th and 13th centuries. By the tenth century, Baghdad had five more hospitals, while Damascus had six hospitals by the 15th century and Córdoba alone had 50 major hospitals, many exclusively for the military. The typical hospital was divided into departments such as systemic diseases, surgery, and orthopedics, with larger hospitals having more diverse specialties. "Systemic diseases" was the rough equivalent of today's internal medicine and was further divided into sections such as fever, infections, and digestive issues. Every department had an officer-in-charge, a presiding officer, and a supervising specialist.
The hospitals also had lecture theaters and libraries. Hospital staffs included sanitary inspectors, who regulated cleanliness, as well as accountants and other administrative staff. Hospitals were typically run by a three-man board comprising a non-medical administrator, the chief pharmacist, called the shaykh saydalani, and the chief physician, who served as mutwalli (dean); the chief pharmacist was equal in rank to the chief physician. Medical facilities traditionally closed each night, but by the 10th century laws were passed to keep hospitals open 24 hours a day.

For less serious cases, physicians staffed outpatient clinics. Cities also had first aid centers, staffed by physicians, for emergencies; these were often located in busy public places, such as big gatherings for Friday prayers. The region also had mobile units staffed by doctors and pharmacists who were supposed to meet the needs of remote communities. Baghdad was also known to have had a separate hospital for convicts since the early 10th century, after the vizier 'Ali ibn Isa ibn Jarah ibn Thabit wrote to Baghdad's chief medical officer that "prisons must have their own doctors who should examine them every day". The first hospital built in Egypt, in Cairo's southwestern quarter, was the first documented facility to care for mental illnesses. In Aleppo's Arghun Hospital, care for mental illness included abundant light, fresh air, running water, and music.

Medical students would accompany physicians and participate in patient care. Hospitals in this era were the first to require medical diplomas to license doctors. The licensing test was administered by the region's government-appointed chief medical officer. The test had two steps: the first was to write a treatise on the subject in which the candidate wished to obtain a certificate, either of original research or of commentary on existing texts, which candidates were encouraged to scrutinize for errors. The second step was to answer questions in an interview with the chief medical officer. Physicians worked fixed hours, and medical staff salaries were fixed by law. To regulate the quality of care and arbitrate cases, it is related that if a patient died, the family would present the doctor's prescriptions to the chief physician, who would judge whether the death was natural or caused by negligence, in which case the family would be entitled to compensation from the doctor. The hospitals had male and female quarters, while some hospitals saw only men and other hospitals, staffed by women physicians, saw only women. While women physicians practiced medicine, many largely focused on obstetrics.

Hospitals were forbidden by law to turn away patients who were unable to pay. Eventually, charitable foundations called waqfs were formed to support hospitals, as well as schools. Part of the state budget also went toward maintaining hospitals. While the services of the hospital were free for all citizens and patients were sometimes given a small stipend to support recovery upon discharge, individual physicians occasionally charged fees. In a notable endowment, a 13th-century governor of Egypt, Al-Mansur Qalawun, ordained a foundation for the Qalawun hospital that would contain a mosque and a chapel, separate wards for different diseases, a library for doctors, and a pharmacy; the hospital is used today for ophthalmology. The Qalawun hospital was based in a former Fatimid palace which had accommodation for 8,000 people; "it served 4,000 patients daily." The waqf stated, "...The hospital shall keep all patients, men and women, until they are completely recovered.
All costs are to be borne by the hospital whether the people come from afar or near, whether they are residents or foreigners, strong or weak, low or high, rich or poor, employed or unemployed, blind or sighted, physically or mentally ill, learned or illiterate. There are no conditions of consideration and payment, none is objected to or even indirectly hinted at for non-payment."

By the ninth century, there was a rapid expansion of private pharmacies in many Muslim cities. Initially, these were unregulated and managed by personnel of inconsistent quality. Decrees by the caliphs Al-Ma'mun and Al-Mu'tasim required examinations to license pharmacists, and pharmacy students were trained in a combination of classroom exercises and day-to-day practical experience with drugs. To avoid conflicts of interest, doctors were banned from owning or sharing ownership in a pharmacy. Pharmacies were periodically inspected by government inspectors called muhtasib, who checked to see that the medicines were mixed properly, not diluted, and kept in clean jars. Violators were fined or beaten.

The theory of humorism was largely dominant during this time. The Arab physician Ibn Zuhr provided proof that scabies is caused by the itch mite and that it can be cured by removing the parasite without the need for purging, bleeding, or the other treatments called for by humorism, making a break with the humorism of Galen and Ibn Sina. Through careful observation, Rhazes differentiated between smallpox and measles, which had previously been lumped together as a single disease that caused rashes. He distinguished them by the location and timing of the symptoms, and he also scaled the degree of severity and prognosis of infections according to the color and location of rashes. Al-Zahrawi was the first physician to describe an ectopic pregnancy, and the first physician to identify the hereditary nature of haemophilia. On hygienic practices, Rhazes, who was once asked to choose the site for a new hospital in Baghdad, suspended pieces of meat at various points around the city and recommended building the hospital at the location where the meat putrefied most slowly.

For Islamic scholars, Indian and Greek physicians and medical researchers such as Sushruta, Galen, Mankah, Atreya, Hippocrates, Charaka, and Agnivesa were pre-eminent authorities. In order to make the Indian and Greek tradition more accessible, understandable, and teachable, Islamic scholars ordered and systematized the vast Indian and Greco-Roman medical knowledge by writing encyclopedias and summaries. Sometimes past scholars were criticized, as when Rhazes criticized and refuted Galen's revered theories, most notably the theory of humors, and was thus accused of ignorance. It was through 12th-century Arabic translations that medieval Europe rediscovered Hellenic medicine, including the works of Galen and Hippocrates, and discovered ancient Indian medicine, including the works of Sushruta and Charaka. Works such as Avicenna's The Canon of Medicine were translated into Latin and disseminated throughout Europe. During the 15th and 16th centuries alone, The Canon of Medicine was published more than thirty-five times; it was used as a standard medical textbook in Europe through the 18th century.

Al-Zahrawi was a tenth-century Arab physician. He is sometimes referred to as the "Father of surgery".
He describes what is thought to be the first attempt at reduction mammaplasty for the management of gynaecomastia and the first mastectomy to treat breast cancer. He is also credited with performing the first thyroidectomy.

Commerce and travel

Apart from the Nile, Tigris, and Euphrates, navigable rivers were uncommon in the Middle East, so transport by sea was very important. Navigational sciences were highly developed, making use of a rudimentary sextant (known as a kamal). When combined with detailed maps of the period, this allowed sailors to sail across oceans rather than skirt along the coast. Muslim sailors were also responsible for reintroducing large, three-masted merchant vessels to the Mediterranean. The name caravel may derive from an earlier Arab boat known as the qārib. Many Muslims went to China to trade, and these Muslims came to have a great economic influence on the country. Muslims virtually dominated the import/export industry by the time of the Sung dynasty (960–1279).

Arts and culture

Literature and poetry

Manuscript illumination was an important art, and Persian miniature painting flourished in the Persianate world. Calligraphy, an essential aspect of written Arabic, developed in manuscripts and architectural decoration.

The Great Mosque of Kairouan (in Tunisia), the ancestor of all the mosques in the western Islamic world excluding Turkey and the Balkans, is one of the best-preserved and most significant examples of early great mosques. Founded in 670, it dates in its present form largely from the 9th century. The Great Mosque of Kairouan consists of a three-tiered square minaret, a large courtyard surrounded by colonnaded porticos, and a huge hypostyle prayer hall covered on its axis by two cupolas. The beginning of construction of the Great Mosque at Cordoba in 785 marked the beginning of Islamic architecture in Spain and Northern Africa. The mosque is noted for its striking interior arches. Moorish architecture reached its peak with the construction of the Alhambra, the magnificent palace/fortress of Granada, with its open and breezy interior spaces adorned in red, blue, and gold. The walls are decorated with stylized foliage motifs and Arabic inscriptions, with arabesque design work and geometrically patterned glazed tiles.

In 1206, Genghis Khan established a powerful dynasty among the Mongols of central Asia. During the 13th century, this Mongol Empire conquered most of the Eurasian land mass, including China in the east and much of the old Islamic caliphate (as well as Kievan Rus') in the west. The destruction of Baghdad and the House of Wisdom by Hulagu Khan in 1258 has been seen by some as the end of the Islamic Golden Age. The Ottoman conquest of the Arabic-speaking Middle East in 1516–17 placed the traditional heart of the Islamic world under Ottoman Turkish control. The rational sciences continued to flourish in the Middle East during the Ottoman period.

To account for the decline of Islamic science, it has been argued that the Sunni Revival in the 11th and 12th centuries produced a series of institutional changes that decreased the relative payoff of producing scientific works. With the spread of madrasas and the greater influence of religious leaders, it became more lucrative to produce religious knowledge. This is easily refutable, as the scholars of the golden age were experts in both religious and secular fields, with many of the Islamic schools of thought having been established during the golden age itself.
Ahmad Y. al-Hassan has rejected the thesis that a lack of creative thinking was a cause, arguing that science was always kept separate from religious argument; he instead analyzes the decline in terms of economic and political factors, drawing on the work of the 14th-century writer Ibn Khaldun. Al-Hassan extended the golden age up to the 16th century, noting that scientific activity continued to flourish up until then. Several other contemporary scholars have likewise extended it to around the 16th to 17th centuries and analysed the decline in terms of political and economic factors. More recent research has challenged the notion that Islamic science underwent decline even at that time, citing a revival of works produced on rational scientific topics during the seventeenth century. Economic historian Joel Mokyr has argued that the Islamic philosopher al-Ghazali (1058–1111) "was a key figure in the decline in Islamic science", as his works contributed to rising mysticism and occasionalism in the Islamic world. Against this view, Saliba (2007) has given a number of examples, especially of astronomical research, flourishing after the time of al-Ghazali.

See also

- Baghdad School of art
- Christian influences in Islam
- Dutch Golden Age
- Emirate of Sicily
- Golden age of Jewish culture in Spain
- Ibn Sina Academy of Medieval Medicine and Sciences
- Islamic astronomy
- Islamic studies
- List of Iranian scientists
- Ophthalmology in medieval Islam
- Timeline of Islamic science and technology

Notes

- "...regarded by some Westerners as the true father of historiography and sociology".
- "Ibn Khaldun has been claimed the forerunner of a great number of European thinkers, mostly sociologists, historians, and philosophers". (Boulakia 1971)
- "The founding father of Eastern Sociology".
- "This grand scheme to find a new science of society makes him the forerunner of many of the eighteenth and nineteenth centuries system-builders such as Vico, Comte and Marx." "As one of the early founders of the social sciences...".
- "He is considered by some as a father of modern economics, or at least a major forerunner. The Western world recognizes Khaldun as the father of sociology but hesitates in recognizing him as a great economist who laid its very foundations. He was the first to systematically analyze the functioning of an economy, the importance of technology, specialization and foreign trade in economic surplus and the role of government and its stabilization policies to increase output and employment. Moreover, he dealt with the problem of optimum taxation, minimum government services, incentives, institutional framework, law and order, expectations, production, and the theory of value." Cosma, Sorinel (2009). "Ibn Khaldun's Economic Thinking". Ovidius University Annals of Economics (Ovidius University Press) XIV: 52–57.

References

- Saliba, George (1994). A History of Arabic Astronomy: Planetary Theories During the Golden Age of Islam. New York University Press. pp. 245, 250, 256–57. ISBN 0-8147-8023-7.
- King, David A. (1983). "The Astronomy of the Mamluks". Isis. 74 (4): 531–55. doi:10.1086/353360.
- Hassan, Ahmad Y. (1996). "Factors Behind the Decline of Islamic Science After the Sixteenth Century". In Sharifah Shifa Al-Attas. Islam and the Challenge of Modernity, Proceedings of the Inaugural Symposium on Islam and the Challenge of Modernity: Historical and Contemporary Contexts, Kuala Lumpur, August 1–5, 1994. International Institute of Islamic Thought and Civilization (ISTAC). pp. 351–99. Archived from the original on 2 April 2015.
- Medieval India. NCERT. ISBN 81-7450-395-1.
- Gregorian, Vartan (2003). Islam: A Mosaic, Not a Monolith. Brookings Institution Press. pp. 26–38. ISBN 0-8157-3283-X.
- Islamic Radicalism and Multicultural Politics. Taylor & Francis. 2011-03-01. p. 9. ISBN 978-1-136-95960-8. Retrieved 26 August 2012.
- Porter, Josias Leslie (1868). A Handbook for Travelers in Syria and Palestine. p. 49.
- "For six centuries and a half, through the golden age of Islam, lasted this Caliphate, till extinguished by the Osmanli sultans and in the death of the last of the blood of the house of Mahomet. The true Caliphate ended with the fall of Bagdad". New Outlook, Volume 45, 1892, p. 370.
- "the golden age of Islam, as Mr. Gilman points out, ended with Omar, the second of the Kalifs." The Literary World, Volume 36, 1887, p. 308.
- "The Ninth, Tenth and Eleventh centuries were the golden age of Islam." Life magazine, 9 May 1955.
- So Linda S. George, The Golden Age of Islam, 1998: "from the last years of the eighth century to the thirteenth century."
- Khan, Arshad (2003). Islam, Muslims, and America: Understanding the Basis of Their Conflict. p. 19.
- Groth, Hans, ed. (2012). Population Dynamics in Muslim Countries: Assembling the Jigsaw. Springer Science & Business Media. p. 45. ISBN 978-3-642-27881-5.
- Rafiabadi, Hamid Naseem, ed. (2007). Challenges to Religions and Islam: A Study of Muslim Movements, Personalities, Issues and Trends, Part 1. Sarup & Sons. p. 1141. ISBN 978-81-7625-732-9.
- Salam, Abdus (1994). Renaissance of Sciences in Islamic Countries. p. 9. ISBN 978-9971-5-0946-0.
- "In Our Time – Al-Kindi, James Montgomery". bbcnews.com. 28 June 2012. Archived from the original on 2014-01-14. Retrieved May 18, 2013.
- Brentjes, Sonja; Morrison, Robert G. (2010). "The Sciences in Islamic Societies". The New Cambridge History of Islam. 4. Cambridge: Cambridge University Press. p. 569.
- Hill, Donald (1993). Islamic Science and Engineering. Edinburgh University Press. p. 4. ISBN 0-7486-0455-3.
- "Nestorian – Christian sect". Archived from the original on 2016-10-28. Retrieved 2016-11-05.
- Rashed, Roshdi (2015). Classical Mathematics from Al-Khwarizmi to Descartes. Routledge. p. 33. ISBN 978-0-415-83388-2.
- "Hunayn ibn Ishaq – Arab scholar". Archived from the original on 2016-05-31. Retrieved 2016-07-12.
- Hussein, Askary. "Baghdad 767–1258 A.D.: Melting Pot for a Universal Renaissance". Executive Intelligence Review. Archived from the original on 2017-08-24.
- O'Leary, Delacy (1949). How Greek Science Passed On To The Arabs. Nature. 163. p. 748. Bibcode:1949Natur.163Q.748T. doi:10.1038/163748c0. ISBN 978-1-317-84748-9.
- Sarton, George. "History of Islamic Science". Archived from the original on 2016-08-12.
- Siraisi, Nancy G. (2001). Medicine and the Italian Universities, 1250–1600. Brill Academic Publishers. p. 134.
- Beeston, Alfred Felix Landon (1983). Arabic Literature to the End of the Umayyad Period. Cambridge University Press. p. 501. ISBN 978-0-521-24015-4. Retrieved 20 January 2011.
- "Compendium of Medical Texts by Mesue, with Additional Writings by Various Authors". World Digital Library. Archived from the original on 2014-03-04. Retrieved 2014-03-01.
- Griffith, Sidney H. (15 December 1998). "Eutychius of Alexandria". Encyclopædia Iranica. Archived from the original on 2017-01-02. Retrieved 2011-02-07.
- Contadini, Anna (2003). "A Bestiary Tale: Text and Image of the Unicorn in the Kitāb naʿt al-hayawān (British Library, or. 2784)". Muqarnas. 20: 17–33 (p. 17). JSTOR 1523325.
- Bonner, Michael; Ener, Mine; Singer, Amy (2003). Poverty and Charity in Middle Eastern Contexts. SUNY Press. p. 97. ISBN 978-0-7914-5737-5.
- Ruano, Eloy Benito; Burgos, Manuel Espadas (1992). 17e Congrès international des sciences historiques: Madrid, du 26 août au 2 septembre 1990. Comité international des sciences historiques. p. 527. ISBN 978-84-600-8154-8.
- Brague, Rémi. Assyrians Contributions to the Islamic Civilization. Archived 2013-09-27 at the Wayback Machine.
- Britannica, Nestorian. Archived 2014-03-30 at the Wayback Machine.
- Foster, John (1939). The Church of the T'ang Dynasty. Great Britain: Society for Promoting Christian Knowledge. p. 31. "The school was twice closed, in 431 and 489."
- The School of Edessa. Nestorian.org. Archived 2016-09-02 at the Wayback Machine.
- Frew, Donald. "Harran: Last Refuge of Classical Paganism". The Pomegranate: The International Journal of Pagan Studies. 13 (9): 17–29.
- "Harran University". Archived from the original on 2018-01-27.
- University of Tehran Overview/Historical Events. Archived 2011-02-03 at the Wayback Machine.
- Kaser, Karl. The Balkans and the Near East: Introduction to a Shared History. p. 135.
- Yazberdiyev, Almaz. Libraries of Ancient Merv. Archived 2016-03-04 at the Wayback Machine. Dr. Yazberdiyev is Director of the Library of the Academy of Sciences of Turkmenistan, Ashgabat.
- Hyman and Walsh. Philosophy in the Middle Ages. Indianapolis, 1973, p. 204.
- Meri, Josef W. and Jere L. Bacharach, eds. (2006). Medieval Islamic Civilization. Vol. 1, A–K, Index. p. 304.
- Lewis, Bernard (2004). From Babel to Dragomans: Interpreting the Middle East. Oxford University Press. p. 44.
- Kühnel, E., in Zeitschrift der deutschen morgenländischen Gesellschaft, Vol. CVI (1956).
- Khaldun, Ibn (1981). Muqaddimah. 1. Translated by Rosenthal, Franz. Princeton University Press. pp. 429–430.
- "In Our Time – Al-Kindi, Hugh Kennedy". bbcnews.com. 28 June 2012. Archived from the original on 2014-01-14. Retrieved May 18, 2013.
- "Islam's Gift of Paper to the West". Web.utk.edu. 2001-12-29. Archived from the original on 2015-05-03. Retrieved 2014-04-11.
- Dunn, Kevin M. (2003). Caveman Chemistry: 28 Projects, from the Creation of Fire to the Production of Plastics. Universal-Publishers. p. 166. ISBN 978-1-58112-566-5. Retrieved 2014-04-11.
- Berkey, Jonathan (2004). "Education". In Richard C. Martin. Encyclopedia of Islam and the Muslim World. MacMillan Reference USA.
- Lapidus, Ira M. (2014). A History of Islamic Societies. Cambridge University Press (Kindle edition). p. 210. ISBN 978-0-521-51430-9.
- Berkey, Jonathan Porter (2003). The Formation of Islam: Religion and Society in the Near East, 600–1800. Cambridge University Press. p. 227.
- Lapidus, Ira M. (2014). A History of Islamic Societies. Cambridge University Press (Kindle edition). p. 217. ISBN 978-0-521-51430-9.
- Hallaq, Wael B. (2009). An Introduction to Islamic Law. Cambridge University Press. p. 50.
- The Guinness Book of Records. 1998. p. 242. ISBN 0-553-57895-2.
- Halm, Heinz (1997). The Fatimids and Their Traditions of Learning. London: The Institute of Ismaili Studies and I.B. Tauris.
- Reid, Donald Malcolm (2009). "Al-Azhar". In John L. Esposito. The Oxford Encyclopedia of the Islamic World. Oxford: Oxford University Press. doi:10.1093/acref/9780195305135.001.0001. ISBN 978-0-19-530513-5.
- Lapidus, Ira M. (2014). A History of Islamic Societies. Cambridge University Press (Kindle edition). p. 125. ISBN 978-0-521-51430-9.
- Hallaq, Wael B. (2009). An Introduction to Islamic Law. Cambridge University Press. pp. 31–35.
- Vikør, Knut S. (2014). "Sharīʿah". In Emad El-Din Shahin. The Oxford Encyclopedia of Islam and Politics. Oxford University Press. Archived from the original on 2017-02-02. Retrieved 2017-07-30.
- Lapidus, Ira M. (2014). A History of Islamic Societies. Cambridge University Press (Kindle edition). p. 130. ISBN 978-0-521-51430-9.
- Calder, Norman (2009). "Law. Legal Thought and Jurisprudence". In John L. Esposito. The Oxford Encyclopedia of the Islamic World. Oxford: Oxford University Press. Archived from the original on 2017-07-31. Retrieved 2017-07-30.
- Ziadeh, Farhat J. (2009). "Uṣūl al-fiqh". In John L. Esposito. The Oxford Encyclopedia of the Islamic World. Oxford: Oxford University Press. doi:10.1093/acref/9780195305135.001.0001. ISBN 978-0-19-530513-5.
- Kamali, Mohammad Hashim (1999). "Law and Society". In John Esposito. The Oxford History of Islam. Oxford University Press (Kindle edition). pp. 121–22.
- Lapidus, Ira M. (2014). A History of Islamic Societies. Cambridge University Press (Kindle edition). pp. 130–31. ISBN 978-0-521-51430-9.
- Blankinship, Khalid (2008). "The Early Creed". In Tim Winter. The Cambridge Companion to Classical Islamic Theology. Cambridge University Press (Kindle edition). p. 53.
- Sonn, Tamara (2009). "Tawḥīd". In John L. Esposito. The Oxford Encyclopedia of the Islamic World. Oxford: Oxford University Press. doi:10.1093/acref/9780195305135.001.0001. ISBN 978-0-19-530513-5.
- Hasse, Dag Nikolaus (2014). "Influence of Arabic and Islamic Philosophy on the Latin West". Stanford Encyclopedia of Philosophy. Archived from the original on 2017-10-20. Retrieved 2017-07-31.
- "In Our Time: Existence". bbcnews.com. 8 November 2007. Archived from the original on 2013-10-17. Retrieved 27 March 2013.
- Boyer, Carl B. (1985). A History of Mathematics. Princeton University Press. p. 252.
- Gandz, S. (1936). "The Sources of al-Khwarizmi's Algebra". Osiris. i: 263–277.
- https://eclass.uoa.gr/modules/document/file.php/MATH104/20010-11/HistoryOfAlgebra.pdf: "The first true algebra text which is still extant is the work on al-jabr and al-muqabala by Mohammad ibn Musa al-Khwarizmi, written in Baghdad around 825".
- Esposito, John L. (2000-04-06). The Oxford History of Islam. Oxford University Press. p. 188. ISBN 978-0-19-988041-6.
- Mathematical Masterpieces: Further Chronicles by the Explorers, p. 92.
- O'Connor, John J.; Robertson, Edmund F. "Sharaf al-Din al-Muzaffar al-Tusi". MacTutor History of Mathematics archive. University of St Andrews.
- Katz, Victor J.; Barton, Bill (October 2007). "Stages in the History of Algebra with Implications for Teaching". Educational Studies in Mathematics. 66 (2): 185–201. doi:10.1007/s10649-006-9023-7.
- Lu, Peter J.; Steinhardt, Paul J. (2007). "Decagonal and Quasi-crystalline Tilings in Medieval Islamic Architecture". Science. 315 (5815): 1106–10. Bibcode:2007Sci...315.1106L. doi:10.1126/science.1135491. PMID 17322056.
- "Advanced geometry of Islamic art". bbcnews.com. 23 February 2007. Archived from the original on 2013-02-19. Retrieved July 26, 2013.
- Ball, Philip (22 February 2007). "Islamic tiles reveal sophisticated maths". News@nature. doi:10.1038/news070219-9. Archived from the original on 2013-08-01. Retrieved July 26, 2013. "Although they were probably unaware of the mathematical properties and consequences of the construction rule they devised, they did end up with something that would lead to what we understand today to be a quasi-crystal."
- "Nobel goes to scientist who knocked down 'Berlin Wall' of chemistry". cnn.com. 16 October 2011. Archived from the original on 2014-04-13. Retrieved July 26, 2013. - Castera, Jean Marc; Peuriot, Francoise (1999). Arabesques. Decorative Art in Morocco. Art Creation Realisation. ISBN 978-2-86770-124-5. - van den Hoeven, Saskia, van der Veen, Maartje. "Muqarnas-Mathematics in Islamic Arts" (PDF). Archived from the original (PDF) on 27 September 2013. Retrieved 15 January 2016.CS1 maint: Multiple names: authors list (link) - "Abu Abd Allah Muhammad ibn Muadh Al-Jayyani". University of St.Andrews. Archived from the original on 2017-01-02. Retrieved 27 July 2013. - Katz, Victor J. (1995). "Ideas of Calculus in Islam and India". Mathematics Magazine. 68 (3): 163–74 [165–69, 173–74]. Bibcode:1975MathM..48...12G. doi:10.2307/2691411. JSTOR 2691411. - El-Bizri, Nader, "A Philosophical Perspective on Ibn al-Haytham's Optics", Arabic Sciences and Philosophy 15 (2005-08-05), 189–218 - Haq, Syed (2009). "Science in Islam". Oxford Dictionary of the Middle Ages. ISSN 1703-7603. Retrieved 2014-10-22. - Sabra, A.I. (1989). The Optics of Ibn al-Haytham. Books I–II–III: On Direct Vision. London: The Warburg Institute, University of London. pp. 25–29. ISBN 0-85481-072-2. - Toomer, G.J. (1964). "Review: Ibn al-Haythams Weg zur Physik by Matthias Schramm". Isis. 55 (4): 463–65. doi:10.1086/349914. - Al-Khalili, Jim (2009-01-04). "BBC News". BBC News. Archived from the original on 2015-05-03. Retrieved 2014-04-11. - "The Islamic roots of modern pharmacy". aramcoworld.com. Archived from the original on 2016-05-18. Retrieved 2016-05-28.[better source needed] - Hajar, R (2013). "The Air of History (Part IV): Great Muslim Physicians Al Rhazes". Heart Views. 14 (2): 93–95. doi:10.4103/1995-705X.115499. PMC 3752886. PMID 23983918. - Henbest, N.; Couper, H. (1994). The guide to the galaxy. p. 31. ISBN 978-0-521-45882-5. - Craig G. Fraser, 'The cosmos: a historical perspective', Greenwood Publishing Group, 2006 p. 39 - George Saliba, 'Revisiting the Astronomical Contacts Between the World of Islam and Renaissance Europe: The Byzantine Connection', 'The occult sciences in Byzantium', 2006, p. 368 - J J O'Connor; E F Robertson (1999). "Abu Arrayhan Muhammad ibn Ahmad al-Biruni". MacTutor History of Mathematics archive. University of St Andrews. Archived from the original on 21 November 2016. Retrieved 17 July 2017. - Felix Klein-Frank (2001) Al-Kindi. In Oliver Leaman & Hossein Nasr. History of Islamic Philosophy. London: Routledge. page 174 - Pingree, David (1985). "Bīrūnī, Abū Rayḥān iv. Geography". Encyclopaedia Iranica. Columbia University. ISBN 978-1-56859-050-9. - West, John (2008). "Ibn al-Nafis, the pulmonary circulation, and the Islamic Golden Age". Journal of Applied Physiology. 105 (6): 1877–80. doi:10.1152/japplphysiol.91171.2008. PMC 2612469. PMID 18845773. Archived from the original on 2014-09-06. Retrieved 28 May 2014. - Souayah, N; Greenstein, JI (2005). "Insights into neurologic localization by Rhazes, a medieval Islamic physician". Neurology. 65 (1): 125–28. doi:10.1212/01.wnl.0000167603.94026.ee. PMID 16009898. - Zirkle, Conway (25 April 1941). "Natural Selection before the "Origin of Species"". Proceedings of the American Philosophical Society. 84 (1): 71–123. JSTOR 984852. - Farid Alakbarov (Summer 2001). A 13th-Century Darwin? Tusi's Views on Evolution Archived 2010-12-13 at the Wayback Machine, Azerbaijan International 9 (2). - "Rediscovering Arabic Science". Saudi Aramco Magazine. 
- Koetsier, Teun (2001). "On the prehistory of programmable machines: musical automata, looms, calculators". Mechanism and Machine Theory. 36 (5): 589–603. doi:10.1016/S0094-114X(01)00005-2.
- Banu Musa (authors); Hill, Donald Routledge (translator) (1979). The Book of Ingenious Devices (Kitāb al-ḥiyal). Springer. pp. 76–77. ISBN 978-90-277-0833-5.
- Spengler, Joseph J. (1964). "Economic Thought of Islam: Ibn Khaldun". Comparative Studies in Society and History. 6 (3): 268–306. JSTOR 177577.
- Boulakia, Jean David C. (1971). "Ibn Khaldûn: A Fourteenth-Century Economist". Journal of Political Economy. 79 (5): 1105–18. JSTOR 1830276.
- Savage-Smith, Emilie; Klein-Franke, F.; Zhu, Ming (2012). "Ṭibb". In P. Bearman, Th. Bianquis, C.E. Bosworth, E. van Donzel, W.P. Heinrichs. Encyclopaedia of Islam (2nd ed.). Brill. doi:10.1163/1573-3912_islam_COM_1216.
- "The Islamic Roots of the Modern Hospital". aramcoworld.com. Archived from the original on 2017-03-21. Retrieved 20 March 2017.
- Rise and Spread of Islam. Gale. 2002. p. 419. ISBN 978-0-7876-4503-8.
- Alatas, Syed Farid (2006). "From Jami'ah to University: Multiculturalism and Christian–Muslim Dialogue". Current Sociology. 54 (1): 112–32. doi:10.1177/0011392106058837.
- "Pioneer Muslim Physicians". aramcoworld.com. Archived from the original on 2017-03-21. Retrieved 20 March 2017.
- Adler, Philip; Pouwels, Randall (2007). World Civilizations. Cengage Learning. p. 198. ISBN 978-1-111-81056-6. Retrieved 1 June 2014.
- Şehsuvaroǧlu, Bedi N. (2012-04-24). "Bīmāristān". In P. Bearman; Th. Bianquis; C.E. Bosworth; et al. Encyclopaedia of Islam (2nd ed.). Archived from the original on 2016-09-20. Retrieved 5 June 2014.
- Rodini, Mohammad Amin (7 July 2012). "Medical Care in Islamic Tradition During the Middle Ages" (PDF). International Journal of Medicine and Molecular Medicine. Archived (PDF) from the original on 2013-10-25. Retrieved 9 June 2014.
- "Abu Bakr Mohammad Ibn Zakariya al-Razi (Rhazes) (c. 865–925)". sciencemuseum.org.uk. Archived from the original on 2015-05-06. Retrieved May 31, 2015.
- "Rhazes Diagnostic Differentiation of Smallpox and Measles". ircmj.com. Archived from the original on August 15, 2015. Retrieved May 31, 2015.
- Cosman, Madeleine Pelner; Jones, Linda Gale (2008). Handbook to Life in the Medieval World. Handbook to Life Series. 2. Infobase Publishing. pp. 528–30. ISBN 978-0-8160-4887-8.
- Elgood, Cyril (1951). A Medical History of Persia and the Eastern Caliphate. Cambridge University Press. p. 3.
- Mangathayaru, K. (2013). Pharmacognosy: An Indian Perspective. Pearson Education. p. 54. ISBN 978-93-325-2026-4.
- Lock, Stephen (2001). The Oxford Illustrated Companion to Medicine. Oxford University Press. p. 607. ISBN 978-0-19-262950-0.
- Brown, Jonathan A.C. (2014). Misquoting Muhammad: The Challenge and Choices of Interpreting the Prophet's Legacy. Oneworld Publications. p. 12. ISBN 978-1-78074-420-9.
- Ahmad, Z. (St Thomas' Hospital) (2007). "Al-Zahrawi – The Father of Surgery". ANZ Journal of Surgery. 77 (Suppl. 1): A83. doi:10.1111/j.1445-2197.2007.04130_8.x.
- Ignjatovic, M. (2003). "Overview of the history of thyroid surgery". Acta Chir Iugosl. 50: 9–36.
- "History of the caravel". Nautarch.tamu.edu. Archived from the original on 2015-05-03. Retrieved 2011-04-13.
- "Islam in China". bbcnews.com. 2 October 2002. Archived from the original on 2016-01-06. Retrieved 13 July 2016.
- Haviland, Charles (2007-09-30). "The roar of Rumi – 800 years on". BBC News. Archived from the original on 2012-07-30. Retrieved 2011-08-10.
- "Islam: Jalaluddin Rumi". BBC. 2009-09-01. Archived from the original on 2011-01-23. Retrieved 2011-08-10.
- Badeau, John Stothoff; Hayes, John Richard (1983). The Genius of Arab Civilization: Source of Renaissance. Taylor & Francis. p. 104. ISBN 978-0-262-08136-8. Retrieved 2014-04-11.
- "Great Mosque of Kairouan (Qantara Mediterranean Heritage)". Qantara-med.org. Archived from the original on 2015-02-09. Retrieved 2014-04-11.
- Cooper, William W.; Yue, Piyu (2008). Challenges of the Muslim World: Present, Future and Past. Emerald Group Publishing. p. 215. ISBN 978-0-444-53243-5. Retrieved 2014-04-11.
- El-Rouayheb, Khaled (2015). Islamic Intellectual History in the Seventeenth Century: Scholarly Currents in the Ottoman Empire and the Maghreb. Cambridge: Cambridge University Press. pp. 1–10. ISBN 978-1-107-04296-4.
- "Religion and the Rise and Fall of Islamic Science". scholar.harvard.edu. Archived from the original on 2015-12-22. Retrieved 2015-12-20.
- El-Rouayheb, Khaled (2008). "The Myth of 'The Triumph of Fanaticism' in the Seventeenth-Century Ottoman Empire". Die Welt des Islams. 48: 196–221.
- El-Rouayheb, Khaled (2006). "Opening the Gate of Verification: The Forgotten Arab-Islamic Florescence of the 17th Century". International Journal of Middle East Studies. 38: 263–81.
- "Mokyr, J.: A Culture of Growth: The Origins of the Modern Economy (eBook and Hardcover)". press.princeton.edu. p. 67. Archived from the original on 2017-03-24. Retrieved 2017-03-09.
- "The Fountain Magazine – Issue – Did al-Ghazali Kill the Science in Islam?". www.fountainmagazine.com. Archived from the original on 2015-04-30. Retrieved 2018-03-08.
- Gates, Warren E. (1967). "The Spread of Ibn Khaldûn's Ideas on Climate and Culture". Journal of the History of Ideas. 28 (3): 415–22. doi:10.2307/2708627. JSTOR 2708627.
- Dhaouadi, M. (1 September 1990). "Ibn Khaldun: The Founding Father of Eastern Sociology". International Sociology. 5 (3): 319–35. doi:10.1177/026858090005003007.
- Haddad, L. (1 May 1977). "A Fourteenth-Century Theory of Economic Growth and Development". Kyklos. 30 (2): 195–213. doi:10.1111/j.1467-6435.1977.tb02006.x.
- Makdisi, George (1982). "Scholasticism and Humanism in Classical Islam and the Christian West". Journal of the American Oriental Society. 109 (2).
- Meri, Josef W. (2005). Medieval Islamic Civilization: An Encyclopedia. Routledge. p. 1088. ISBN 0-415-96690-6.
- Sonn, Tamara (2011). Islam: A Brief History. Wiley. pp. 39–79. ISBN 978-1-4443-5898-8 (online copy, p. 39, at Google Books).
- Lombard, Maurice (1975). The Golden Age of Islam. American Elsevier.
- Atiyeh, George Nicholas; Hayes, John Richard (1992). The Genius of Arab Civilization. New York University Press. p. 306. ISBN 0-8147-3485-5, 978-0-8147-3485-8.
- Falagas, M. E.; Zarkadoulia, Effie A.; Samonis, George (1 August 2006). "Arab science in the golden age (750–1258 C.E.) and today". The FASEB Journal. 20 (10): 1581–86. doi:10.1096/fj.06-0803ufm. PMID 16873881.
- Starr, S. Frederick (2015). Lost Enlightenment: Central Asia's Golden Age from the Arab Conquest to Tamerlane. Princeton University. ISBN 978-0-691-16585-1.
- Allsen, Thomas T. (2004). Culture and Conquest in Mongol Eurasia. Cambridge University Press. ISBN 978-0-521-60270-9.
- Fernandez-Morera, Dario (2015). The Myth of the Andalusian Paradise: Muslims, Christians, and Jews under Islamic Rule in Medieval Spain. ISI Books. ISBN 978-1-61017-095-6 (hardback).

External links

- Islamicweb.com: History of the Golden Age
- Khamush.com: Baghdad: Metropolis of the Abbasid Caliphate – Chapter 5, by Gaston Wiet.
- U.S. Library of Congress: The Kirkor Minassian Collection – contains examples of Islamic book bindings.
US History/Eisenhower Civil Rights

Civil Rights Movement under Eisenhower and Desegregation

The first events that would spark the entire civil rights movement happened during the Eisenhower administration. In the South, many statewide laws segregated public facilities ranging from buses to water fountains. Southern African Americans now felt that their time had come to enjoy American democracy, and they fought hard to end Southern segregation policies.

Brown v. Board of Education

In 1952, seven-year-old Linda Brown, of Topeka, Kansas, wasn't permitted to attend a white-only elementary school that was only a few blocks from her house. To attend her coloreds-only school, Brown had to cross dangerous railroad tracks and take a bus for many miles. Her family sued the Topeka school board and lost, but appealed the case all the way to the Supreme Court. Brown v. Board of Education of Topeka, Kansas came to the Supreme Court in December 1952. In his arguments, the head lawyer for the NAACP, Thurgood Marshall, challenged the "separate but equal" doctrine established in Plessy v. Ferguson in 1896. He argued that schools could be separate, but never equal. On May 17, 1954, the Court gave its opinion. It ruled that it was unconstitutional to segregate schools, and ordered that schools integrate "with all deliberate speed."

Central High Confrontation

Integration would not be easy. Many school districts accepted the order without argument, but some, like the district of Little Rock, Arkansas, did not. On September 2, 1957, the day before the start of the school term, the Arkansas governor, Orval Faubus, instructed the National Guard to stop any black students from entering the school. He claimed this was to protect the property against violence planned by integration protesters. The federal authorities intervened, an injunction was granted preventing the National Guard from blocking the school, and the Guard was withdrawn on September 20. School restarted on September 23, with the building surrounded by local police officers and nearly one thousand protesters. The police escorted nine black students, later known as the Little Rock Nine, into the school via a side door. When the crowd discovered the students had entered the building, they tried to storm the school, and the black students were hurried out around lunch time. Congressman Brooks Hays and the Little Rock mayor, Woodrow Mann, asked the federal government for more help. On September 24, Mann sent a message to President Eisenhower requesting troops. Eisenhower responded immediately, and the 101st Airborne Division was sent to Arkansas. In addition, the President brought the Arkansas National Guard under federal control to prevent its further use by the governor. On September 25, 1957, the nine black students finally began their education properly, protected by 1,000 paratroopers.

Montgomery Bus Boycott

On December 1, 1955, Rosa Parks, a seamstress and secretary of the Montgomery, Alabama, chapter of the NAACP, boarded a city bus with the intention of going home. She sat in the first row of seats in the "colored" section of the segregated bus. At the next stop, whites were among the passengers waiting to board, but all the seats in the "white" section were taken; by local practice, the driver could move the line dividing the black and white sections to accommodate the racial makeup of the passengers at any given moment.
So he ordered the four blacks sitting in the first row of seats in the "colored" section to stand and move to the rear of the bus so the waiting whites could have those seats. Three of the passengers complied; Mrs. Parks did not. Warned again by the driver, she still refused to move, at which point the driver exited the bus and located a policeman, who came onto the bus, arrested Mrs. Parks, and took her to the city jail. She was booked for violating the segregation ordinance and was shortly released on bail posted by E. D. Nixon, the leading local civil rights activist. She was scheduled to appear in municipal court on December 5, 1955. Mistreatment of African Americans on Montgomery's segregated buses was not uncommon, and several other women had been arrested in similar situations in the months preceding Parks's. However, Mrs. Parks was especially well-known and well-respected within the black community, and her arrest particularly angered the African Americans of Montgomery. In protest, community leaders quickly organized a one-day boycott of the buses to coincide with her December 5 court date. An organization, the Montgomery Improvement Association, was also created, and the new minister of the Dexter Avenue Baptist Church, the 26-year-old Martin Luther King, Jr., was selected as the MIA's president. Word of the boycott spread effectively through the city over the weekend of December 3–4, aided by mimeographed fliers prepared by the Women's Political Council, by announcements in black churches that Sunday morning, and by an article in the local newspaper about the pending boycott, which had been "leaked" to a reporter by E. D. Nixon. On the morning of Mrs. Parks's trial, King, Nixon, and other leaders were pleasantly surprised to see that the boycott was almost 100 percent effective among blacks. And since African Americans made up 75% of Montgomery's bus riders, the impact was significant. In city court, Mrs. Parks was convicted and was fined $10. Her attorney, the 24-year-old Fred D. Gray, announced an appeal. That night, more than 5,000 blacks crowded into and around the Holt Street Baptist Church for a "mass meeting" to discuss the situation. For most in the church (and listening outside over loudspeakers), it was the first time they had heard the oratory of Martin Luther King, Jr. He asked the crowd if they wanted to continue the boycott indefinitely, and the answer was a resounding yes. For the next 381 days, African Americans boycotted the buses, while the loss of their fares drove the Chicago-owned bus company into deeper and deeper losses. However, segregationist city officials prohibited the bus company from altering its seating policies, and negotiations between black leaders and city officials went nowhere. With bikes, carpools, and hitchhiking, African Americans were able to minimize the impact of the boycott on their daily lives. Meanwhile, whites in Montgomery responded with continued intransigence and rising anger. Several black churches and the homes of local leaders and ministers, including those of Nixon and King, were bombed, and there were numerous assaults by white thugs on African Americans. Some 88 local black leaders were also arrested for violating an old anti-boycott law. Faced with the failure of negotiations, attorney Gray soon filed a separate lawsuit in federal court challenging the constitutionality of the segregated seating laws. The case was assigned to and testimony was heard by a three-judge panel, and the young Frank M.
Johnson, Jr., newly appointed to the federal bench by Republican President Dwight D. Eisenhower, was given the responsibility for writing the opinion in the case. Johnson essentially ruled that, in light of the 1954 Brown v. Board of Education decision by the U.S. Supreme Court, there was no way to legally justify the segregation policies, and the district court's ruling overturned the local segregation ordinance under which Mrs. Parks and others had been arrested. The city appealed, but the U.S. Supreme Court affirmed the lower court ruling, and in December 1956, city officials had no choice but to comply. The year-long boycott thus came to an end. The Montgomery Bus Boycott made Mrs. Parks famous, and it launched the civil rights careers of King and his friend and fellow local minister, Ralph Abernathy. The successful boycott is regarded by many historians as the effective beginning of the twentieth-century civil rights movement in the U.S.

In addition to his desire to halt the advance of "creeping socialism" in U.S. domestic policy, Eisenhower also wanted to "roll back" the advances of Communism abroad. After taking office in 1953, he devised a new foreign policy tactic to contain the Soviet Union and even win back territory that had already been lost. Devised primarily by Secretary of State John Foster Dulles, this so-called New Look foreign policy proposed the use of nuclear weapons and new technology rather than ground troops and conventional bombs, all in an effort to threaten "massive retaliation" against the USSR for Communist advances abroad. In addition to intimidating the Soviet Union, this emphasis on new and cheaper weapons would also drastically reduce military spending, which had escalated rapidly during the Truman years. As a result, Eisenhower managed to stabilize defense spending, keeping it at roughly half of the federal budget during most of his eight years in office. The doctrine of massive retaliation proved to be dangerously flawed, however, because it effectively left Eisenhower without any options other than nuclear war to combat Soviet aggression. This dilemma surfaced in 1956, for instance, when the Soviet Union brutally crushed a popular democratic uprising in Hungary. Despite Hungary's request for American recognition and military assistance, Eisenhower's hands were tied because he knew that the USSR would stop at nothing to maintain control of Eastern Europe. He could not risk turning the Cold War into a nuclear war over the interests of a small nation such as Hungary.

The Warsaw Pact and NATO

1955 saw the division of Europe into two rival camps. The Western countries of the free world had signed the North Atlantic Treaty in 1949, and the eastern European countries signed the Warsaw Pact. The North Atlantic Treaty Organization was created as a response to the crisis in Berlin. The United States, Britain, Canada, France, Portugal, Italy, Belgium, Luxembourg, Norway, Denmark, Iceland, and the Netherlands founded NATO in April 1949, and Greece, Turkey, and West Germany had joined by 1955. The member countries agreed that "an armed attack against one or more of [the member states] in Europe or North America shall be considered an attack against them all"; the alliance was created so that if the Soviet Union ever did invade Europe, the invaded countries would have the most powerful army in the world (the United States Army) come to their defense.
When the Korean War broke out, NATO drastically raised its threat level because of the idea that all the communist countries were working together. As the number of communist countries grew, so did NATO's forces. Greece and Turkey joined NATO in 1952. The USSR eventually proposed joining NATO itself in the name of peace, but NATO declined, believing that the Soviet Union would try to weaken the alliance from the inside.

The Warsaw Pact

The Soviet Union responded to the addition of West Germany to NATO in 1955 with its own treaty system, known as the Warsaw Pact (formally, the Treaty of Friendship, Co-operation and Mutual Assistance). The Warsaw Pact allowed East Germany, Poland, Czechoslovakia, Hungary, Albania, Romania, and Bulgaria to function in the same way as the NATO countries did. The Soviet Union used the Warsaw Pact to combine the member states' military forces under its command. The Pact was supposed to make all the countries in it equal, but the Soviet Union took advantage of it, directing the allied countries' militaries wherever it wanted. Unlike NATO's forces, Warsaw Pact forces were actually used on occasion.

As an alternative, Eisenhower employed the CIA to tackle the specter of Communism in developing countries outside the Soviet Union's immediate sphere of influence. Newly appointed CIA director Allen Dulles (the secretary of state's brother) took enormous liberties in conducting a variety of covert operations. Thousands of CIA operatives were assigned to Africa, Asia, Latin America, and the Middle East and attempted to launch coups, assassinate heads of state, arm anti-Communist revolutionaries, spread propaganda, and support despotic pro-American regimes. Eisenhower began to favor using the CIA instead of the military because covert operations didn't attract as much attention and cost much less money. A CIA-sponsored coup in Iran in 1953, however, did attract attention and heavy criticism from liberals both at home and in the international community. Eisenhower and the Dulles brothers authorized the coup in Iran when the Iranian government seized control of the British-owned Anglo-Iranian Oil Company. Afraid that the popular, nationalist, Soviet-friendly prime minister of Iran, Mohammed Mossadegh, would cut off oil exports to the United States, CIA operatives convinced military leaders to overthrow Mossadegh and restore Mohammed Reza Shah Pahlavi as head of state in 1953. Pahlavi returned control of Anglo-Iranian Oil to the British and then signed agreements to supply the United States with almost half of all the oil drilled in Iran. The following year, a similar coup in Guatemala over agricultural land rights also drew international criticism and severely damaged U.S.–Latin American relations.

All men are created equal. They are endowed by their Creator with certain inalienable rights, among them are Life, Liberty, and the pursuit of Happiness. —Hồ Chí Minh, Declaration of Independence of the Democratic Republic of Vietnam, 1945.

In 1945 many colonies, including French Indochina, hoped for independence following the war. When Japan surrendered to the Allies, a Vietnamese man, Ho Chi Minh, declared independence for Vietnam in Hanoi, quoting America's own Declaration of Independence in the very first lines of his speech in hopes of gaining American support for a Vietnam free from French rule. However, with containing communism in Europe seen as the more important issue, America took a stance of neutrality from 1946 to 1950.
In the early 1950s, Vietnam was rebelling against French rule. America saw Vietnam as a potential source of trouble, as rebels (known as the Việt Minh) led by Communist leader Ho Chi Minh were gaining strength. America loaned France billions of dollars to aid in the war against the Vietnamese rebels, but despite the aid, France found itself on the verge of defeat and appealed to America for troops. America refused, fearing entanglement in another costly Korean War, or even a war with all of communist Asia. France surrendered, and the Việt Minh and France met in Geneva, Switzerland, to negotiate a treaty. Vietnam was divided into two countries: the Việt Minh in control of the North and the French-friendly Vietnamese in control of the South. Under the agreement, the two countries were to be reunited through free elections in 1956.

Eisenhower worried about South Vietnam. He believed that if it also fell to the Communists, many other Southeast Asian countries would follow, in what he called the domino theory. He aided the Southern government and set up the Southeast Asia Treaty Organization (SEATO) in 1954. The nations included in the alliance were America, Great Britain, France, Australia, Pakistan, the Philippines, New Zealand, and Thailand, and they all pledged to fight against "Communist aggressors".

In 1958 and 1959, anti-American feeling became a part of the growing Cuban revolution. In January 1959, the dictator of Cuba, Fulgencio Batista, was overthrown by the rebel leader Fidel Castro, who promptly became the leader of Cuba. At first, America supported Castro because of his promises of democratic and economic reforms. But relations between the two countries became strained when Cuba began seizing foreign-owned land (which was mostly U.S. owned) as part of its reforms. Soon, Castro's government was a dictatorship backed by the Soviet Union. In 1961, Eisenhower cut diplomatic ties with Cuba, and relations with the island nation have been difficult ever since.

Back in 1948, Israel was created as a sanctuary of sorts for the displaced Jews of the Holocaust. At the same time, many Arabs living in the area were displaced. Tensions had been high in the Middle East ever since Israel had been attacked just after its founding. The stage was set for superpower involvement in 1956: the United States backed Israel, the Soviet Union backed the Arab states, and the Egyptian president Gamal Abdel Nasser had nationalized, or brought under Egypt's control, the Suez Canal, which had previously been run by British and French interests. France and Britain worried that Egypt would decide to cut off oil shipments between the oil-rich Middle East and western Europe, so that October they invaded Egypt, hoping to overthrow Nasser and seize the canal. Israel, upset by earlier attacks by Arab states, agreed to help in the invasion. U.S. and Soviet reactions to the invasion were almost immediate. The Soviets threatened to launch rocket attacks on British and French cities, and the United States sponsored a United Nations resolution calling for British and French withdrawal. Facing pressure from the two powers, the three invaders pulled out of Egypt. To ensure stability in the area, United Nations troops were sent to patrol the Egypt-Israel border.

The Space Race has its origins in an arms race between the United States and the Soviet Union. After the Soviet Union tested its first atomic bomb in September 1949, the fear of nuclear war began to spread.
The ensuing arms race led to the creation of intercontinental ballistic missiles (ICBMs), long-range rockets designed to deliver nuclear warheads from land- or submarine-based launch sites. The first successful ICBM test flight was on August 21, 1957, when the USSR launched an ICBM with a dummy payload over 4,000 miles to an isolated peninsula on Russia's east coast that had been declared a military zone (the Kamchatka Peninsula, which remained closed to civilians from 1945 to 1989). On October 4, 1957, the Soviets successfully put the first man-made satellite, Sputnik, into orbit. Americans were horrified. They feared that the Soviets were using the satellite to spy on Americans, or even worse, that the Soviet Union might attack America with nuclear weapons from space. America responded with the launch of its own satellite, Vanguard. Hundreds of spectators gathered, only to watch the satellite rise a few feet off the launch pad and then explode. The failure spurred the government to create a space agency, the National Aeronautics and Space Administration (NASA). The United States succeeded in launching its first satellite, Explorer 1, in early 1958, and thus the Space Race was initiated. With the creation of Project Mercury, a program to put an astronaut in space, America was pulling ahead. Nonetheless, the USSR was the first to put a man in space, when Yuri Gagarin was launched into orbit in 1961. For the next 14 years, the U.S. and the Soviet Union would continue to compete in space. Many Americans were frightened at the start of the Space Race because it suggested that the Soviet Union had a better ability to launch a surprise attack on the United States. The United States, of course, immediately jumped on the bandwagon and did its best to become the first country to land on the Moon.

Alaska and Hawaii were admitted as states during the Eisenhower administration. Eisenhower established the Interstate Highway System in 1956 to bolster the economy and to facilitate rapid mobilization of defense forces in the event of an emergency. At the end of his administration, Eisenhower warned of the dangers of a military-industrial complex, fearing undemocratic influence from a rapidly growing defense sector.

Everyday life in the 1950s

With the rise of television ownership came the rise of television shows. Emerging genres like sitcoms (situational comedies), talk shows, and game shows enjoyed widespread popularity. Easy-to-cook, ready-made meals called TV dinners were introduced during this time. The construction of the interstate system firmly established a car culture in America. Fast food had existed prior to the 1950s, but that decade was when it really became a phenomenon, as franchising locations of popular restaurants became common, eventually leading to the formation of chains with uniform quality and items nationwide. In the 1950s, pizza became a common staple outside of the Italian-American community, spurred by American troops returning from Italy, references to pizza in popular culture, and the establishment of chain pizza restaurants.

Rise of the Middle Class

The late 1940s and 1950s saw a rise of the middle class in the United States. With troops coming home from the war, soldiers were quick to start families. The GI Bill allowed for upward mobility for veterans: with free college and over four billion dollars in benefits paid to veterans, the economy continued to grow. With war savings and a host of new consumer goods on the market, America quickly turned into a consumer market.
The best examples of this would be automobiles and the television. In 1955, $65 billion was spent on automobiles, representing 20% of the Gross National Product. In 1950, 50% of American homes had a television; by 1960 this number had risen to 90%.

Rock and Roll

The term "rock and roll" was originally a nautical phrase referring to the motion of a ship at sea. In the early 20th century, it gained a religious connotation (referring to the sense of rapture felt by worshippers) and was used in spirituals. After this, "rocking and rolling" increasingly became used as a metaphor for sex in blues and jazz songs. The origins of rock and roll lie primarily in electric blues from Chicago in the late 1940s, which was distinguished by amplification of the guitar, bass, and drums. Electric blues was played by artists like Muddy Waters, Willie Dixon, and Buddy Guy, who were recorded by Leonard and Phil Chess at Chess Records in Chicago. They inspired electric blues artists in Memphis like Howlin' Wolf and B.B. King, who were recorded by a Memphis-based record producer named Sam Phillips, also the owner of Sun Records. He later discovered Elvis Presley in 1954, and he also recorded early songs by Jerry Lee Lewis, Johnny Cash, Roy Orbison, and Carl Perkins. Rhythm and blues artists such as Little Richard, Chuck Berry, Bo Diddley, and Ray Charles incorporated electric blues as well as gospel music. In the early 50s, R&B was more commonly known by the blanket term "race music", which was also used to describe other African-American music of the era such as jazz and blues. Billboard did not replace the "race records" category with "rhythm and blues" until 1949. Doo-wop was a mainstream style of R&B, with arrangements favoring vocal harmonies. Other types of music that contributed to early rock and roll include African-American spirituals, also known as gospel music, and country/folk, which was primarily made by poor whites in the South.

Arguably, the first rock and roll song ever made is "Rocket 88", recorded at Sam Phillips's studio in Memphis in March 1951. It was credited to "Jackie Brenston and his Delta Cats," a band that didn't actually exist; the song was put together by Ike Turner and his band, the Kings of Rhythm. Jackie Brenston, the vocalist on the song, also played saxophone in the band. What really sets "Rocket 88" apart is the distorted guitar sound: it was one of the first examples of fuzz guitar ever recorded. The amplifier the band used to record the song was damaged on the way from Mississippi to Memphis. They tried to hold the cone in place by stuffing the amplifier with newspaper, which created the distortion. Sam Phillips liked the sound and decided to keep it in the song. Although "Rocket 88" was recorded by Sam Phillips, it wasn't released by Sun Records, which didn't exist until 1952. From 1950 to 1952, Phillips ran the Memphis Recording Service, where he would let amateurs perform and then sell the recordings to large record labels. He sold "Rocket 88" to Chess Records, which released predominantly blues, gospel, and R&B. The Chess brothers started Checker Records in 1952 because radio stations would only play a certain number of tracks from each label.

Alan Freed (also known as "Moondog") was a radio DJ who started playing R&B records on WJW in Cleveland in 1951. He is credited with introducing rock and roll to a wide audience for the first time, as well as being the first to use the phrase "rock and roll" as the name of the genre.
He also promoted and helped organize the first major rock and roll concert, the Moondog Coronation Ball, which occurred on March 21, 1952. The concert was so successful that it became massively overcrowded; there was a near-riot, and it had to be shut down early. Freed's popularity soared, and he was immediately given more airtime by the radio station. His promotion of rock 'n' roll is one of the main reasons it became successful, and in recognition of his contributions to the genre, the Rock and Roll Hall of Fame was built in Cleveland.

"Payola" refers to the practice of record company promoters paying radio DJs to play their recordings in order to boost their sales. Payola had been commonplace since the vaudeville era in the 1920s, but it became a scandal in the 1950s due to a conflict between the American Society of Composers, Authors, and Publishers (ASCAP) and radio stations. Prior to 1940, ASCAP had made huge amounts of money from the sales of sheet music, but when radio started gaining popularity, recorded music became more profitable than sheet music. ASCAP demanded large royalty payments from radio stations that played its members' recordings. Instead, stations boycotted ASCAP recordings and created their own publishing company called Broadcast Music Incorporated (BMI). ASCAP tended to ignore music composed by black musicians or "hillbillies", which gave BMI control of these areas. As rock 'n' roll became more and more popular, BMI became more and more successful. ASCAP (in addition to many others) believed that rock 'n' roll was the music of the devil, that it was brainwashing teenagers, and that it would never have been successful without payola. This was just after the quiz show scandal (when it was found that certain shows were rigged), and ASCAP urged the House legislative committee which had investigated that scandal to look into payola. The hearings that followed destroyed Alan Freed's career, although they did not eliminate rock 'n' roll altogether as ASCAP had hoped.

Several factors contributed to the decline of early rock and roll. Chuck Berry and Jerry Lee Lewis were both caught up in scandals involving young women. Elvis Presley was inducted into the U.S. Army in 1958, and after training at Fort Hood, he joined the 3rd Armored Division in Germany, where he would remain until 1960. Three rock and roll musicians (Buddy Holly, Ritchie Valens, and "The Big Bopper") died in a plane crash on February 3, 1959, "The Day the Music Died". Little Richard retired from secular music after a religious experience. He ran a ministry in Los Angeles, preached across the country, and recorded gospel music exclusively until 1962.

Rock and roll music is associated with the emergence of a teen subculture among baby boomers. Teenagers bought records and were exposed to rock and roll via radio, jukeboxes, and television shows like American Bandstand, which featured teenagers dancing to popular music. It also affected movies, fashion trends, and language. The combination of white and black music in rock and roll, at a time when racial tensions were high and the civil rights movement was in full swing, provoked strong reactions among the older generation, many of whom worried that rock and roll would contribute to social delinquency among teenagers. However, it actually encouraged racial cooperation and understanding to some extent: rock and roll was a combination of diverse styles of music made by different races, and it was enjoyed by both African-American and Caucasian teens.
- "The OSS and Ho Chi Minh". https://kansaspress.ku.edu/978-0-7006-1652-7.html. Retrieved 18 September 2020. - "Declaration of Independence of the Democratic Republic of Vietnam". http://historymatters.gmu.edu/d/5139/. Retrieved 18 September 2020. - "Operational Priority Communication from Strategic Services Officer Archimedes Patti, September 2, 1945" (in en). 25 January 2019. https://iowaculture.gov/history/education/educator-resources/primary-source-sets/cold-war-vietnam/operational-priority. Retrieved 18 September 2020. - "Episodes 1-4" (in en). 11 July 2017. https://www.archives.gov/exhibits/remembering-vietnam-online-exhibit-episodes-1-4. Retrieved 18 September 2020. - Calta, Marialisa (14 July 1996). "Gimme Shelter". The New York Times. https://www.nytimes.com/1996/07/14/travel/gimme-shelter.html. Retrieved 19 September 2020. - "Ike's Warning Of Military Expansion, 50 Years Later" (in en). NPR.org. https://www.npr.org/2011/01/17/132942244/ikes-warning-of-military-expansion-50-years-later. Retrieved 19 September 2020. - "Who "invented" the TV dinner?". https://www.loc.gov/everyday-mysteries/item/who-invented-the-tv-dinner/. Retrieved 18 September 2020. - "plaza.ufl.edu". http://plaza.ufl.edu/theiliad/pizza.html. Retrieved 18 September 2020. - Baker, Sarah. "When the Moon Hits Your Eye… Natural Selections". https://selections.rockefeller.edu/when-the-moon-hits-your-eye/. Retrieved 18 September 2020.
In Swift, each variable or constant holds a value that represents some data type. There are lots of data types in Swift. As a beginner, it is important to learn these 5 basic data types first:

- Int to store whole numbers.
- String to store words.
- Double and Float to store decimal numbers.
- Bool to store truth values (true or false).

Here are some examples of variables and constants that represent different data types:

var x: Int = 10
let pi: Double = 3.141592
var name: String = "Alice"
var isCold: Bool = false

In this article, we are going to take a deeper dive into the basic data types in Swift. You start by learning what a data type is in the first place and how data types are also present in the real world. Then you are going to see how data types appear in the Swift programming language. More importantly, you are going to learn how Swift knows the data type of a variable and how you can specify it yourself. If you have not read about variables and constants, please read this article before moving on.

Disclaimer: To learn Swift, you have to repeat everything you see in this guide and in the following guides. If you only read the guides, you are not going to learn anything!

Data Types in Swift

There is a lot of data in the world. Each piece of data represents some data type.

- A decimal number describes a fractional number, for example 3.141.
- A whole number describes a full number without decimal places, such as 100.
- A word is a group of letters, such as Hello.
- A truth value can be either true or false.

These are just some examples of data types in the real world. As you may imagine, a computer also needs to know the type of data it is dealing with. Thus, it is not a surprise that data types are present in the Swift programming language. But why are data types important? Based on the data type, values are stored differently on your device. In Swift programming, data types have more formal names than real-life data types like "word" or "whole number". Here are the 5 most important data types in Swift:

- A "decimal number" is called a Double or a Float.
- A "whole number" is called an Int (short for integer).
- A "word" is called a String, which is a group of characters.
- The "truth" is called a Bool (short for Boolean).

Notice that these are not the only data types in Swift. However, these are the most important ones you need to learn first as a beginner. For demonstration, let's create variables and constants that represent these data types. Here is an example of a Double:

var x = 2.32

Here is a Float, which is similar to a Double but less precise (note the explicit annotation; without it, Swift would infer Double):

var y: Float = 1.32

Here is an integer, or Int:

let number = 10

Here is a String that represents a word:

let name = "Alice"

Last but not least, here is a Bool that represents a truth value:

let isCold = false

But as you can see, the data type is not explicitly mentioned in most of these examples. In the next chapter, you are going to learn why that is and how you can specify the type yourself.

How to Specify Data Type in Swift

In Swift, every variable or constant you ever create is going to have a data type. In the previous tutorial, you learned how to create variables and constants. Let's bring back an example where you create a variable x and set its value to 10:

var x = 10

The data type of variable x is an Int because it represents a whole number. But as you can see, the type is not explicitly stated anywhere. This is because Swift is able to figure out the data type automatically. However, you can (and in some situations should) explicitly specify the data type of the variable in question.
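By the way, if you ever want to check which type Swift inferred, here is a minimal sketch (the variable names are just for illustration) that uses Swift's built-in type(of:) function:

var wholeNumber = 10        // Swift infers Int
var decimalNumber = 2.32    // Swift infers Double, not Float
var greeting = "Hello"      // Swift infers String

print(type(of: wholeNumber))   // prints: Int
print(type(of: decimalNumber)) // prints: Double
print(type(of: greeting))      // prints: String

Running this in a Playground is a quick way to confirm what Swift decided behind the scenes.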
To specify the data type of a variable or constant in Swift, use a colon after the variable name, followed by the data type. For example, let's specify the data type of the previously mentioned variable x to be an integer (Int):

var x: Int = 10

This line of code is exactly the same as the one where the data type was not explicitly specified. But why would you do this if Swift knows the data type automatically? Swift is a type-safe language. In other words, Swift makes it easy for you to be aware of the data types being used. So even though specifying the data type is not mandatory, it is still recommended. Also, in some situations, you are forced to specify the data type of the variable in question; you are going to see an example later in this course. Specifying the data type also improves code readability. As you gain more experience and write more complex programs, being able to instantly read the data type saves time. Next, you are going to learn why changing the data type of a variable or constant is not possible in Swift.

Can You Change the Data Type of a Variable?

In Swift, you cannot change the data type of a variable. For example, let's create an integer and try to assign a string to it:

var x: Int = 10
x = "test"

This causes an error:

Cannot assign value of type 'String' to type 'Int'

You cannot change the data type because different data types are stored differently in memory. If you declare a variable to store integers, it can only store integers. You can change its integer value, but you cannot change it to a string, for example. This may sound a bit limiting to you, but it has a great advantage. When a variable is locked to storing only one type of data, you know what to expect from it. If the data could be anything, you would have no idea what type of data the variable holds. This, in turn, would make the code more susceptible to errors and harder to read. At this point, you understand the significance of data types in Swift. Furthermore, you understand how to explicitly specify a data type for a variable (or constant). Next, let's take a look at the most common data types in Swift.

Common Data Types in Swift

You already saw some examples of how to create variables and constants that represent different data types. In this chapter, you learn more about the 5 most common data types in Swift: String, Int, Double, Float, and Bool.

String

In Swift, String is one of the most commonly used data types. A String represents textual data. The name "string" refers to a string of characters, which is essentially the definition of text. Here are some examples of strings in Swift:

var name = "Alice"
let DOB = "1993-08-12"
var favoriteColor: String = "Yellow"

Feel free to copy-paste these strings into your Xcode Playground and change their values and names. In this example:

- The first line creates a variable that represents the name of a person. Because the type is not specified explicitly, Swift automatically determines it is a String.
- The second line creates a constant that represents a string of someone's birthday. Here Swift also automatically determines the data type to be a String.
- The third line creates a string variable that represents someone's favorite color. Here the type is explicitly set to be a String.
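As mentioned earlier, some situations force you to specify the data type. One such situation is declaring a variable without an initial value. Here is a minimal sketch (the names are just for illustration):

var favoriteFood: String   // no initial value yet, so the type annotation is required
favoriteFood = "Pizza"     // the value can be assigned later
print(favoriteFood)        // prints: Pizza

Without the annotation, Swift would have no value to infer the type from, and the first line would not compile.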
Int

Next, let's talk about integers. In Swift, a whole number is called an integer, and the data type that represents integers is called Int, which is shorthand for the word integer. Here is an example of creating a bunch of integers in your program:

let n = 10
let myAge: Int = 26
var yourAge: Int = 30

Feel free to change the names and values of these variables and constants in your Playground file.

Float and Double

In Swift, decimal numbers are represented by two possible data types: Float and Double. The difference between these two types is that Double is more precise than Float. Thus, you can stick with Double when dealing with decimal numbers in Swift. Here are some examples:

let pi: Double = 3.141592
var distance = 329.731
var volume = 100.0

If you do not specify the data type for a decimal number explicitly, Swift automatically makes it a Double.

Bool

Next, let's talk about representing truth values in Swift. In Swift, a truth value is called a boolean value and is represented with a data type called Bool, which is shorthand for Boolean. A boolean value can be either true or false. Boolean values are an important part of programming because they are used to implement logic in your program. For instance, here are a bunch of boolean values in Swift:

var isRunning = false
var isCold = true
let canTrespass: Bool = false

This completes the guide on the basic data types in Swift. In this chapter, you learned what a data type is and what the most common data types in Swift are. To recap, every piece of data represents some data type. In real life, you encounter data types such as text, numbers, decimals, and so on. These same data types are present in the Swift programming language. There are lots of data types in Swift, but the most common ones are:

- String. Used to represent text.
- Int. Used to represent whole numbers.
- Double and Float. Used to represent decimal numbers.
- Bool. Used to represent truth values, that is, true or false.

You are going to use these data types a lot! Next up, you are going to learn operators in Swift. These operators can be used to perform actions on different types of data.

Next Chapter: Operators in Swift
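Before you move on to the operators chapter, here is one last minimal sketch (the values and names are just for illustration) showing how a Bool is used to implement logic:

let temperature = 8.5              // inferred as Double
let isCold = temperature < 15.0    // the comparison produces a Bool
if isCold {
    print("Wear a jacket!")        // runs because isCold is true
} else {
    print("T-shirt weather.")
}

Try changing the value of temperature in your Playground and watch the printed message flip.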
Polynomial Multiplication Worksheets

Polynomials are algebraic expressions that consist of several terms. Each term often contains different powers of the same variable(s). The name often haunts students, but the problems associated with them, especially operations, are no more difficult than working with larger numbers in operations. It all begins and ends with the ability of the student to stay organized through the process and to spot like terms. This particular operation is often used in the construction industry to determine the desired dimensions of a structure that will be fabricated. It is also essential when constructing and engineering any type of curved structure. These worksheets and lessons help students learn how to multiply polynomials. Aligned Standard: HSA-APR.A.1

- Polynomial Product Step-by-step Lesson - We work through your first polynomial product and help you solve them as a series.
- Guided Lesson - These are three problems on polynomials that are very common to see. You might even recognize the common pattern that is used for these.
- Guided Lesson Explanation - Work with the concept of FOIL (First-Outside-Inside-Last) for solving these.
- Practice Worksheet - I threw a couple in there with three terms to give you a bit of a challenge.
- Matching Worksheet - Match the polynomial products to their final outcome.
- Multiplying Polynomials Five Pack - A rapid-fire splash of problems for you to practice with. These problems are the standard level you will see.
- Answer Keys - These are for all the unlocked materials above.

Once again, get the kids in the habit of treating polynomials like a three-or-more-part equation.

- Homework 1 - In algebra, when we use the distributive property, we are expanding or distributing.
- Homework 2 - Now, multiply the third bracket with the product of the first and the second bracket.
- Homework 3 - We will multiply both of the parentheses. Remind students that spacing these problems is critical when solving them.
- Practice 1 - Find the product of the polynomials.
- Practice 2 - (5x + 2)(5x - 5)
- Practice 3 - What is the final value left behind?

Math Skill Quizzes

I threw a couple of curve balls in here to keep kids on their toes.

- Quiz 1 - Find the products.
- Quiz 2 - We add exponents in this quiz.
- Quiz 3 - The zero value always throws them off.

How to Multiply Polynomials

2x² + 3y + 4z

We have all seen an expression or a mathematical statement like the one above. Such combinations of variables (letters like x, y, z), constants (numbers like 2, 3, 4) and exponents (like the 2 in x²) are known as polynomials. Polynomials contain operators like multiplication, addition, subtraction, and positive exponents, but they don't have division operators or negative exponents. Here, we are going to focus on the multiplication operator with polynomials. There are a couple of things you need to keep in mind when multiplying polynomials with each other:

Step 1 - Multiply each term in one polynomial by each term in the other polynomial.
Step 2 - Combine those terms and simplify if needed.

Let's begin with the easiest of the bunch and work our way up the spectrum.

Monomial With Monomial (1 Term Times 1 Term) - To multiply a monomial with a monomial, we first multiply the coefficients (multipliers), then the variables, and find the result. For example, to multiply 3a by 4ab:

(3 · 4)(a · a)(b) : Place like terms together.
12a²b : Note that when multiplying variables, we add their exponents.
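To make the exponent rule concrete, here is one extra worked line (an illustrative example of our own, not one of the worksheet problems), written in LaTeX notation:

$$3a^2 \cdot 4a^3 = (3 \cdot 4)(a^2 \cdot a^3) = 12a^{2+3} = 12a^5$$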
Monomial With Binomial (1 Term Times 2 Terms) - Multiply the single term by each of the two terms of the binomial. For example, multiplying 2b by (a² - a):

2b(a² - a)
(a² · 2b) - (a · 2b)
2a²b - 2ab

Binomial With Binomial (2 Terms Times 2 Terms) - This one is a bit longer than the first two types of polynomial multiplication. In this multiplication, each of the two terms in one binomial is multiplied by each of the two terms in the other binomial. To elaborate with an example:

(a + b)(c + d)

Taking each term of the first binomial in turn:

a·c + a·d + b·c + b·d = ac + ad + bc + bd

You can multiply the binomials in any order; just make sure that each term of the first binomial is multiplied by each term of the second.
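As a concrete instance of the FOIL (First-Outside-Inside-Last) pattern referenced in the lessons above (the numbers are illustrative):

$$(x + 2)(x - 5) = x^2 - 5x + 2x - 10 = x^2 - 3x - 10$$

Notice that the two middle terms are the like terms that get combined in the final step.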
We were all taught as children that there are 5 senses: sight, taste, sound, smell, and touch. The first four senses rely on clear, distinct organs, such as the eyes, taste buds, ears, and nose, but just how does the body sense touch exactly? Touch is experienced over the entire body, both inside and outside. There is not one distinct organ that is responsible for sensing touch. Rather, there are tiny receptors, or nerve endings, around the entire body which sense touch where it occurs and send signals to the brain with information regarding the type of touch that occurred. Just as a taste bud on the tongue detects flavor, mechanoreceptors are receptors within the skin and on other organs that detect sensations of touch. They're known as mechanoreceptors because they're designed to detect mechanical sensations, or differences in pressure. A person understands that they have experienced a sensation once the organ responsible for detecting that specific sense sends a message to the brain, which is the primary organ that processes and arranges all of the information. Messages are sent from all areas of the body to the brain through wire-like cells referred to as neurons. There are thousands of small neurons that branch out to all areas of the human body, and on the endings of many of these neurons are mechanoreceptors. To demonstrate what happens when you touch an object, we will use an example. Envision a mosquito landing on your arm. The pressure of this insect, light as it is, stimulates mechanoreceptors in that particular area of the arm. Those mechanoreceptors send a message along the neuron they are connected to. The neuron connects all the way to the brain, which receives the message that something is touching your body at the exact location of the specific mechanoreceptor that sent the message. The brain will then act on this information. Maybe it will tell the eyes to look at the region of the arm that detected the touch. And when the eyes tell the brain that there's a mosquito on the arm, the brain may tell the hand to quickly flick it away. That's how mechanoreceptors work. The purpose of the article below is to demonstrate as well as discuss in detail the functional organization and molecular determinants of mechanoreceptors.

Cutaneous mechanoreceptors are localized in the various layers of the skin where they detect a wide range of mechanical stimuli, including light brush, stretch, vibration and noxious pressure. This variety of stimuli is matched by a diverse array of specialized mechanoreceptors that respond to cutaneous deformation in a specific way and relay these stimuli to higher brain structures. Studies of mechanoreceptors and of genetically tractable sensory nerve endings are beginning to uncover the mechanisms of touch sensation. Work in this field has provided researchers with a more thorough understanding of the circuit organization underlying the perception of touch. Novel ion channels have emerged as candidates for transduction molecules, and the properties of mechanically gated currents have improved our understanding of the mechanisms of adaptation to tactile stimuli. This review highlights the progress made in characterizing the functional properties of mechanoreceptors in hairy and glabrous skin and the ion channels that detect mechanical inputs and shape mechanoreceptor adaptation.

Keywords: mechanoreceptor, mechanosensitive channel, pain, skin, somatosensory system, touch

Touch is the detection of mechanical stimuli impacting the skin, including innocuous and noxious mechanical stimuli.
It is an essential sense for the survival and development of mammals and humans. Contact of solid objects and fluids with the skin gives necessary information to the central nervous system that allows exploration and recognition of the environment and initiates locomotion or planned hand movement. Touch is also very important for learning, social contact and sexuality. The sense of touch is the least vulnerable sense, although it can be distorted (hyperesthesia, hypoesthesia) in many pathological conditions.1-3 Touch responses involve a very precise coding of mechanical information. Cutaneous mechanoreceptors are localized in the various layers of the skin where they detect a wide range of mechanical stimuli, including light brush, stretch, vibration, deflection of hair and noxious pressure. This variety of stimuli is matched by a diverse array of specialized mechanoreceptors that respond to cutaneous deformation in a specific way and relay these stimuli to higher brain structures. Somatosensory neurones of the skin fall into two groups: low-threshold mechanoreceptors (LTMRs) that react to benign pressure and high-threshold mechanoreceptors (HTMRs) that respond to harmful mechanical stimulation. LTMR and HTMR cell bodies reside within dorsal root ganglia (DRG) and cranial sensory ganglia (trigeminal ganglia). Nerve fibers associated with LTMRs and HTMRs are classified as Aβ-, Aδ- or C-fibers based on their action potential conduction velocities. C fibers are unmyelinated and have the slowest conduction velocities (~2 m/s), whereas Aδ and Aβ fibers are lightly and heavily myelinated, exhibiting intermediate (~12 m/s) and rapid (~20 m/s) conduction velocities, respectively. LTMRs are also classified as slowly or rapidly adapting (SA- and RA-LTMRs) according to their rates of adaptation to a sustained mechanical stimulus. They are further distinguished by the cutaneous end organs they innervate and their preferred stimuli. The ability of mechanoreceptors to detect mechanical cues relies on the presence of mechanotransducer ion channels that rapidly transform mechanical forces into electrical signals and depolarise the receptive field. This local depolarisation, called the receptor potential, can generate action potentials that propagate toward the central nervous system. However, the properties of the molecules that mediate mechanotransduction and adaptation to mechanical forces remain unclear. In this review, we provide an overview of mammalian mechanoreceptor properties in innocuous and noxious touch in the hairy and glabrous skin. We also consider recent knowledge about the properties of mechanically-gated currents in an attempt to explain the mechanism of mechanoreceptor adaptation. Finally, we review recent progress made in identifying ion channels and associated proteins responsible for the generation of mechano-gated currents.

Hair follicles represent hair shaft-producing mini-organs that detect light touch. Fibers associated with hair follicles respond to hair motion and its direction by firing trains of action potentials at the onset and removal of the stimulus. They are rapidly adapting receptors.

Cat and rabbit. In the cat and rabbit coat, hair follicles can be divided into three types: the Down hair, the Guard hair and the Tylotrich. The Down hairs (underhair, wool, vellus)4 are the most numerous, the shortest and the finest hairs of the coat. They are wavy, colorless and emerge in groups of two to four hairs from a common orifice in the skin.
The Guard hairs (monotrichs, overhairs, tophair)4 are slightly curved, either pigmented or unpigmented, and emerge singly from the mouths of their follicles. The tylotrichs are the least numerous, the longest and the thickest hairs.5,6 They are pigmented or unpigmented, sometimes both, and emerge singly from a follicle which is surrounded by a loop of capillary blood vessels. The sensory fiber supply to a hair follicle is located below the sebaceous gland and is attributed to Aβ- or Aδ-LTMR fibers.7 In close apposition to the down hair shaft, just below the level of the sebaceous gland, is the ring of lanceolate pilo-Ruffini endings. These sensory nerve endings are positioned in a spiral course around the hair shaft within the connective tissue forming the hair follicle. Within the hair follicle, there are also free nerve endings, some of them forming mechanoreceptors. Frequently, touch corpuscles (see glabrous skin) surround the neck region of the tylotrich follicle. Properties of myelinated nerve endings in cat and rabbit hairy skin were explored intensively in the 1930–1970 period (review in Hamann, 1995).8 Remarkably, Brown and Iggo, studying 772 units with myelinated afferent nerve fibers in the saphenous nerves of cat and rabbit, classified responses into three receptor types corresponding to the movements of Down hairs (type D receptors), Guard hairs (type G receptors) and Tylotrich hairs (type T receptors).9 All of these afferent nerve fiber responses have been grouped together as the Rapidly Adapting receptor of type I (RA I), by opposition to the Pacinian receptor named RA II. RA I mechanoreceptors detect the velocity of a mechanical stimulus and have sharp borders. They do not detect thermal variations. Burgess et al. also described a rapidly adapting field receptor that responds optimally to stroking of the skin or movement of several hairs, which was attributed to stimulation of pilo-Ruffini endings. None of the hair follicle responses was attributed to C fiber activity.10

Mice. In the dorsal hairy skin of mice, three major types of hair follicles have been described: zigzag (around 72%), awl/auchene (around 23%) and guard or tylotrich (around 5%).11-14 Zigzag and awl/auchene hair follicles produce the thinner and shorter hair shafts and are associated with one sebaceous gland. Guard or tylotrich hairs are the longest of the hair follicle types. They are characterized by a large hair bulb associated with two sebaceous glands. Guard and awl/auchene hairs are arranged in an iterative, regularly spaced pattern, whereas zigzag hairs densely populate the skin areas surrounding the two larger hair follicle types [Fig. 1 (A1, A2 and A3)]. Recently, Ginty and collaborators used a combination of molecular-genetic labeling and somatotopic retrograde tracing approaches to visualize the organization of peripheral and central axonal endings of the LTMRs in mice.15 Their findings support a model in which individual features of a complex tactile stimulus are extracted by the three hair follicle types and conveyed via the activities of unique combinations of Aβ-, Aδ- and C-fibers to the dorsal horn. They showed that genetic labeling of tyrosine hydroxylase-positive (TH+) DRG neurones characterizes a population of nonpeptidergic, small-diameter sensory neurones and allows for visualization of C-LTMR peripheral endings in the skin.
Surprisingly, the axonal branches of individual C-LTMRs were found to arborise and form longitudinal lanceolate endings that are intimately associated with zigzag (80% of endings) and awl/auchene (20% of endings), but not tylotrich, hair follicles [Fig. 1 (A4)]. Longitudinal lanceolate endings had long been thought to belong exclusively to Aβ-LTMRs, and therefore it was unexpected that the endings of C-LTMRs would form longitudinal lanceolate endings.15 These C-LTMRs have an intermediate adaptation in comparison with the slowly and rapidly adapting myelinated mechanoreceptors [Fig. 2 (C1)]. A second major population identified concerns the Aδ-LTMR endings in awl/auchene and zigzag follicles, to be compared with the Down hair follicle extensively studied in cat and rabbit. Ginty and collaborators showed that TrkB is expressed at high levels in a subset of medium-diametre DRG neurones. Intracellular recordings of labeled fibers using the ex vivo skin-nerve preparation revealed that they exhibit the physiological properties of the fibers previously studied in cat and rabbit: exquisite mechanical sensitivity (von Frey threshold < 0.07 mN), rapidly adapting responses to suprathreshold stimuli, intermediate conduction velocities (5.8 ± 0.9 m/s) and narrow uninflected soma spikes.15 These Aδ-LTMRs form longitudinal lanceolate endings associated with virtually every zigzag and awl/auchene hair follicle of the trunk [Fig. 1 (A5)]. Finally, they showed that the peripheral endings of rapidly adapting Aβ-LTMRs form longitudinal lanceolate endings associated with guard (or tylotrich) and awl/auchene hair follicles [Fig. 1 (A6)].15 In addition, guard hairs are also associated with a Merkel cell complex forming a touch dome connected to Aβ slowly adapting-LTMRs [Fig. 1 (A7)]. In summary, virtually all zigzag hair follicles are innervated by both C-LTMR and Aδ-LTMR lanceolate endings; awl/auchene hairs are triply innervated by Aβ rapidly adapting-LTMR, Aδ-LTMR and C-LTMR lanceolate endings; guard hair follicles are innervated by Aβ rapidly adapting-LTMR longitudinal lanceolate endings and interact with the Aβ slowly adapting-LTMRs of touch dome endings. Thus, each mouse hair follicle receives a unique and invariant combination of LTMR endings corresponding to neurophysiologically distinct mechanosensory end organs. Considering the iterative arrangement of these three hair types, Ginty and collaborators propose that hairy skin consists of iterative repeats of a peripheral unit containing (1) one or two centrally located guard hairs, (2) ~20 surrounding awl/auchene hairs and (3) ~80 interspersed zigzag hairs [Fig. 2 (C1)].

Spinal cord projection. The central projections of Aβ rapidly adapting-LTMRs, Aδ-LTMRs and C-LTMRs terminate in distinct, but partially overlapping, laminae (II, III, IV) of the spinal cord dorsal horn. In addition, the central terminals of LTMRs that innervate the same or adjacent hair follicles within a peripheral LTMR unit are aligned to form a narrow LTMR column in the spinal cord dorsal horn [Fig. 1 (B1)]. Thus, it appears likely that a wedge, or column, of somatotopically organized primary sensory afferent endings in the dorsal horn represents the alignment of the central projections of Aβ-, Aδ- and C-LTMRs that innervate the same peripheral unit and detect mechanical stimuli acting upon the same small group of hair follicles.
Based on the numbers of guard, awl/auchene and zigzag hairs of the trunk and limbs and the numbers of each LTMR subtype, Ginty and collaborators estimate that the mouse dorsal horn contains 2,000–4,000 LTMR columns, which corresponds to the approximate number of peripheral LTMR units.15 Furthermore, axones of LTMR subtypes are closely associated with one another, having entwined projections and interdigitated lanceolate endings that innervate the same hair follicle. In addition, because the three hair follicle types exhibit different shapes, sizes and cellular compositions, they are likely to have distinct deflectional or vibrational tuning properties. These findings are consistent with classic neurophysiological measurements in the cat and rabbit indicating that Aβ RA-LTMRs and Aδ-LTMRs can be differentially activated by deflection of distinct hair follicle types.16,17 In conclusion, touch in hairy skin is the combination of: (1) the relative numbers, unique spatial distributions and distinct morphological and deflectional properties of the three types of hair follicles; (2) the unique combinations of LTMR subtype endings associated with each of the three hair follicle types; and (3) the distinct sensitivities, conduction velocities, spike train patterns and adaptation properties of the four main classes of hair-follicle-associated LTMRs that enable the hairy skin mechanosensory system to extract and convey to the CNS the complex combinations of qualities that define a touch.

Generally, C-fiber free endings in the skin are HTMRs, but a subpopulation of C-fibers does not respond to noxious touch. This subset of tactile C-fiber (CT) afferents represents a distinct type of unmyelinated, low-threshold mechanoreceptive unit existing in the hairy but not glabrous skin of humans and mammals [Fig. 1 (A8)].18,19 CTs are generally associated with the perception of pleasant tactile stimulation in body contact.20,21 CT afferents respond to indentation forces in the range 0.3–2.5 mN and are thus as sensitive to skin deformation as many of the Aβ afferents.19 The adaptation characteristics of CT afferents are intermediate in comparison with the slowly and rapidly adapting myelinated mechanoreceptors. The receptive fields of human CT afferents are roughly round or oval in shape. The field consists of one to nine small responsive spots distributed over an area of up to 35 mm2.22 The mouse homolog receptors are organized in a pattern of discontinuous patches covering about 50–60% of the area of the hairy skin [Fig. 2 (C2)].23 Evidence from patients lacking myelinated tactile afferents indicates that signaling in CT fibers activates the insular cortex. Since this system is poor at encoding discriminative aspects of touch, but well-suited to encoding slow, gentle touch, CT fibers in hairy skin may be part of a system for processing pleasant and socially relevant aspects of touch.24 CT fiber activation may also have a role in pain inhibition, and it has recently been proposed that inflammation or trauma may change the sensation conveyed by C-fiber LTMRs from pleasant touch to pain.25,26 Which pathway CT afferents travel is not yet known [Fig. 1 (B2)], but low-threshold tactile inputs to spinothalamic projection cells have been documented,27 lending credence to reports of subtle, contralateral deficits of touch detection in human patients following destruction of these pathways after chordotomy procedures.28

Merkel cell-neurite complexes and touch dome.
Merkel (1875) was the first to give a histological description of clusters of epidermal cells with large lobulated nuclei making contact with presumed afferent nerve fibers. He assumed that they subserved the sense of touch, calling them Tastzellen (tactile cells). In humans, Merkel cell–neurite complexes are enriched in touch-sensitive areas of the skin; they are found in the basal layer of the epidermis in fingers, lips and genitals. They also exist in hairy skin at lower density. The Merkel cell–neurite complex consists of a Merkel cell in close apposition to an enlarged nerve terminal from a single myelinated Aβ fiber [Fig. 1 (C1)] (review in Halata and collaborators).29 At the epidermal side, the Merkel cell exhibits finger-like processes extending between neighboring keratinocytes [Fig. 1 (C2)]. Merkel cells are keratinocyte-derived epidermal cells.30,31 The term touch dome was introduced to name the large concentration of Merkel cell complexes in the hairy skin of the cat forepaw. A touch dome can have up to 150 Merkel cells innervated by a single Aβ-fiber, and in humans, besides Aβ-fibers, Aδ- and C-fibers were also regularly present.32-34 Stimulation of Merkel cell–neurite complexes results in slowly-adapting type I (SA I) responses, which originate from punctate receptive fields with sharp borders. There is no spontaneous discharge. These complexes respond to the indentation depth of the skin and have the highest spatial resolution (0.5 mm) of the cutaneous mechanoreceptors. They transmit a precise spatial image of tactile stimuli and are proposed to be responsible for shape and texture discrimination [Fig. 2 (B1)]. Mice devoid of Merkel cells cannot detect textured surfaces with their feet, while they can do so using their whiskers.35 Whether the Merkel cell, the sensory neuron or both are sites of mechanotransduction is still a matter of debate. In rats, phototoxic destruction of Merkel cells abolishes the SA I response.36 In mice with genetically suppressed Merkel cells, the SA I response recorded in the ex vivo skin/nerve preparation completely disappeared, demonstrating that Merkel cells are required for the proper encoding of Merkel receptor responses.37 However, mechanical stimulation of isolated Merkel cells in culture by motor-driven pressure does not generate mechanically-gated currents.38,39 Keratinocytes may play an important role in the normal functioning of the Merkel cell–neurite complex. The Merkel cell finger-like processes can move with skin deformation and epidermal cell movement, and this may be the first step of mechanical transduction. Clearly, the conditions required to study the mechano-sensitivity of Merkel cells have yet to be established.

Ruffini endings. Ruffini endings are thin, cigar-shaped, encapsulated sensory endings connected to Aβ nerve endings. Ruffini endings are small connective tissue cylinders arranged along dermal collagen strands, supplied by one to three myelinated nerve fibers of 4–6 µm diametre. Up to three cylinders of different orientation in the dermis may merge to form one receptor [Fig. 1 (C3)]. Structurally, Ruffini endings are similar to Golgi tendon organs. They are broadly expressed in the dermis and have been identified as the slowly adapting type II (SA II) cutaneous mechanoreceptors. Against a background of spontaneous nervous activity, a slowly-adapting regular discharge is elicited by perpendicular, low-force, maintained mechanical stimulation or, more effectively, by dermal stretch.
The SA II response originates from large receptive fields with indistinct borders. Ruffini receptors contribute to the perception of the direction of object motion through the pattern of skin stretch [Fig. 2 (A2)]. In mice, SA I and SA II responses can be separated electrophysiologically in an ex vivo nerve-skin preparation.40 Nandasena and collaborators reported the immunolocalization of aquaporin 1 (AQP1) in the periodontal Ruffini endings of rat incisors, suggesting that AQP1 is involved in maintaining the osmotic balance necessary for mechanotransduction.41 The periodontal Ruffini endings also express the putative mechanosensitive ion channel ASIC3.42
Meissner corpuscles.
Meissner corpuscles are localized in the dermal papillae of the glabrous skin, mainly in the palms and soles but also in the lips, tongue, face, nipples and genitals. Anatomically, they consist of an encapsulated nerve ending, the capsule being made of flattened supportive cells arranged as horizontal lamellae embedded in connective tissue. A single Aβ afferent fiber is connected to each corpuscle [Fig. 1 (C4)]. Any physical deformation of the corpuscle triggers a volley of action potentials that quickly ceases, i.e., they are rapidly adapting receptors. When the stimulus is removed, the corpuscle regains its shape and, while doing so, produces another volley of action potentials. Due to their superficial location in the dermis, these corpuscles selectively respond to skin motion, tactile detection of slip and vibrations (20–40 Hz). They are sensitive to dynamic skin deformation, for example slip between the skin and an object that is being handled [Fig. 2 (A1)].
Pacinian corpuscles.
Pacinian corpuscles are the deepest mechanoreceptors of the skin and the encapsulated cutaneous mechanoreceptors most sensitive to skin motion. These large ovoid corpuscles (1 mm in length), made of concentric lamellae of fibrous connective tissue and fibroblasts lined by flat modified Schwann cells, are found in the deep dermis.43 In the center of the corpuscle, in a fluid-filled cavity called the inner bulb, a single Aβ afferent fiber terminates as an unmyelinated ending [Fig. 1 (C5)]. Pacinian corpuscles have a large receptive field on the skin's surface with a particularly sensitive center. The development and function of several rapidly adapting mechanoreceptor types are disrupted in c-Maf mutant mice; in particular, Pacinian corpuscles are severely atrophied.44 Pacinian corpuscles display very rapid adaptation in response to indentation of the skin, a rapidly-adapting type II (RA II) discharge that can follow high-frequency vibratory stimuli and allows the perception of distant events through transmitted vibrations.45 Pacinian corpuscle afferents respond to sustained indentation with transient activity at the onset and offset of the stimulus. They are also called acceleration detectors because they can detect changes in the strength of the stimulus and, if the rate of change of the stimulus is altered (as happens in vibrations), their response becomes proportional to this change. Pacinian corpuscles sense gross pressure changes and, above all, vibrations (150–300 Hz), which they can detect even centimeters away [Fig. 2 (A3)].
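To make this frequency selectivity concrete, here is a minimal Python sketch comparing the two rapidly adapting afferents. It assumes toy Gaussian tuning curves on a log-frequency axis, centered on the preferred bands quoted above (Meissner ~20–40 Hz, Pacinian ~150–300 Hz); the curve shapes and widths are illustrative assumptions, not measurements from the review.

```python
import numpy as np

# Toy tuning curves: Gaussian in log-frequency, centered on each afferent's
# quoted preferred band. Only the band edges come from the text; the Gaussian
# shape and width are assumptions made for illustration.
BANDS = {"Meissner (RA I)": (20.0, 40.0), "Pacinian (RA II)": (150.0, 300.0)}

def relative_sensitivity(freq_hz, band):
    lo, hi = band
    center = np.sqrt(lo * hi)        # geometric center of the quoted band
    sigma = np.log10(hi / lo)        # assumed tuning width, in decades
    return float(np.exp(-0.5 * (np.log10(freq_hz / center) / sigma) ** 2))

for freq in (5, 30, 100, 250, 600):
    scores = {name: relative_sensitivity(freq, band) for name, band in BANDS.items()}
    best = max(scores, key=scores.get)
    print(f"{freq:4d} Hz vibration -> best driven afferent: {best} ({scores[best]:.2f})")
```

Even this crude separation reproduces the division of labor described above: slow flutter falls mainly to Meissner corpuscles, while high-frequency vibration transmitted through tissue falls to Pacinian corpuscles.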
A tonic response was observed in decapsulated Pacinian corpuscles.46 In addition, intact Pacinian corpuscles respond with sustained activity during constant indentation when GABA-mediated signaling between the lamellate glia and the nerve ending is blocked, without any change in mechanical threshold or response frequency.47 Thus, the non-neuronal components of the Pacinian corpuscle may have dual roles, filtering the mechanical stimulus as well as modulating the response properties of the sensory neurone.
Spinal cord projections.
Projections of the Aβ-LTMRs in the spinal cord divide into two branches. The principal central branch ascends in the ipsilateral dorsal columns of the spinal cord to the cervical level [Fig. 1 (B3)]. Secondary branches terminate in lamina IV of the dorsal horn, where they can interfere with pain transmission, for example attenuating pain as part of gate control [Fig. 1 (B4)].48 At cervical levels, axons of the principal branch separate into two tracts: the midline tract comprises the gracile fascicle, conveying information from the lower half of the body (legs and trunk), and the outer tract comprises the cuneate fascicle, conveying information from the upper half of the body (arms and trunk) [Fig. 1 (B5)]. Primary tactile afferents make their first synapse with second-order neurones in the medulla, where the fibers of each tract synapse in a nucleus of the same name: gracile fascicle axons synapse in the gracile nucleus and cuneate axons in the cuneate nucleus [Fig. 1 (B6)]. The neurones receiving these synapses provide the secondary afferents and cross the midline immediately to form a tract on the contralateral side of the brainstem, the medial lemniscus, which ascends through the brainstem to the next relay station, the thalamus [Fig. 1 (B7)].
Molecular specification of LTMRs.
The molecular mechanisms controlling the early diversification of LTMRs have recently been partly elucidated. Bourane and collaborators have shown that the neuronal populations expressing the Ret tyrosine kinase receptor (Ret) and its co-receptor GFRα2 in the DRG of E11–13 embryonic mice selectively coexpress the transcription factor MafA.49,50 These authors demonstrated that the MafA/Ret/GFRα2 neurones are destined to become three specific types of LTMRs at birth: the SA1 neurones innervating Merkel-cell complexes, the rapidly adapting neurones innervating Meissner corpuscles and the rapidly adapting afferents (RA I) forming lanceolate endings around hair follicles. Ginty and collaborators also report that DRG neurones expressing Ret early are rapidly adapting mechanoreceptors from Meissner corpuscles, Pacinian corpuscles and lanceolate endings around hair follicles.51 They innervate discrete target zones within the gracile and cuneate nuclei, revealing a modality-specific pattern of mechanosensory neurone axonal projections within the brainstem.
Exploration of human skin mechanoreceptors.
The technique of "microneurography" described by Hagbarth and Vallbo in 1968 has been applied to study the discharge behavior of single human mechanosensitive endings supplying muscle, joint and skin (for review, see Macefield, 2005).52,53 The majority of human skin microneurography studies have characterized the physiology of tactile afferents in the glabrous skin of the hand.
Microelectrode recordings from the median and ulnar nerves in human subjects have revealed how touch sensations are generated by the four classes of LTMRs. Meissner afferents are particularly sensitive to light stroking across the skin, responding to local shear forces and incipient or overt slips within the receptive field. Pacinian afferents are exquisitely sensitive to brisk mechanical transients and respond vigorously to blowing over the receptive field; a Pacinian corpuscle located in a digit will usually respond to tapping on the table supporting the arm. Merkel afferents characteristically have a high dynamic sensitivity to indentation stimuli applied to a discrete area and often respond with an off-discharge during release. Although Ruffini afferents do respond to forces applied normally to the skin, a unique feature of SA II afferents is their capacity to respond also to lateral skin stretch. Finally, hair units in the forearm have large ovoid or irregular receptive fields composed of multiple sensitive spots that correspond to individual hairs (each afferent supplies ~20 hairs). Any mechanical stimulus on the skin must be transmitted through the keratinocytes that form the epidermis. These ubiquitous cells may perform signaling functions in addition to their supportive and protective roles. For example, keratinocytes secrete ATP, an important sensory signaling molecule, in response to mechanical and osmotic stimuli.54,55 The release of ATP induces an intracellular calcium increase through autocrine stimulation of purinergic receptors.55 Furthermore, there is evidence that hypotonicity activates the Rho-kinase signaling pathway and subsequent F-actin stress fiber formation, suggesting that mechanical deformation of keratinocytes may mechanically influence neighboring cells, such as Merkel cells for innocuous touch and C-fiber free endings for noxious touch [Fig. 1 (C6)].56,57 High-threshold mechanoreceptors (HTMRs) are epidermal C- and Aδ free nerve endings. They are not associated with specialized structures and are observed in both hairy skin [Fig. 1 (A9)] and glabrous skin [Fig. 1 (C7)]. However, the term free nerve ending has to be used prudently, since these endings are always in close apposition with keratinocytes, Langerhans cells or melanocytes. Ultrastructural analysis of nerve endings reveals the presence of rough endoplasmic reticulum, abundant mitochondria and dense-core vesicles. The adjacent membranes of epidermal cells are thickened, resembling postsynaptic membranes in nervous tissue. Note that the interactions between nerve endings and epidermal cells may be bidirectional, since epidermal cells may release mediators such as ATP, interleukins (IL-6, IL-10) and bradykinin, and conversely peptidergic nerve endings may release peptides such as CGRP or substance P that act on epidermal cells. HTMRs comprise mechano-nociceptors, excited only by noxious mechanical stimuli, and polymodal nociceptors, which also respond to noxious heat and exogenous chemicals [Fig. 2 (B2)].58 HTMR afferent fibers terminate on projection neurones in the dorsal horn of the spinal cord. Aδ-HTMRs contact second-order neurones predominantly in laminae I and V, whereas C-HTMRs terminate in lamina II [Fig. 1 (B8)]. Second-order nociceptive neurones project to the contralateral side of the spinal cord and ascend in the white matter, forming the anterolateral system. These neurones terminate mainly in the thalamus [Fig. 1 (B9 and B10)].
The mechanisms of slow or rapid adaptation of mechanoreceptors have not yet been elucidated. It is not clear to what extent mechanoreceptor adaptation is provided by the cellular environment of the sensory nerve ending, the intrinsic properties of the mechanically-gated channels, or the properties of the axonal voltage-gated ion channels in sensory neurones (Fig. 2). However, recent progress in the characterization of mechanically-gated currents has demonstrated that different classes of mechanosensitive channels exist in DRG neurones and may explain some aspects of mechanoreceptor adaptation. In vitro recordings in rodents have shown that the soma of DRG neurones is intrinsically mechanosensitive and expresses cationic mechano-gated currents.59-64 Gadolinium and ruthenium red fully block mechanosensitive currents, whereas external calcium and magnesium at physiological concentrations, as well as amiloride and benzamil, cause partial block.60,62,63 FM1-43 acts as a permeant blocker, and the injection of FM1-43 into the hind paw of mice decreases pain sensitivity in the Randall–Selitto test and increases the paw withdrawal threshold assessed with von Frey hairs.65 In response to sustained mechanical stimulation, mechanosensitive currents decline through channel closure. Based on the time constants of current decay, four distinct types of mechanosensitive currents have been distinguished: rapidly adapting currents (~3–6 ms), intermediately adapting currents (~15–30 ms), slowly adapting currents (~200–300 ms) and ultra-slowly adapting currents (~1,000 ms).64 All these currents are present, with variable incidence, in rat DRG neurones innervating the glabrous skin of the hindpaw.64 The mechanical sensitivity of mechanosensitive currents can be determined by applying a series of incremental mechanical stimuli, allowing for relatively detailed stimulus-current analysis.66 The stimulus–current relationship is typically sigmoidal, and the maximum amplitude of the current is determined by the number of channels that are simultaneously open.64,67 Interestingly, the rapidly adapting mechanosensitive current has been reported to display a lower mechanical threshold and half-activation midpoint than the ultra-slowly adapting mechanosensitive current.63,65 Sensory neurones with non-nociceptive phenotypes preferentially express rapidly adapting mechanosensitive currents with a lower mechanical threshold.60,61,63,64,68 Conversely, slowly and ultra-slowly adapting mechanosensitive currents are only occasionally reported in putative non-nociceptive cells.64,68 This prompted the suggestion that these currents might contribute to the different mechanical thresholds seen in LTMRs and HTMRs in vivo. Although these in vitro experiments should be interpreted with caution, support for the presence of low- and high-threshold mechanotransducers in the soma of DRG neurones was also provided by radial stretch stimulation of cultured mouse sensory neurones.69 This paradigm revealed two main populations of stretch-sensitive neurones, one that responds to low stimulus amplitudes and another that responds selectively to high stimulus amplitudes. These results have important, yet speculative, mechanistic implications: the mechanical threshold of sensory neurones might have little to do with the cellular organization of the mechanoreceptor but may instead lie in the properties of the mechanically-gated ion channels.
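The kinetic classification above lends itself to a small numerical sketch. The following Python snippet is illustrative only: the adaptation time constants are midpoints of the ranges quoted above, while the Boltzmann (sigmoidal) stimulus-current parameters (half-activation displacement, slope and maximal current) are invented placeholders, not values from the cited recordings.

```python
import numpy as np

# Midpoints of the adaptation time-constant ranges quoted for rat DRG neurones.
ADAPTATION_TAU_S = {
    "rapidly adapting": 0.0045,        # ~3-6 ms
    "intermediately adapting": 0.022,  # ~15-30 ms
    "slowly adapting": 0.25,           # ~200-300 ms
    "ultra-slowly adapting": 1.0,      # ~1000 ms
}

def peak_current_pa(displacement_um, half_um=5.0, slope_um=1.0, i_max_pa=-400.0):
    """Sigmoidal (Boltzmann) stimulus-current relation; all parameters are
    illustrative placeholders."""
    return i_max_pa / (1.0 + np.exp(-(displacement_um - half_um) / slope_um))

def current_trace_pa(displacement_um, tau_s, t_s):
    """Single-exponential current decay during a sustained step: a deliberate
    simplification of the combined adaptation/inactivation kinetics."""
    return peak_current_pa(displacement_um) * np.exp(-t_s / tau_s)

t = np.linspace(0.0, 0.5, 501)                  # 500 ms sustained stimulus
for label, tau in ADAPTATION_TAU_S.items():
    trace = current_trace_pa(8.0, tau, t)       # 8 um probe displacement
    fraction_left = trace[t >= 0.05][0] / trace[0]
    print(f"{label:>24}: peak {trace[0]:6.0f} pA, {fraction_left:6.1%} left at 50 ms")
```

As the next paragraphs explain, the real decay reflects two separable processes, adaptation and inactivation, so a single exponential per class is the simplest possible summary rather than a mechanistic model.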
The mechanisms that underlie desensitization of mechanosensitive cation currents in rat DRG neurones have recently been unraveled.64,67 Desensitization results from two concurrent mechanisms that affect channel properties: adaptation and inactivation. Adaptation was first reported in auditory hair cell studies. It can be described operationally as a simple translation of the transducer channel's activation curve along the mechanical stimulus axis.70-72 Adaptation allows sensory receptors to maintain their sensitivity to new stimuli in the presence of an existing stimulus. However, a substantial fraction of mechanosensitive currents in DRG neurones cannot be reactivated following conditioning mechanical stimulation, indicating inactivation of some transducer channels.64,67 Therefore, inactivation and adaptation act in tandem to regulate mechanosensitive currents. These two mechanisms are common to all mechanosensitive currents identified in rat DRG neurones, suggesting that related physicochemical elements determine the kinetics of these channels.64 In conclusion, determining the properties of endogenous mechanosensitive currents in vitro is crucial in the quest to identify transduction mechanisms at the molecular level. The variability observed in the mechanical thresholds and adaptation kinetics of the different mechanically-gated currents in DRG neurones suggests that the intrinsic properties of ion channels may explain, at least in part, the mechanical thresholds and adaptation kinetics of the mechanoreceptors described from the 1960s to the 1980s using ex vivo preparations. Mechanosensitive ion currents in somatosensory neurones are thus well characterized; by contrast, little is known about the identity of the molecules that mediate mechanotransduction in mammals. Genetic screens in Drosophila and C. elegans have identified candidate mechanotransduction molecules, including members of the TRP and degenerin/epithelial Na+ channel (Deg/ENaC) families.73 Recent attempts to elucidate the molecular basis of mechanotransduction in mammals have largely focused on homologs of these candidates. Additionally, many of these candidates are present in cutaneous mechanoreceptors and somatosensory neurones (Fig. 2). ASICs belong to a proton-gated subgroup of the degenerin–epithelial Na+ channel family.74 Three members of the ASIC family (ASIC1, ASIC2 and ASIC3) are expressed in mechanoreceptors and nociceptors. The role of ASIC channels has been investigated in behavioral studies using mice with targeted deletions of ASIC channel genes. Deletion of ASIC1 does not alter the function of cutaneous mechanoreceptors but increases the mechanical sensitivity of afferents innervating the gut.75 ASIC2 knockout mice exhibit a decreased sensitivity of rapidly adapting cutaneous LTMRs.76 However, subsequent studies reported a lack of effect of knocking out ASIC2 on both visceral mechano-nociception and cutaneous mechanosensation.77 ASIC3 disruption decreases the mechanosensitivity of visceral afferents and reduces the responses of cutaneous HTMRs to noxious stimuli.76 The TRP superfamily is subdivided into six subfamilies in mammals.78 Nearly all TRP subfamilies have members linked to mechanosensation in a variety of cell systems.79 In mammalian sensory neurones, however, TRP channels are best known for sensing thermal information and mediating neurogenic inflammation, and only two TRP channels, TRPV4 and TRPA1, have been implicated in touch responsiveness.
Disrupting TRPV4 expression in mice has only modest effects on acute mechanosensory thresholds but strongly reduces sensitivity to noxious mechanical stimuli.80,81 TRPV4 is a crucial determinant in shaping the response of nociceptive neurones to osmotic stress and to mechanical hyperalgesia during inflammation.82,83 TRPA1 also seems to have a role in mechanical hyperalgesia: TRPA1-deficient mice exhibit deficits in mechanical nociception. TRPA1 contributes to the transduction of mechanical, cold and chemical stimuli in nociceptor sensory neurones but appears not to be essential for hair-cell transduction.84,85 There is no clear evidence indicating that the TRP and ASIC channels expressed in mammals are mechanically gated. None of these channels, when expressed heterologously, recapitulates the electrical signature of the mechanosensitive currents observed in their native environment. This does not rule out the possibility that ASIC and TRP channels are mechanotransducers, given the uncertainty over whether a mechanotransduction channel can function outside of its cellular context (see the section on SLP3). Piezo proteins have recently been identified as promising candidate mechanosensing proteins by Coste and collaborators.86,87 Vertebrates have two Piezo members, Piezo 1 and Piezo 2, previously known as FAM38A and FAM38B, respectively, which are well conserved throughout multicellular eukaryotes. Piezo 2 is abundant in DRGs, whereas Piezo 1 is barely detectable. Piezo-induced mechanosensitive currents are inhibited by gadolinium, ruthenium red and GsMTx4 (a toxin from the tarantula Grammostola spatulata).88 Expression of Piezo 1 or Piezo 2 in heterologous systems produces mechanosensitive currents, with faster inactivation kinetics for the Piezo 2 current than for Piezo 1. Similar to endogenous mechanosensitive currents, Piezo-dependent currents have reversal potentials around 0 mV and are cation non-selective, with Na+, K+, Ca2+ and Mg2+ all permeating the underlying channel. Likewise, Piezo-dependent currents are regulated by membrane potential, with a marked slowing of current kinetics at depolarized potentials.86 Piezo proteins are undoubtedly mechanosensing proteins and share many properties with the rapidly adapting mechanosensitive currents of sensory neurones. Treatment of cultured DRG neurones with Piezo 2 short interfering RNA decreased the proportion of neurones with rapidly adapting currents and decreased the percentage of mechanosensitive neurones.86 Transmembrane domains are located throughout the Piezo proteins, but no obvious pore-containing motifs or ion channel signatures have been identified. However, mouse Piezo 1 protein, purified and reconstituted into asymmetric lipid bilayers and liposomes, forms ion channels sensitive to ruthenium red.87 An essential step in validating mechanotransduction through Piezo channels is to use in vivo approaches to determine their functional importance in touch signaling. A first indication comes from Drosophila, where deletion of the single Piezo member reduced behavioral responses to noxious mechanical stimuli without affecting normal touch.89 Although their structure remains to be determined, this novel family of mechanosensitive proteins is a promising subject for future research, beyond the borders of touch sensation.
For example, a recent study of patients with anemia (hereditary xerocytosis) shows a role for Piezo 1 in maintaining erythrocyte volume homeostasis.90 Another recent study indicates that two proteins, TMC1 and TMC2, are necessary for hair cell mechanotransduction.91 Hereditary deafness due to TMC1 gene mutations has been reported in humans and mice.92,93 The presence of these channels has not yet been demonstrated in the somatosensory system, but they are a promising lead to investigate. In addition to the transduction channels themselves, accessory proteins linked to the channel have been shown to play a role in touch sensitivity. SLP3 is expressed in mammalian DRG neurones. Studies using mutant mice lacking SLP3 have shown changes in mechanosensation and mechanosensitive currents.94,95 The precise function of SLP3 remains unknown; it may be a linker between the mechanosensitive channel and the underlying microtubules, as proposed for its C. elegans homolog MEC-2.96 Recently, the G.R. Lewin laboratory has suggested that a tether synthesized by DRG sensory neurones links mechanosensitive ion channels to the extracellular matrix.97 Disrupting this link abolishes the RA-mechanosensitive current, suggesting that some ion channels are mechanosensitive only when tethered. RA-mechanosensitive currents are also inhibited by laminin-332, a matrix protein produced by keratinocytes, reinforcing the hypothesis that mechanosensitive currents are modulated by extracellular proteins.98 In parallel to the cationic depolarizing mechanosensitive currents, the presence of repolarizing mechanosensitive K+ currents is under investigation. K+ channels in mechanosensitive cells can weigh in on the current balance and contribute to defining the mechanical threshold and the time course of adaptation of mechanoreceptors. KCNK members belong to the two-pore domain K+ channel (K2P) family.99,100 K2P channels display a remarkable range of regulation by cellular, physical and pharmacological agents, including pH changes, heat, stretch and membrane deformation, and they are active at the resting membrane potential. Several KCNK subunits are expressed in somatosensory neurones.101 KCNK2 (TREK-1), KCNK4 (TRAAK) and TREK-2 channels are among the few channels for which direct mechanical gating by membrane stretch has been shown.102,103 Mice with a disrupted KCNK2 gene displayed an enhanced sensitivity to heat and mild mechanical stimuli but a normal withdrawal threshold to noxious mechanical pressure applied to the hindpaw using the Randall–Selitto test.104 KCNK2-deficient mice also display increased thermal and mechanical hyperalgesia in inflammatory conditions. KCNK4 knockout mice were hypersensitive to mild mechanical stimulation, and this hypersensitivity was increased by additional inactivation of KCNK2.105 The increased mechanosensitivity of these knockout mice could mean that stretch normally activates both depolarizing and repolarizing mechanosensitive currents in a coordinated way, much as depolarizing and repolarizing voltage-gated currents balance one another. KCNK18 (TRESK) is a major contributor to the background K+ conductance that regulates the resting membrane potential of somatosensory neurones.106 Although it is not known whether KCNK18 is directly sensitive to mechanical stimulation, it may play a role in mediating responses to light touch as well as to painful mechanical stimuli.
KCNK18, and to a lesser extent KCNK3, are proposed to be the molecular targets of hydroxy-α-sanshool, a compound found in Szechuan peppercorns that activates touch receptors and induces a tingling sensation in humans.107,108 The voltage-dependent K+ channel KCNQ4 (Kv7.4) is crucial for setting the velocity and frequency preference of a subpopulation of rapidly adapting mechanoreceptors in both mice and humans. Mutations in KCNQ4 were initially associated with a form of hereditary deafness. Interestingly, a recent study localized KCNQ4 to the peripheral nerve endings of rapidly adapting cutaneous hair follicle afferents and Meissner corpuscles. Accordingly, loss of KCNQ4 function leads to a selective enhancement of mechanoreceptor sensitivity to low-frequency vibration. Notably, people with late-onset hearing loss due to dominant mutations of the KCNQ4 gene show enhanced performance in detecting small-amplitude, low-frequency vibration.109
Dr. Alex Jimenez's Insight
Touch is considered one of the most complex senses in the human body, particularly because there is no single organ in charge of it. Instead, the sense of touch occurs through sensory receptors, known as mechanoreceptors, which are found across the skin and respond to mechanical pressure or distortion. There are four main types of mechanoreceptors in the glabrous, or hairless, skin of mammals: lamellar corpuscles, tactile corpuscles, Merkel nerve endings and bulbous corpuscles. Mechanoreceptors allow the detection of touch, monitor the position of the muscles, bones and joints (proprioception), and even contribute to detecting sounds and the motion of the body. Understanding the structural and functional mechanisms of these mechanoreceptors is fundamental to the use of treatments and therapies for pain management.
Touch is a complex sense because it represents different tactile qualities, namely vibration, shape, texture, pleasure and pain, each with different discriminative performance. Up to now, the correspondence between a touch organ and the psychophysical sense has been correlative, and class-specific molecular markers are only just emerging. The development of rodent tests matching the diversity of touch behavior is now required to facilitate future genomic identification. The use of mice that lack specific subsets of sensory afferent types will greatly facilitate identification of the mechanoreceptors and sensory afferent fibers associated with a particular touch modality. Interestingly, a recent paper opens the important question of the genetic basis of mechanosensory traits in humans and suggests that a single gene mutation can negatively influence touch sensitivity.110 This underlines that the pathophysiology of human touch deficits is largely unknown; it would certainly progress through precise identification of the subsets of sensory neurones linked to a given touch modality or touch deficit.
In turn, progress has been made in defining the biophysical properties of mechano-gated currents.64 The development in recent years of new techniques that allow membrane tension changes to be monitored while mechano-gated currents are recorded has proved a valuable experimental approach for describing mechanosensitive currents with rapid, intermediate and slow adaptation (reviewed in Delmas and collaborators).66,111 The next step will be to determine the role of these current properties in the adaptation mechanisms of functionally diverse mechanoreceptors, and the contribution of mechanosensitive K+ currents to the excitability of LTMRs and HTMRs. The molecular nature of mechano-gated currents in mammals is also a promising research topic. Future research will progress along two lines: first, determining the role of the accessory molecules that tether channels to the cytoskeleton and may be required to confer or regulate the mechanosensitivity of ion channels such as those of the TRP and ASIC/ENaC families; and second, investigating the large and promising contribution of Piezo channels by answering key questions about their permeation and gating mechanisms, the subsets of sensory neurones and touch modalities that involve Piezo, and the role of Piezo in non-neuronal cells associated with mechanosensation.
The sense of touch, in contrast to sight, taste, hearing and smell, which rely on specific organs to process their sensations, can occur throughout the body through tiny receptors known as mechanoreceptors. Different types of mechanoreceptors can be found in the various layers of the skin, where they detect a wide array of mechanical stimulation. The article above highlights the progress made in understanding the structural and functional mechanisms of the mechanoreceptors associated with the sense of touch. Information referenced from the National Center for Biotechnology Information (NCBI). The scope of our information is limited to chiropractic as well as to spinal injuries and conditions. To discuss the subject matter, please feel free to ask Dr. Jimenez or contact us at 915-850-0900. Curated by Dr. Alex Jimenez
1. Moriwaki K, Yuge O. Topographical features of cutaneous tactile hypoesthetic and hyperesthetic abnormalities in chronic pain. Pain. 1999;81:1–6. doi: 10.1016/S0304-3959(98)00257-7. [PubMed] [Cross Ref] 2. Shim B, Kim DW, Kim BH, Nam TS, Leem JW, Chung JM. Mechanical and heat sensitization of cutaneous nociceptors in rats with experimental peripheral neuropathy. Neuroscience. 2005;132:193–201. doi: 10.1016/j.neuroscience.2004.12.036. [PubMed] [Cross Ref] 3. Kleggetveit IP, Jørum E. Large and small fiber dysfunction in peripheral nerve injuries with or without spontaneous pain. J Pain. 2010;11:1305–10. doi: 10.1016/j.jpain.2010.03.004. [PubMed] [Cross Ref] 4. Noback CR. Morphology and phylogeny of hair. Ann N Y Acad Sci. 1951;53:476–92. doi: 10.1111/j.1749-6632.1951.tb31950.x. [PubMed] [Cross Ref] 5. Straile WE. Atypical guard-hair follicles in the skin of the rabbit. Nature. 1958;181:1604–5. doi: 10.1038/1811604a0. [PubMed] [Cross Ref] 6. Straile WE. The morphology of tylotrich follicles in the skin of the rabbit. Am J Anat. 1961;109:1–13. doi: 10.1002/aja.1001090102. [PubMed] [Cross Ref] 7. Millard CL, Woolf CJ. Sensory innervation of the hairs of the rat hindlimb: a light microscopic analysis. J Comp Neurol. 1988;277:183–94. doi: 10.1002/cne.902770203. [PubMed] [Cross Ref] 8. Hamann W. Mammalian cutaneous mechanoreceptors. Prog Biophys Mol Biol.
1995;64:81–104. doi: 10.1016/0079-6107(95)00011-9. [Review] [PubMed] [Cross Ref] 9. Brown AG, Iggo A. A quantitative study of cutaneous receptors and afferent fibres in the cat and rabbit. J Physiol. 1967;193:707–33. [PMC free article] [PubMed] 10. Burgess PR, Petit D, Warren RM. Receptor types in cat hairy skin supplied by myelinated fibers. J Neurophysiol. 1968;31:833–48. [PubMed] 11. Driskell RR, Giangreco A, Jensen KB, Mulder KW, Watt FM. Sox2-positive dermal papilla cells specify hair follicle type in mammalian epidermis. Development. 2009;136:2815–23. doi: 10.1242/dev.038620. [PMC free article] [PubMed] [Cross Ref] 12. Hussein MA. The overall pattern of hair follicle arrangement in the rat and mouse. J Anat. 1971;109:307–16. [PMC free article] [PubMed] 13. Vielkind U, Hardy MH. Changing patterns of cell adhesion molecules during mouse pelage hair follicle development. 2. Follicle morphogenesis in the hair mutants, Tabby and downy. Acta Anat (Basel) 1996;157:183–94. doi: 10.1159/000147880. [PubMed] [Cross Ref] 14. Hardy MH, Vielkind U. Changing patterns of cell adhesion molecules during mouse pelage hair follicle development. 1. Follicle morphogenesis in wild-type mice. Acta Anat (Basel) 1996;157:169–82. doi: 10.1159/000147879. [PubMed] [Cross Ref] 15. Li L, Rutlin M, Abraira VE, Cassidy C, Kus L, Gong S, et al. The functional organization of cutaneous low-threshold mechanosensory neurons. Cell. 2011;147:1615–27. doi: 10.1016/j.cell.2011.11.027. [PMC free article] [PubMed] [Cross Ref] 16. Brown AG, Iggo A. A quantitative study of cutaneous receptors and afferent fibres in the cat and rabbit. J Physiol. 1967;193:707–33. [PMC free article] [PubMed] 17. Burgess PR, Petit D, Warren RM. Receptor types in cat hairy skin supplied by myelinated fibers. J Neurophysiol. 1968;31:833–48. [PubMed] 18. Vallbo A, Olausson H, Wessberg J, Norrsell U. A system of unmyelinated afferents for innocuous mechanoreception in the human skin. Brain Res. 1993;628:301–4. doi: 10.1016/0006-8993(93)90968-S. [PubMed] [Cross Ref] 19. Vallbo AB, Olausson H, Wessberg J. Unmyelinated afferents constitute a second system coding tactile stimuli of the human hairy skin. J Neurophysiol. 1999;81:2753–63. [PubMed] 20. Hertenstein MJ, Keltner D, App B, Bulleit BA, Jaskolka AR. Touch communicates distinct emotions. Emotion. 2006;6:528–33. doi: 10.1037/1528-3542.6.3.528. [PubMed] [Cross Ref] 21. McGlone F, Vallbo AB, Olausson H, Loken L, Wessberg J. Discriminative touch and emotional touch. Can J Exp Psychol. 2007;61:173–83. doi: 10.1037/cjep2007019. [PubMed] [Cross Ref] 22. Wessberg J, Olausson H, Fernström KW, Vallbo AB. Receptive field properties of unmyelinated tactile afferents in the human skin. J Neurophysiol. 2003;89:1567–75. doi: 10.1152/jn.00256.2002. [PubMed] [Cross Ref] 23. Liu Q, Vrontou S, Rice FL, Zylka MJ, Dong X, Anderson DJ. Molecular genetic visualization of a rare subset of unmyelinated sensory neurons that may detect gentle touch. Nat Neurosci. 2007;10:946–8. doi: 10.1038/nn1937. [PubMed] [Cross Ref] 24. Olausson H, Lamarre Y, Backlund H, Morin C, Wallin BG, Starck G, et al. Unmyelinated tactile afferents signal touch and project to insular cortex. Nat Neurosci. 2002;5:900–4. doi: 10.1038/nn896. [PubMed] [Cross Ref] 25. Olausson H, Wessberg J, Morrison I, McGlone F, Vallbo A. The neurophysiology of unmyelinated tactile afferents. Neurosci Biobehav Rev. 2010;34:185–91. doi: 10.1016/j.neubiorev.2008.09.011. [Review] [PubMed] [Cross Ref] 26.
Krämer HH, Lundblad L, Birklein F, Linde M, Karlsson T, Elam M, et al. Activation of the cortical pain network by soft tactile stimulation after injection of sumatriptan. Pain. 2007;133:72–8. doi: 10.1016/j.pain.2007.03.001. [PubMed] [Cross Ref] 27. Applebaum AE, Beall JE, Foreman RD, Willis WD. Organization and receptive fields of primate spinothalamic tract neurons. J Neurophysiol. 1975;38:572–86. [PubMed] 28. White JC, Sweet WH. Effectiveness of chordotomy in phantom pain after amputation. AMA Arch Neurol Psychiatry. 1952;67:315–22. [PubMed] 29. Halata Z, Grim M, Bauman KI. Friedrich Sigmund Merkel and his “Merkel cell”, morphology, development, and physiology: review and new results. Anat Rec A Discov Mol Cell Evol Biol. 2003;271:225–39. doi: 10.1002/ar.a.10029. [PubMed] [Cross Ref] 30. Morrison KM, Miesegaes GR, Lumpkin EA, Maricich SM. Mammalian Merkel cells are descended from the epidermal lineage. Dev Biol. 2009;336:76–83. doi: 10.1016/j.ydbio.2009.09.032. [PMC free article] [PubMed] [Cross Ref] 31. Van Keymeulen A, Mascre G, Youseff KK, Harel I, Michaux C, De Geest N, et al. Epidermal progenitors give rise to Merkel cells during embryonic development and adult homeostasis. J Cell Biol. 2009;187:91–100. doi: 10.1083/jcb.200907080. [PMC free article] [PubMed] [Cross Ref] 32. Ebara S, Kumamoto K, Baumann KI, Halata Z. Three-dimensional analyses of touch domes in the hairy skin of the cat paw reveal morphological substrates for complex sensory processing. Neurosci Res. 2008;61:159–71. doi: 10.1016/j.neures.2008.02.004. [PubMed] [Cross Ref] 33. Guinard D, Usson Y, Guillermet C, Saxod R. Merkel complexes of human digital skin: three-dimensional imaging with confocal laser microscopy and double immunofluorescence. J Comp Neurol. 1998;398:98–104. doi: 10.1002/(SICI)1096-9861(19980817)398:1<98::AID-CNE6>3.0.CO;2-4. [PubMed] [Cross Ref] 34. Reinisch CM, Tschachler E. The touch dome in human skin is supplied by different types of nerve fibers. Ann Neurol. 2005;58:88–95. doi: 10.1002/ana.20527. [PubMed] [Cross Ref] 35. Maricich SM, Morrison KM, Mathes EL, Brewer BM. Rodents rely on Merkel cells for texture discrimination tasks. J Neurosci. 2012;32:3296–300. doi: 10.1523/JNEUROSCI.5307-11.2012. [PMC free article] [PubMed] [Cross Ref] 36. Ikeda I, Yamashita Y, Ono T, Ogawa H. Selective phototoxic destruction of rat Merkel cells abolishes responses of slowly adapting type I mechanoreceptor units. J Physiol. 1994;479:247–56. [PMC free article] [PubMed] 37. Maricich SM, Wellnitz SA, Nelson AM, Lesniak DR, Gerling GJ, Lumpkin EA, et al. Merkel cells are essential for light-touch responses. Science. 2009;324:1580–2. doi: 10.1126/science.1172890. [PMC free article] [PubMed] [Cross Ref] 38. Diamond J, Holmes M, Nurse CA. Are Merkel cell-neurite reciprocal synapses involved in the initiation of tactile responses in salamander skin? J Physiol. 1986;376:101–20. [PMC free article] [PubMed] 39. Yamashita Y, Akaike N, Wakamori M, Ikeda I, Ogawa H. Voltage-dependent currents in isolated single Merkel cells of rats. J Physiol. 1992;450:143–62. [PMC free article] [PubMed] 40. Wellnitz SA, Lesniak DR, Gerling GJ, Lumpkin EA. The regularity of sustained firing reveals two populations of slowly adapting touch receptors in mouse hairy skin. J Neurophysiol. 2010;103:3378–88. doi: 10.1152/jn.00810.2009. [PMC free article] [PubMed] [Cross Ref] 41. Nandasena BG, Suzuki A, Aita M, Kawano Y, Nozawa-Inoue K, Maeda T. Immunolocalization of aquaporin-1 in the mechanoreceptive Ruffini endings in the periodontal ligament. 
Brain Res. 2007;1157:32–40. doi: 10.1016/j.brainres.2007.04.033. [PubMed] [Cross Ref] 42. Rahman F, Harada F, Saito I, Suzuki A, Kawano Y, Izumi K, et al. Detection of acid-sensing ion channel 3 (ASIC3) in periodontal Ruffini endings of mouse incisors. Neurosci Lett. 2011;488:173–7. doi: 10.1016/j.neulet.2010.11.023. [PubMed] [Cross Ref] 43. Johnson KO. The roles and functions of cutaneous mechanoreceptors. Curr Opin Neurobiol. 2001;11:455–61. doi: 10.1016/S0959-4388(00)00234-8. [Review] [PubMed] [Cross Ref] 44. Wende H, Lechner SG, Cheret C, Bourane S, Kolanczyk ME, Pattyn A, et al. The transcription factor c-Maf controls touch receptor development and function. Science. 2012;335:1373–6. doi: 10.1126/science.1214314. [PubMed] [Cross Ref] 45. Mendelson M, Lowenstein WR. Mechanisms of receptor adaptation. Science. 1964;144:554–5. doi: 10.1126/science.144.3618.554. [PubMed] [Cross Ref] 46. Loewenstein WR, Mendelson M. Components of receptor adaptation in a pacinian corpuscle. J Physiol. 1965;177:377–97. [PMC free article] [PubMed] 47. Pawson L, Prestia LT, Mahoney GK, Güçlü B, Cox PJ, Pack AK. GABAergic/glutamatergic-glial/neuronal interaction contributes to rapid adaptation in pacinian corpuscles. J Neurosci. 2009;29:2695–705. doi: 10.1523/JNEUROSCI.5974-08.2009. [PMC free article] [PubMed] [Cross Ref] 48. Basbaum AI, Jessell TM. The perception of pain. In: Kandel ER, Schwartz JH, Jessell TM, eds. Principles of neural science. Fourth edition. The McGraw-Hill Companies, 2000: 472-490. 49. Bourane S, Garces A, Venteo S, Pattyn A, Hubert T, Fichard A, et al. Low-threshold mechanoreceptor subtypes selectively express MafA and are specified by Ret signaling. Neuron. 2009;64:857–70. doi: 10.1016/j.neuron.2009.12.004. [PubMed] [Cross Ref] 50. Kramer I, Sigrist M, de Nooij JC, Taniuchi I, Jessell TM, Arber S. A role for Runx transcription factor signaling in dorsal root ganglion sensory neuron diversification. Neuron. 2006;49:379–93. doi: 10.1016/j.neuron.2006.01.008. [PubMed] [Cross Ref] 51. Luo W, Enomoto H, Rice FL, Milbrandt J, Ginty DD. Molecular identification of rapidly adapting mechanoreceptors and their developmental dependence on ret signaling. Neuron. 2009;64:841–56. doi: 10.1016/j.neuron.2009.11.003. [PMC free article] [PubMed] [Cross Ref] 52. Vallbo AB, Hagbarth KE. Activity from skin mechanoreceptors recorded percutaneously in awake human subjects. Exp Neurol. 1968;21:270–89. doi: 10.1016/0014-4886(68)90041-1. [PubMed] [Cross Ref] 53. Macefield VG. Physiological characteristics of low-threshold mechanoreceptors in joints, muscle and skin in human subjects. Clin Exp Pharmacol Physiol. 2005;32:135–44. doi: 10.1111/j.1440-1681.2005.04143.x. [Review] [PubMed] [Cross Ref] 54. Koizumi S, Fujishita K, Inoue K, Shigemoto-Mogami Y, Tsuda M, Inoue K. Ca2+ waves in keratinocytes are transmitted to sensory neurons: the involvement of extracellular ATP and P2Y2 receptor activation. Biochem J. 2004;380:329–38. doi: 10.1042/BJ20031089. [PMC free article] [PubMed] [Cross Ref] 55. Azorin N, Raoux M, Rodat-Despoix L, Merrot T, Delmas P, Crest M. ATP signalling is crucial for the response of human keratinocytes to mechanical stimulation by hypo-osmotic shock. Exp Dermatol. 2011;20:401–7. doi: 10.1111/j.1600-0625.2010.01219.x. [PubMed] [Cross Ref] 56. Amano M, Fukata Y, Kaibuchi K. Regulation and functions of Rho-associated kinase. Exp Cell Res. 2000;261:44–51. doi: 10.1006/excr.2000.5046. [Review] [PubMed] [Cross Ref] 57. Koyama T, Oike M, Ito Y.
Involvement of Rho-kinase and tyrosine kinase in hypotonic stress-induced ATP release in bovine aortic endothelial cells. J Physiol. 2001;532:759–69. doi: 10.1111/j.1469-7793.2001.0759e.x. [PMC free article] [PubMed] [Cross Ref] 58. Perl ER. Cutaneous polymodal receptors: characteristics and plasticity. Prog Brain Res. 1996;113:21–37. doi: 10.1016/S0079-6123(08)61079-1. [Review] [PubMed] [Cross Ref] 59. McCarter GC, Reichling DB, Levine JD. Mechanical transduction by rat dorsal root ganglion neurons in vitro. Neurosci Lett. 1999;273:179–82. doi: 10.1016/S0304-3940(99)00665-5. [PubMed] [Cross Ref] 60. Drew LJ, Wood JN, Cesare P. Distinct mechanosensitive properties of capsaicin-sensitive and -insensitive sensory neurons. J Neurosci. 2002;22:RC228. [PubMed] 61. Drew LJ, Rohrer DK, Price MP, Blaver KE, Cockayne DA, Cesare P, et al. Acid-sensing ion channels ASIC2 and ASIC3 do not contribute to mechanically activated currents in mammalian sensory neurones. J Physiol. 2004;556:691–710. doi: 10.1113/jphysiol.2003.058693. [PMC free article] [PubMed] [Cross Ref] 62. McCarter GC, Levine JD. Ionic basis of a mechanotransduction current in adult rat dorsal root ganglion neurons. Mol Pain. 2006;2:28. doi: 10.1186/1744-8069-2-28. [PMC free article] [PubMed] [Cross Ref] 63. Coste B, Crest M, Delmas P. Pharmacological dissection and distribution of NaN/Nav1.9, T-type Ca2+ currents, and mechanically activated cation currents in different populations of DRG neurons. J Gen Physiol. 2007;129:57–77. doi: 10.1085/jgp.200609665. [PMC free article] [PubMed] [Cross Ref] 64. Hao J, Delmas P. Multiple desensitization mechanisms of mechanotransducer channels shape firing of mechanosensory neurons. J Neurosci. 2010;30:13384–95. doi: 10.1523/JNEUROSCI.2926-10.2010. [PubMed] [Cross Ref] 65. Drew LJ, Wood JN. FM1-43 is a permeant blocker of mechanosensitive ion channels in sensory neurons and inhibits behavioural responses to mechanical stimuli. Mol Pain. 2007;3:1. doi: 10.1186/1744-8069-3-1. [PMC free article] [PubMed] [Cross Ref] 66. Hao J, Delmas P. Recording of mechanosensitive currents using piezoelectrically driven mechanostimulator. Nat Protoc. 2011;6:979–90. doi: 10.1038/nprot.2011.343. [PubMed] [Cross Ref] 67. Rugiero F, Drew LJ, Wood JN. Kinetic properties of mechanically activated currents in spinal sensory neurons. J Physiol. 2010;588:301–14. doi: 10.1113/jphysiol.2009.182360. [PMC free article] [PubMed] [Cross Ref] 68. Hu J, Lewin GR. Mechanosensitive currents in the neurites of cultured mouse sensory neurones. J Physiol. 2006;577:815–28. doi: 10.1113/jphysiol.2006.117648. [PMC free article] [PubMed] [Cross Ref] 69. Bhattacharya MR, Bautista DM, Wu K, Haeberle H, Lumpkin EA, Julius D. Radial stretch reveals distinct populations of mechanosensitive mammalian somatosensory neurons. Proc Natl Acad Sci U S A. 2008;105:20015–20. doi: 10.1073/pnas.0810801105. [PMC free article] [PubMed] [Cross Ref] 70. Crawford AC, Evans MG, Fettiplace R. Activation and adaptation of transducer currents in turtle hair cells. J Physiol. 1989;419:405–34. [PMC free article] [PubMed] 71. Ricci AJ, Wu YC, Fettiplace R. The endogenous calcium buffer and the time course of transducer adaptation in auditory hair cells. J Neurosci. 1998;18:8261–77. [PubMed] 72. Vollrath MA, Kwan KY, Corey DP. The micromachinery of mechanotransduction in hair cells. Annu Rev Neurosci. 2007;30:339–65. doi: 10.1146/annurev.neuro.29.051605.112917. [Review] [PMC free article] [PubMed] [Cross Ref] 73. Goodman MB, Schwarz EM. 
Transducing touch in Caenorhabditis elegans. Annu Rev Physiol. 2003;65:429–52. doi: 10.1146/annurev.physiol.65.092101.142659. [Review] [PubMed] [Cross Ref] 74. Waldmann R, Lazdunski M. H(+)-gated cation channels: neuronal acid sensors in the NaC/DEG family of ion channels. Curr Opin Neurobiol. 1998;8:418–24. doi: 10.1016/S0959-4388(98)80070-6. [Review] [PubMed] [Cross Ref] 75. Page AJ, Brierley SM, Martin CM, Martinez-Salgado C, Wemmie JA, Brennan TJ, et al. The ion channel ASIC1 contributes to visceral but not cutaneous mechanoreceptor function. Gastroenterology. 2004;127:1739–47. doi: 10.1053/j.gastro.2004.08.061. [PubMed] [Cross Ref] 76. Price MP, McIlwrath SL, Xie J, Cheng C, Qiao J, Tarr DE, et al. The DRASIC cation channel contributes to the detection of cutaneous touch and acid stimuli in mice. Neuron. 2001;32:1071–83. doi: 10.1016/S0896-6273(01)00547-5. [Erratum in: Neuron 2002 Jul 18;35] [PubMed] [Cross Ref] 77. Roza C, Puel JL, Kress M, Baron A, Diochot S, Lazdunski M, et al. Knockout of the ASIC2 channel in mice does not impair cutaneous mechanosensation, visceral mechanonociception and hearing. J Physiol. 2004;558:659–69. doi: 10.1113/jphysiol.2004.066001. [PMC free article] [PubMed] [Cross Ref] 78. Damann N, Voets T, Nilius B. TRPs in our senses. Curr Biol. 2008;18:R880–9. doi: 10.1016/j.cub.2008.07.063. [Review] [PubMed] [Cross Ref] 79. Christensen AP, Corey DP. TRP channels in mechanosensation: direct or indirect activation? Nat Rev Neurosci. 2007;8:510–21. doi: 10.1038/nrn2149. [Review] [PubMed] [Cross Ref] 80. Liedtke W, Tobin DM, Bargmann CI, Friedman JM. Mammalian TRPV4 (VR-OAC) directs behavioral responses to osmotic and mechanical stimuli in Caenorhabditis elegans. Proc Natl Acad Sci U S A. 2003;100(Suppl 2):14531–6. doi: 10.1073/pnas.2235619100. [PMC free article] [PubMed] [Cross Ref] 81. Suzuki M, Mizuno A, Kodaira K, Imai M. Impaired pressure sensation in mice lacking TRPV4. J Biol Chem. 2003;278:22664–8. doi: 10.1074/jbc.M302561200. [PubMed] [Cross Ref] 82. Liedtke W, Choe Y, Martí-Renom MA, Bell AM, Denis CS, Sali A, et al. Vanilloid receptor-related osmotically activated channel (VR-OAC), a candidate vertebrate osmoreceptor. Cell. 2000;103:525–35. doi: 10.1016/S0092-8674(00)00143-4. [PMC free article] [PubMed] [Cross Ref] 83. Alessandri-Haber N, Dina OA, Yeh JJ, Parada CA, Reichling DB, Levine JD. Transient receptor potential vanilloid 4 is essential in chemotherapy-induced neuropathic pain in the rat. J Neurosci. 2004;24:4444–52. doi: 10.1523/JNEUROSCI.0242-04.2004. [Erratum in: J Neurosci. 2004 Jun;24] [PubMed] [Cross Ref] 84. Bautista DM, Jordt SE, Nikai T, Tsuruda PR, Read AJ, Poblete J, et al. TRPA1 mediates the inflammatory actions of environmental irritants and proalgesic agents. Cell. 2006;124:1269–82. doi: 10.1016/j.cell.2006.02.023. [PubMed] [Cross Ref] 85. Kwan KY, Allchorne AJ, Vollrath MA, Christensen AP, Zhang DS, Woolf CJ, et al. TRPA1 contributes to cold, mechanical, and chemical nociception but is not essential for hair-cell transduction. Neuron. 2006;50:277–89. doi: 10.1016/j.neuron.2006.03.042. [PubMed] [Cross Ref] 86. Coste B, Mathur J, Schmidt M, Earley TJ, Ranade S, Petrus MJ, et al. Piezo1 and Piezo2 are essential components of distinct mechanically activated cation channels. Science. 2010;330:55–60. doi: 10.1126/science.1193270. [PMC free article] [PubMed] [Cross Ref] 87. Coste B, Xiao B, Santos JS, Syeda R, Grandl J, Spencer KS, et al. Piezo proteins are pore-forming subunits of mechanically activated channels. Nature. 2012;483:176–81.
doi: 10.1038/nature10812. [PMC free article] [PubMed] [Cross Ref] 88. Bae C, Sachs F, Gottlieb PA. The mechanosensitive ion channel Piezo1 is inhibited by the peptide GsMTx4. Biochemistry. 2011;50:6295–300. doi: 10.1021/bi200770q. [PMC free article] [PubMed] [Cross Ref] 89. Kim SE, Coste B, Chadha A, Cook B, Patapoutian A. The role of Drosophila Piezo in mechanical nociception. Nature. 2012;483:209–12. doi: 10.1038/nature10801. [PMC free article] [PubMed] [Cross Ref] 90. Zarychanski R, Schulz VP, Houston BL, Maksimova Y, Houston DS, Smith B, et al. Mutations in the mechanotransduction protein PIEZO1 are associated with hereditary xerocytosis. Blood. 2012;120:1908–15. doi: 10.1182/blood-2012-04-422253. [PMC free article] [PubMed] [Cross Ref] 91. Kawashima Y, Géléoc GS, Kurima K, Labay V, Lelli A, Asai Y, et al. Mechanotransduction in mouse inner ear hair cells requires transmembrane channel-like genes. J Clin Invest. 2011;121:4796–809. doi: 10.1172/JCI60405. [PMC free article] [PubMed] [Cross Ref] 92. Tlili A, Rebeh IB, Aifa-Hmani M, Dhouib H, Moalla J, Tlili-Chouchène J, et al. TMC1 but not TMC2 is responsible for autosomal recessive nonsyndromic hearing impairment in Tunisian families. Audiol Neurootol. 2008;13:213–8. doi: 10.1159/000115430. [PubMed] [Cross Ref] 93. Manji SS, Miller KA, Williams LH, Dahl HH. Identification of three novel hearing loss mouse strains with mutations in the Tmc1 gene. Am J Pathol. 2012;180:1560–9. doi: 10.1016/j.ajpath.2011.12.034. [PubMed] [Cross Ref] 94. Wetzel C, Hu J, Riethmacher D, Benckendorff A, Harder L, Eilers A, et al. A stomatin-domain protein essential for touch sensation in the mouse. Nature. 2007;445:206–9. doi: 10.1038/nature05394. [PubMed] [Cross Ref] 95. Martinez-Salgado C, Benckendorff AG, Chiang LY, Wang R, Milenkovic N, Wetzel C, et al. Stomatin and sensory neuron mechanotransduction. J Neurophysiol. 2007;98:3802–8. doi: 10.1152/jn.00860.2007. [PubMed] [Cross Ref] 96. Huang M, Gu G, Ferguson EL, Chalfie M. A stomatin-like protein necessary for mechanosensation in C. elegans. Nature. 1995;378:292–5. doi: 10.1038/378292a0. [PubMed] [Cross Ref] 97. Hu J, Chiang LY, Koch M, Lewin GR. Evidence for a protein tether involved in somatic touch. EMBO J. 2010;29:855–67. doi: 10.1038/emboj.2009.398. [PMC free article] [PubMed] [Cross Ref] 98. Chiang LY, Poole K, Oliveira BE, Duarte N, Sierra YA, Bruckner-Tuderman L, et al. Laminin-332 coordinates mechanotransduction and growth cone bifurcation in sensory neurons. Nat Neurosci. 2011;14:993–1000. doi: 10.1038/nn.2873. [PubMed] [Cross Ref] 99. Lesage F, Guillemare E, Fink M, Duprat F, Lazdunski M, Romey G, et al. TWIK-1, a ubiquitous human weakly inward rectifying K+ channel with a novel structure. EMBO J. 1996;15:1004–11. [PMC free article] [PubMed] 100. Lesage F. Pharmacology of neuronal background potassium channels. Neuropharmacology. 2003;44:1–7. doi: 10.1016/S0028-3908(02)00339-8. [Review] [PubMed] [Cross Ref] 101. Medhurst AD, Rennie G, Chapman CG, Meadows H, Duckworth MD, Kelsell RE, et al. Distribution analysis of human two pore domain potassium channels in tissues of the central nervous system and periphery. Brain Res Mol Brain Res. 2001;86:101–14. doi: 10.1016/S0169-328X(00)00263-1. [PubMed] [Cross Ref] 102. Maingret F, Patel AJ, Lesage F, Lazdunski M, Honoré E. Mechano- or acid stimulation, two interactive modes of activation of the TREK-1 potassium channel. J Biol Chem. 1999;274:26691–6. doi: 10.1074/jbc.274.38.26691. [PubMed] [Cross Ref] 103. 
Maingret F, Fosset M, Lesage F, Lazdunski M, Honoré E. TRAAK is a mammalian neuronal mechano-gated K+ channel. J Biol Chem. 1999;274:1381–7. doi: 10.1074/jbc.274.3.1381. [PubMed] [Cross Ref] 104. Alloui A, Zimmermann K, Mamet J, Duprat F, Noël J, Chemin J, et al. TREK-1, a K+ channel involved in polymodal pain perception. EMBO J. 2006;25:2368–76. doi: 10.1038/sj.emboj.7601116. [PMC free article] [PubMed] [Cross Ref] 105. Noël J, Zimmermann K, Busserolles J, Deval E, Alloui A, Diochot S, et al. The mechano-activated K+ channels TRAAK and TREK-1 control both warm and cold perception. EMBO J. 2009;28:1308–18. doi: 10.1038/emboj.2009.57. [PMC free article] [PubMed] [Cross Ref] 106. Dobler T, Springauf A, Tovornik S, Weber M, Schmitt A, Sedlmeier R, et al. TRESK two-pore-domain K+ channels constitute a significant component of background potassium currents in murine dorsal root ganglion neurones. J Physiol. 2007;585:867–79. doi: 10.1113/jphysiol.2007.145649. [PMC free article] [PubMed] [Cross Ref] 107. Bautista DM, Sigal YM, Milstein AD, Garrison JL, Zorn JA, Tsuruda PR, et al. Pungent agents from Szechuan peppers excite sensory neurons by inhibiting two-pore potassium channels. Nat Neurosci. 2008;11:772–9. doi: 10.1038/nn.2143. [PMC free article] [PubMed] [Cross Ref] 108. Lennertz RC, Tsunozaki M, Bautista DM, Stucky CL. Physiological basis of tingling paresthesia evoked by hydroxy-alpha-sanshool. J Neurosci. 2010;30:4353–61. doi: 10.1523/JNEUROSCI.4666-09.2010. [PMC free article] [PubMed] [Cross Ref] 109. Heidenreich M, Lechner SG, Vardanyan V, Wetzel C, Cremers CW, De Leenheer EM, et al. KCNQ4 K(+) channels tune mechanoreceptors for normal touch sensation in mouse and man. Nat Neurosci. 2012;15:138–45. doi: 10.1038/nn.2985. [PubMed] [Cross Ref] 110. Frenzel H, Bohlender J, Pinsker K, Wohlleben B, Tank J, Lechner SG, et al. A genetic basis for mechanosensory traits in humans. PLoS Biol. 2012;10:e1001318. doi: 10.1371/journal.pbio.1001318. [PMC free article] [PubMed] [Cross Ref] 111. Delmas P, Hao J, Rodat-Despoix L. Molecular mechanisms of mechanotransduction in mammalian sensory neurons. Nat Rev Neurosci. 2011;12:139–53. doi: 10.1038/nrn2993. [PubMed] [Cross Ref]
Back pain is one of the most prevalent causes of disability and missed days at work worldwide. In fact, back pain is cited as the second most common reason for doctor office visits, outnumbered only by upper-respiratory infections. Approximately 80 percent of the population will experience some type of back pain at least once in their lifetime. The spine is a complex structure made up of bones, joints, ligaments and muscles, among other soft tissues. Because of this, injuries and/or aggravated conditions, such as herniated discs, can eventually lead to symptoms of back pain. Sports injuries and automobile accident injuries are among the most frequent causes of back pain; however, sometimes the simplest of movements can have painful results. Fortunately, alternative treatment options, such as chiropractic care, can help ease back pain through the use of spinal adjustments and manual manipulations, ultimately improving pain relief. The information herein on "Structural and Functional Mechanisms of Mechanoreceptors" is not intended to replace a one-on-one relationship with a qualified health care professional, or licensed physician, and is not medical advice. We encourage you to make your own healthcare decisions based on your research and partnership with a qualified healthcare professional.
Our information scope is limited to Chiropractic, musculoskeletal, physical medicines, wellness, contributing etiological viscerosomatic disturbances within clinical presentations, associated somatovisceral reflex clinical dynamics, subluxation complexes, sensitive health issues, and/or functional medicine articles, topics, and discussions. We provide and present clinical collaboration with specialists from a wide array of disciplines. Each specialist is governed by their professional scope of practice and their jurisdiction of licensure. We use functional health & wellness protocols to treat and support care for the injuries or disorders of the musculoskeletal system. Our videos, posts, topics, subjects, and insights cover clinical matters, issues, and topics that relate to and support, directly or indirectly, our clinical scope of practice.* Our office has made a reasonable attempt to provide supportive citations and has identified the relevant research study or studies supporting our posts. We provide copies of supporting research studies available to regulatory boards and the public upon request. We understand that we cover matters that require an additional explanation of how they may assist in a particular care plan or treatment protocol; therefore, to further discuss the subject matter above, please feel free to ask Dr. Alex Jimenez DC or contact us at 915-850-0900. We are here to help you and your family. Dr. Alex Jimenez DC, MSACP, CIFM*, IFMCP*, ATN*, CCST
A Poisson process, or Poisson point process, describes a process where certain events occur at a constant average rate, but at random and independently of each other. A Poisson distribution is a discrete probability distribution that measures the probability of a certain number of events occurring within a specified period of time, given that these events occur at a constant average rate and independently of the previous event. What Is a Poisson Distribution and a Poisson Process? A Poisson distribution model helps find the probability of a given number of events in a time period, or the probability of waiting time until the next event in a Poisson process (where certain events occur randomly and independently, but at a constant average rate). What Is a Poisson Process? A Poisson process is a model for a series of discrete events where the average time between events is known, but the exact timing of events is random. The arrival of an event is independent of the event before (waiting time between events is memoryless). For example, suppose we own a website that our content delivery network (CDN) tells us goes down on average once per 60 days, but one failure doesn't affect the probability of the next. All we know is the average time between failures. The failures form a Poisson process: we know the average time between events, but the events are randomly spaced in time (stochastic). We might have back-to-back failures, but we could also go years between failures because the process is stochastic. A Poisson process meets the following criteria (in reality, many phenomena modeled as Poisson processes don't precisely match these but can be approximated as such): Poisson Process Criteria - Events are independent of each other. The occurrence of one event does not affect the probability that another event will occur. - The average rate (events per time period) is constant. - Two events cannot occur at the same time. The last point (events are not simultaneous) means we can think of each sub-interval of a Poisson process as a Bernoulli trial, that is, either a success or a failure. With our website, the entire interval in consideration is 60 days, but within each sub-interval (one day) our website either goes down or it doesn't. Common examples of Poisson processes are customers calling a help center, visitors to a website, radioactive decay in atoms, photons arriving at a space telescope and movements in a stock price. Poisson processes are generally associated with time, but they don't have to be. In the case of stock prices, we might know the average movements per day (events per time), but we could also have a Poisson process for the number of trees in an acre (events per area). One example of a Poisson process we often see is bus arrivals (or trains). However, this isn't a proper Poisson process because the arrivals aren't independent of one another. Even for bus systems that run on time, a late arrival from one bus can impact the next bus's arrival time. Jake VanderPlas has a great article on applying a Poisson process to bus arrival times, which works better with made-up data than real-world data. What Is a Poisson Distribution? The Poisson distribution and its formula help find the probability of a given number of events in a time period, or the probability of waiting some time until the next event. As the Poisson process is a model for describing randomly occurring events (which by itself isn't that useful), the Poisson distribution helps make sense of the Poisson process model.
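To make the website example concrete, here is a minimal simulation sketch. It assumes NumPy, and the specific numbers (a 60-day average, ten failures, the seed) are invented for illustration; it is not the article's notebook code:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

mean_days_between_failures = 60
n_failures = 10

# In a Poisson process, the gaps between events are exponentially distributed.
gaps = rng.exponential(scale=mean_days_between_failures, size=n_failures)
failure_days = np.cumsum(gaps)  # running clock: the day each failure occurs

print("Days between failures:", np.round(gaps, 1))
print("Failure dates (day #):", np.round(failure_days, 1))
print("Average gap:", round(gaps.mean(), 1), "days")
```

Because the gaps are drawn from an exponential distribution, some failures land nearly back-to-back while others are separated by many months, exactly the stochastic spacing described above.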
The Poisson distribution probability mass function (pmf) gives the probability of observing k events in a time period, given the length of the period and the average events per time. We can use the Poisson distribution pmf to find the probability of observing a number of events over an interval generated by a Poisson process. Another use of the mass function equation (as we'll see later) is to find the probability of waiting a given amount of time between events. Poisson Distribution Formula The Poisson distribution formula, which gives the pmf, is as follows: P(k events in interval) = (events/time × time period)^k × e^(−(events/time × time period)) / k! The pmf is a little convoluted, and we can simplify events/time × time period into a single parameter, lambda (λ), the rate parameter. With this substitution, the Poisson distribution probability function now has one parameter: P(k) = λ^k × e^(−λ) / k! In the Poisson distribution formula: - k is the number of events that occurred in a given time period or interval - k! is the factorial of k - e is Euler's number (≈ 2.71828) - λ is the expected number of events in the given time period or interval - P(k) is the probability that an event will occur k times Rate Parameter and Poisson Distribution As for lambda, or λ, we can think of this as the rate parameter or expected number of events in the interval. (We'll switch to calling this an interval because, remember, the Poisson process doesn't always use a time period.) I like to write out lambda to remind myself the rate parameter is a function of both the average events per time and the length of the time period, but you'll most commonly see it as above. (The discrete nature of the Poisson distribution is why this is a probability mass function and not a density function.) As we change the rate parameter, λ, we change the probability of seeing different numbers of events in one interval. The graph below is the probability mass function of the Poisson distribution and shows the probability (y-axis) of a number of events (x-axis) occurring in one interval with different rate parameters. The most likely number of events in one interval for each curve is the curve's rate parameter. This makes sense because the rate parameter is the expected number of events in one interval. Therefore, the rate parameter represents the number of events with the greatest probability when the rate parameter is an integer. When the rate parameter is not an integer, the highest-probability number of events will be the nearest integer to the rate parameter. (The rate parameter is also the mean and variance of the distribution, which don't need to be integers.) Poisson Distribution Use Cases Predicting Website Visits Using the Poisson distribution, we could model the probability of seeing a certain number of website visits in one day. For example, let's say that in one day, a given website is visited 10 times. From here, the Poisson distribution formula could determine how probable it is for the website to receive one visit, or possibly 100 visits, within another day's period. Predicting Hotel Bookings The Poisson distribution can also be used to measure the probability of having a specific number of hotel bookings in one week. Observing 100 guests book rooms at a given hotel during a period of one week can help predict the probability of getting 50, 75 or another number of bookings at that same hotel in a week. Predicting the Sales of a Product The Poisson distribution can also help provide the probability of how many units of a certain product will be sold within one month. Let's use a new smartphone model as an example.
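As a quick sanity check of the formula, here is a tiny sketch in plain Python; the rate of 10 visits per day echoes the website use case above:

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing exactly k events when lam events are expected."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# A site averaging 10 visits/day: probability of exactly 8 visits tomorrow.
print(round(poisson_pmf(8, lam=10), 4))   # ~0.1126
```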
This smartphone model was sold 10,000 times in one month, so how probable is it that the model will sell 5,000 times in one month? Or maybe 20,000 times? The Poisson distribution formula could be applied here. Poisson Distribution Example: Meteor Showers We could continue with website failures to illustrate a problem solvable with a Poisson distribution, but I propose something grander. When I was a child, my father would sometimes take me into our yard to observe (or try to observe) meteor showers. We weren't space geeks, but watching objects from outer space burn up in the sky was enough to get us outside, even though meteor showers always seemed to occur in the coldest months. We can model the number of meteors seen as a Poisson distribution because the meteors are independent, the average number of meteors per hour is constant (in the short term), and (this is an approximation) meteors don't occur at the same time. All we need to characterize the Poisson distribution is the rate parameter, the number of events per interval × the interval length. In a typical meteor shower, we can expect five meteors per hour on average, or one every 12 minutes. Due to the limited patience of a young child (especially on a freezing night), we never stayed out more than 60 minutes, so we'll use that as the time period. From these values, we get: λ = (5 meteors per hour) × (1 hour) = 5 expected meteors. An expectation of five meteors means that five is the most likely number of meteors we'd observe in an hour. According to my pessimistic dad, that meant we'd see three meteors in an hour, tops. To test his prediction against the model, we can use the Poisson pmf to find the probability of seeing exactly three meteors in one hour: P(3) = 5³ × e^(−5) / 3! ≈ 0.14 We get 14 percent, or about 1/7. If we went outside and observed for one hour every night for a week, then we could expect my dad to be right once! We can use other values in the equation to get the probability of different numbers of events and construct the pmf distribution. Doing this by hand is tedious, so we'll use Python for calculation and visualization (which you can see in this Jupyter Notebook). The graph below shows the probability mass function for the number of meteors in an hour with an average of 12 minutes between meteors, the rate parameter (which is the same as saying five meteors expected in an hour). The most likely number of meteors is five, the rate parameter of the distribution. (Due to a quirk of the numbers, four and five have the same probability, 18 percent.) As with any distribution, there is one most likely value, but there is also a wide range of possible values. For example, we could see zero meteors, or we could see more than 10 in one hour. To find the probabilities of these events, we use the same equation but, this time, calculate sums of probabilities (see notebook for details). We already calculated the chance of seeing precisely three meteors as about 14 percent. The chance of seeing three or fewer meteors in one hour is 27 percent, which means the probability of seeing more than three is 73 percent. Likewise, the probability of more than five meteors is 38.4 percent, while we could expect to see five or fewer meteors in 61.6 percent of hours. Although it's small, there is a 1.4 percent chance of observing more than ten meteors in an hour! To visualize these possible scenarios, we can run an experiment by having our sister record the number of meteors she sees every hour for 10,000 hours. The results are in the histogram below: (This is just a simulation. No sisters were harmed in the making of this article.)
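A sketch of the probabilities quoted above, together with the simulated sister experiment (NumPy assumed; the seed is arbitrary):

```python
import math
import numpy as np

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 5  # five meteors expected per hour

print(round(poisson_pmf(3, lam), 3))                              # ~0.140 (dad's guess)
print(round(sum(poisson_pmf(k, lam) for k in range(4)), 3))       # P(<=3) ~0.265
print(round(1 - sum(poisson_pmf(k, lam) for k in range(6)), 3))   # P(>5)  ~0.384

# The "sister experiment": 10,000 observed hours of meteor counts.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=5, size=10_000)
print(round(counts.mean(), 2))   # close to 5, the rate parameter
```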
On a few lucky nights, we'd see 10 or more meteors in an hour, although more often, we'd see four or five meteors. Experimenting With the Poisson Distribution Rate Parameter The rate parameter, λ, is the only number we need to define the Poisson distribution. However, since it's a product of two parts (events/interval × interval length), there are two ways to change it: we can increase or decrease the events/interval, and we can increase or decrease the interval length. First, let's change the rate parameter by increasing or decreasing the number of meteors per hour to see how those shifts affect the distribution. For this graph, we're keeping the time period constant at 60 minutes. In each case, the most likely number of meteors in one hour is the expected number of meteors, the rate parameter. For example, at 12 meteors per hour (MPH), our rate parameter is 12, and there's an 11 percent chance of observing exactly 12 meteors in one hour. If our rate parameter increases, we should expect to see more meteors per hour. Another option is to increase or decrease the interval length. Here's the same plot, but this time we're keeping the number of meteors per hour constant at five and changing the length of time we observe. It's no surprise that we expect to see more meteors the longer we stay out. Using Poisson Distribution to Determine Poisson Process Waiting Time An intriguing part of a Poisson process involves figuring out how long we have to wait until the next event (sometimes called the interarrival time). Consider the situation: meteors appear once every 12 minutes on average. How long can we expect to wait to see the next meteor if we arrive at a random time? My dad always (this time optimistically) claimed we only had to wait six minutes for the first meteor, which agrees with our intuition. Let's use statistics and parts of the Poisson distribution formula to see if our intuition is correct. I won't go into the derivation (it comes from the probability mass function equation), but the time we can expect to wait between events is a decaying exponential. The probability of waiting a given amount of time between successive events decreases exponentially as time increases. The following equation shows the probability of waiting more than a specified time t: P(T > t) = e^(−(events/time) × t) With our example, we have one event per 12 minutes, and if we plug in the numbers we get P(T > 6) = e^(−6/12) ≈ 0.6065, a 60.65 percent chance of waiting more than six minutes. So much for my dad's guess! We can expect to wait more than 30 minutes about 8.2 percent of the time. (Note this is the time between each successive pair of events. The waiting times between events are memoryless, so the time between two events has no effect on the time between any other events. This memorylessness is also known as the Markov property.) A graph helps us to visualize the exponentially decaying probability of waiting time: there is a 100 percent chance of waiting more than zero minutes, which drops off to a near-zero percent chance of waiting more than 80 minutes. Again, as this is a distribution, there's a wide range of possible interarrival times. Rearranging the equation, we can use it to find the probability of waiting less than or equal to a time: P(T ≤ t) = 1 − e^(−(events/time) × t) We can expect to wait six minutes or less to see a meteor 39.4 percent of the time. We can also find the probability of waiting between two lengths of time: P(t1 < T < t2) = e^(−t1/12) − e^(−t2/12) There's a 57.72 percent probability of waiting between 5 and 30 minutes to see the next meteor.
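All of the waiting-time probabilities above can be checked in a few lines of plain Python:

```python
import math

rate = 1 / 12  # one meteor per 12 minutes

def p_wait_more_than(t_minutes):
    """P(T > t) for an exponential interarrival time."""
    return math.exp(-rate * t_minutes)

print(round(p_wait_more_than(6), 4))                         # 0.6065 -> more than 6 min
print(round(p_wait_more_than(30), 4))                        # 0.0821 -> more than 30 min
print(round(1 - p_wait_more_than(6), 4))                     # 0.3935 -> 6 min or less
print(round(p_wait_more_than(5) - p_wait_more_than(30), 4))  # 0.5772 -> between 5 and 30 min
```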
To visualize the distribution of waiting times, we can once again run a (simulated) experiment. We simulate watching for 100,000 minutes with an average rate of one meteor per 12 minutes. Then we find the waiting time between each meteor we see and plot the distribution. The most likely waiting time is one minute, but that's distinct from the average waiting time. Let's try to answer the question: on average, how long can we expect to wait between meteor observations? To answer the average waiting time question, we'll run 10,000 separate trials, each time watching the sky for 100,000 minutes, and record the time between each meteor. The graph below shows the distribution of the average waiting time between meteors from these trials: the average of the 10,000 runs is 12.003 minutes. Surprisingly, this average is also the average waiting time to see the first meteor if we arrive at a random time. At first, this may seem counterintuitive: if events occur on average every 12 minutes, then why do we have to wait the entire 12 minutes before seeing one event? The answer is that we are calculating an average waiting time, taking into account all possible situations. If the meteors came precisely every 12 minutes with no randomness in arrivals, then the average time we'd have to wait to see the first one would be six minutes. However, because the waiting time follows an exponential distribution, sometimes we show up and have to wait an hour, which outweighs the more frequent times when we wait fewer than 12 minutes. The average time to see the first meteor, averaged over all the occurrences, will be the same as the average time between events. The fact that the average waiting time for the first event equals the average time between events in a Poisson process is known as the Waiting Time Paradox. As a final visualization, let's do a random simulation of one hour of observation. Well, this time we got precisely the result we expected: five meteors. We had to wait 15 minutes for the first one, then 12 minutes for the next. In this case, it'd be worth going out of the house for celestial observation! The next time you find yourself losing focus in statistics, you have my permission to stop paying attention to the teacher. Instead, find an interesting problem and solve it using the statistics you're trying to learn. Applying technical concepts helps you learn the material and better appreciate how stats help us understand the world. Above all, stay curious: there are many amazing phenomena in the world, and data science is an excellent tool for exploring them. Frequently Asked Questions How do you know when to use a Poisson distribution? You can use a Poisson distribution when you need to find the probability of a number of events happening within a given interval of time or space. These events must occur at random, independently of each other and at a constant average rate to be applicable for a Poisson distribution. What are the criteria for a Poisson process? For a process of events to be a Poisson process, these events must occur at a constant average rate, independently of each other and with no two events occurring at the same time.
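As a final check, the Waiting Time Paradox itself can be verified numerically in a few lines (NumPy assumed; by memorylessness, the wait for the first meteor after arriving at a random moment is just an exponential draw):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

# Time from a random arrival until the FIRST meteor, repeated many times.
first_waits = rng.exponential(scale=12, size=n_trials)
print(round(first_waits.mean(), 2))   # ~12 minutes, not 6: the Waiting Time Paradox
```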
How to Calculate Momentum and Energy A. Momentum. Overview and Definition The reason why the term momentum is so often used in sports is that it conveys a sense of movement that requires a real concerted effort to stop; for example, the Denver Broncos have true momentum this season, and it looks as if they will go all the way to the Super Bowl. But I bet you did not know that momentum originated as a physics term. Specifically, it refers to the quantity of motion an object possesses. Akin to a sports team on the move, an object that is in motion has momentum. To further expound on this theme, momentum can be defined as mass in motion. This is true because all objects have mass, so if an object is in motion then it has momentum, and if it has momentum, it is mass in motion. How much momentum an object has is dependent upon two variables: - The amount of mass that is moving. - The speed at which the mass is moving. This is captured in the equation p = m × v: momentum is directly proportional to an object's mass and directly proportional to the object's velocity. The standard metric unit of momentum is the kg·m/s. Momentum is a vector quantity. Essentially, a vector quantity is one that can be fully expressed only by both magnitude and direction. To accurately and completely describe the momentum of a 10 kg bowling ball moving westward at 2 m/s, you must include information about both the magnitude of the momentum and the direction in which the ball is headed. It is not enough to say that the ball has 20 kg·m/s of momentum, because this does not fully describe the ball's motion. The direction of the momentum vector is the same as the direction of the velocity of the ball, and the direction of the velocity vector is the same as the direction in which the object is moving. If the bowling ball is moving westward, then its momentum can be fully described by saying that it is 20 kg·m/s, westward. As a vector quantity, the momentum of an object needs to be fully described using both magnitude and direction. From the definition of momentum, we understand that an object with tremendous mass moving at even a modest speed will have notable momentum. This is because both variables (mass and velocity) are of equal importance when determining the momentum of an object. Consider a Hummer and a person rollerblading moving down the street at the same speed. The considerably greater mass of the Hummer gives it a considerably greater momentum. Yet if the Hummer were at rest, then the momentum of the person rollerblading, albeit far less massive, would exceed that of the Hummer, since the momentum of any object at rest is 0. When an object is at rest, it has zero momentum: no mass in motion. So, once again, both variables, mass and velocity, are of equal importance when comparing the momentum of two objects. B. Momentum and Impulse Connection In the previous sports analogy, we noted that once a team has momentum, it may be hard to stop. This analogy can be extended to objects in general. To stop an object with momentum, it is necessary to apply a force against its motion for a set amount of time. The more momentum an object has, the harder it is going to be to stop. Bringing an object with momentum to a halt may require either a greater applied force or a longer duration of applied force (or a combination of both).
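A worked sketch of momentum as a vector, p = m × v, using the bowling ball from section A (NumPy assumed; the 3,000 kg Hummer mass is an invented illustration, not a figure from the text):

```python
import numpy as np

def momentum(mass_kg, velocity_ms):
    """Momentum vector p = m * v."""
    return mass_kg * np.asarray(velocity_ms, dtype=float)

ball = momentum(10, [-2.0, 0.0])         # 10 kg bowling ball, 2 m/s westward (-x)
print(ball)                              # [-20.  0.] -> 20 kg·m/s, westward
print(np.linalg.norm(ball))              # magnitude: 20.0

hummer_at_rest = momentum(3000, [0.0, 0.0])   # assumed 3,000 kg, not moving
print(np.linalg.norm(hummer_at_rest))         # 0.0 -- no motion, no momentum
```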
When a force acts upon the object, it changes the object's velocity, which in turn alters the object's momentum. This concept can be applied to a number of everyday situations. Imagine driving down a residential street at a speed between 20 and 30 mph because you are in a school zone. Out of nowhere, a ball rolls out in front of your car, followed by a young boy chasing after it. Fortunately, you are able to stop before hitting him because you weren't going very fast to begin with. However, imagine if you were on the expressway going between 65 and 70 mph and the same situation were to occur; it would prove much more difficult for you to stop because you were traveling at a significantly greater speed. In both scenarios, the brakes of your car provide the force that works against your momentum, but in the first case, you have less slowing down to do (and probably more time) than you would in the second situation. The rule that applies is as follows: a force acting in resistance for a given amount of time has the ability to change an object's momentum. In other words, an unbalanced force can either accelerate or decelerate an object. If the force acts opposite to the object's motion, it will slow the object down. If a force acts in the same direction as the object's motion, the force will increase the object's speed (velocity). In either case, a force has the capability to alter the velocity of an object. And the by-product of a change in an object's velocity is a change in the object's momentum. This concept is consistent with Isaac Newton's second law. As Newton expressed in his second law (Fnet = m × a), the acceleration of an object is directly proportional to the net force acting upon the object and inversely proportional to the mass of the object. When combined with the definition of acceleration (a = change in velocity/time), the following equality results: F = m × (Δv/t). When both sides of the above equation are multiplied by the quantity t, the result is a new equation: F × t = m × Δv. C. Impulse or Change in Momentum When discussing momentum, a subject that tends to arise is that of collisions. Because the physics of collisions is governed by the laws of momentum, there must be a law dictating what happens in an impulse-momentum change. Hence, the law that covers this area is as follows: in a collision, an object experiences a force for a specific amount of time, which results in a change in momentum (the object may either speed up or slow down). Impulse is produced when a force is applied to an object over a period of time. i = FΔt ... (1) i = Δp = mΔv ... (2) Therefore, in a collision, objects experience an impulse; the impulse causes (and is equal to) the change in momentum. Consider a bumper car interacting with numerous other bumper cars. Initially, there is an impact and then a jolt backwards. On account of the collision, the bumper car loses its momentum, or ability to go forward. D. Energy. Overview and Definition Energy is a very useful, albeit abstract, quantity. Within physics, one of the most important concepts to internalize and utilize in all areas of the science is that the total energy of any closed physical system always remains constant. This is what is known as the "Law of Conservation of Energy." Energy has the dimensions of mass times velocity squared, as in Einstein's famous E = mc². The common units of energy are the joule and the erg, as well as the Btu, calorie, and kilowatt-hour.
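Before moving deeper into energy, here is a small numeric sketch of the impulse-momentum equations (1) and (2) above. The 1,500 kg car mass, the 29 m/s (roughly 65 mph) initial speed, and the 5-second stopping time are assumed values for illustration:

```python
mass = 1500.0          # kg (assumed)
delta_v = 0.0 - 29.0   # m/s: final minus initial velocity

impulse = mass * delta_v        # i = Δp = mΔv  ... equation (2)
stop_time = 5.0                 # s (assumed)
force = impulse / stop_time     # solving i = FΔt ... equation (1) for F

print(impulse)  # -43500.0 kg·m/s
print(force)    # -8700.0 N (negative: the braking force opposes the motion)
```

Doubling the stopping time halves the required braking force, which is exactly the greater-force-or-longer-duration trade-off described in section B.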
In physics, a central measured quantity is known as "work" (the product of applied force and distance), and it is expressed in units of energy. Quite notably, proving that heat is a form of energy was one of the most pivotal developments in classical physics and thermodynamics. Energy can also be expressed as the product of power and the length of time over which it is delivered. The two most common types of energy are kinetic energy (energy of motion, defined by ½mv², where m is the mass and v is the velocity) and potential energy (stored energy). Potential energy can take on a range of different forms, including gravitational potential energy (U = −GMm/r, in which G is the universal gravitational constant, M and m are the masses of two interacting particles, and r is the distance between the masses), and electric potential energy (U = QV, where Q is the charge and V is the voltage). E. Environmental Sources of Energy A store of energy is called a fuel. A high level of energy is stored in the following three fuels: coal, natural gas, and oil. Hence, these are the three most widely used and coveted energy sources in the world. Unfortunately, when these fuels are burned in chemical reactions to release their intrinsic energy, the original fuel mass is depleted and cannot be replenished without exerting more energy than the amount that would be obtained. The laws of physics dictate that energy can change from one form to another and even into matter. Though there are various forms of energy, as we have previously mentioned, for example kinetic energy (motion), potential energy (stored), mechanical energy, and nuclear energy, we cannot directly draw on these sources to turn on the television or generate light. This is where electricity comes into the picture. At present, electricity is the primary form of energy consumed by the world's population. Because of this mass consumption, power plants have been created to convert heat from burning fuels and kinetic energy from falling water into electricity, so that energy can flow through wires and into our homes.
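A short sketch of the two energy formulas above, in plain Python. The car numbers come from the impulse sketch earlier; Earth's mass and radius are standard reference values used purely for illustration:

```python
G = 6.674e-11  # universal gravitational constant, N·m²/kg²

def kinetic_energy(m, v):
    return 0.5 * m * v ** 2    # KE = ½mv²

def gravitational_pe(M, m, r):
    return -G * M * m / r      # U = -GMm/r (zero at infinite separation)

print(kinetic_energy(1500, 29))                   # ~630,750 J for the braking car
print(gravitational_pe(5.972e24, 1.0, 6.371e6))   # ~-6.25e7 J per kg at Earth's surface
```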
Finding the Gradient of a Curve at a Point The gradient of a graph often represents an important physical quantity. The gradient of a distance-time graph is the velocity, and the gradient of a graph of potential energy against distance is the negative of the force acting on a body. Actually, to be completely accurate, the gradient at a point on a distance-time graph is the velocity at that time (the time is read off the horizontal axis), and the force on a body is the negative of the gradient of the potential energy plotted against distance at that point. To find the gradient of a graph at a point, draw a tangent to the graph and construct a right-angled triangle with the base parallel to the horizontal axis and the height parallel to the vertical axis. The gradient will then be the height divided by the base, with the proviso that the gradient is positive if the graph is going up from left to right and negative if the graph is going down from left to right. The height of the triangle above is about 3 − 0.8 = 2.2 and the base is about 6.5 − 0.1 = 6.4, so the gradient is 2.2/6.4 ≈ 0.34 (positive, since the graph is going up from left to right).
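Where the curve is available as a function rather than a drawing, the tangent-triangle construction can be approximated numerically with a central difference. A minimal sketch in Python (the curve f(x) = x² is an invented stand-in for any smooth graph):

```python
def f(x):
    return x ** 2   # stand-in curve

def gradient_at(f, x, h=1e-6):
    """Central difference: rise over run across a tiny triangle around x."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(gradient_at(f, 3.0))   # ~6.0, the slope of the tangent at x = 3
```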
The net electrostatic force on a charge is found by adding together, as vectors, the individual electric forces exerted on it by every other charge present. The attractive or repulsive force between any two charged bodies, due to the presence of electric charges, brings the concept of electrostatic force into action. In classical physics, when certain materials are rubbed against each other, lightweight charged particles known as electrons are transferred from one to the other. The force exerted between the resulting charges is known as electrostatic force and is described by Coulomb's law. In simple words, the electrostatic force is the one that exists between charges. "Static" means that the charges are not moving, or are moving very slowly. Now that we have an idea about the electric force between these static charges, let us go further into more detail about the phenomenon. To start with, the electrostatic force is otherwise called the Coulomb force. This is the force exerted by one charge on another when they are separated by a distance. The force on a charge q in an electric field E is F = qE. The vector sum of all such forces gives the net electrostatic force. For a better understanding, we take a few examples from daily life: - When a comb that has been rubbed through oily hair is held over small pieces of paper, the paper is attracted to it by electrostatic force. - When one balloon is rubbed over another balloon, one of which has been rubbed with hair, an electrostatic force is produced. One point to remember is that the electrostatic force is a non-contact force; there is zero contact between the objects, which are either pulled toward or pushed away from each other. Net electrostatic formula The net electrostatic force formula is F = (k q1 q2)/r². k = proportionality constant q1, q2 = the interacting charges (the sign combinations can be − +, + −, − −, or + +) r = distance separating the charges This is the basic formula to evaluate electrostatic force, and it gives the magnitude of the net electrostatic force. The direction of the net electrostatic force, measured from the x-axis, is given by θ = tan⁻¹(Fy/Fx). Here the net electrostatic force on a charge is calculated by adding the individual vector forces; note that the force exerted by q1 on q2 and the force exerted by q2 on q1 are equal in magnitude. Using the above formula, any kind of electrostatic force can be calculated, and when there are two or more charges, the formula is applied pairwise. For example, when there are three charges, the pairwise forces on q1 are F12 = (k q1 q2)/r12² and F13 = (k q1 q3)/r13². Net electrostatic force problem Let a system consist of two charges, q1 = 20 μC and q2 = −30 μC, separated by a distance of 10 m. Now calculate the net electrostatic force. F = (k q1 q2)/r² F = (9 × 10⁹ × 20 × 10⁻⁶ × 30 × 10⁻⁶) / (10 × 10) F = 54 × 10⁻³ N Since we want the magnitude of the force the two charges exert on each other, the negative sign on charge q2 is neglected; it tells us only that the force is attractive. Calculate the magnitude of the net electrostatic force on charge q1 due to the charges q2 and q3. The force exerted by q2 on q1 is F12; since the charges have opposite signs, they attract each other. F13 is the force exerted by q3 on q1. This is also an attractive force. Here the forces point in different directions, so we use vector components to calculate the net electrostatic force.
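The magnitude formula can be checked in a couple of lines before we work through the components. A sketch in plain Python, reproducing the 20 μC / −30 μC example:

```python
k = 9e9  # Coulomb's constant, N·m²/C²

def coulomb_force(q1, q2, r):
    """Magnitude of F = k·q1·q2/r², ignoring the sign (attraction vs repulsion)."""
    return k * abs(q1 * q2) / r ** 2

print(coulomb_force(20e-6, -30e-6, 10))   # 0.054 N, i.e. 54 × 10⁻³ N
```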
The magnitude of the net electrostatic force is found as follows. The force exerted on q1 due to q2: F12 = (9 × 10⁹ × 3 × 10⁻⁶ × 5 × 10⁻⁶) / (0.10 × 0.10) F12 = 13.5 N The force exerted on charge q1 due to q3: F13 = (9 × 10⁹ × 3 × 10⁻⁶ × 2 × 10⁻⁶) / (0.15 × 0.15) F13 = 2.4 N How to Calculate Net Electrostatic Force Electrostatics is the part of physics that deals with the study of phenomena where the charges are in static equilibrium, i.e. when the charges are at rest or moving extremely slowly. Charges reach this equilibrium quickly because the electric forces acting on them are strong. The basic phenomenon in static electricity is the transfer of charge from one body to another. The object that loses an electron becomes positively charged, and the one that gains an electron becomes negatively charged. Let's say we have two charges, one negative and the other positive. These two charges are denoted by q1 and q2, and r is the distance separating them. Here we calculate the Coulomb force; since the charges are unlike, they attract each other. The directions of the two forces are different, but the magnitudes are the same; the Coulomb force is a vector quantity, and the two forces form an action-reaction pair. The force of attraction acting on charge q1 due to q2 is written as F12, and similarly, the force of attraction acting on charge q2 due to q1 is written as F21. Now, considering the above explanation, we derive the net electrostatic force as F = (k q1 q2)/r², where k is the proportionality constant having the value 9 × 10⁹ N·m²/C². This is the basic formula for calculating electrostatic force; depending on the problem given, it is adapted to find the magnitude and the angle of the net electrostatic force.
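For the vector case, here is a sketch using components (NumPy assumed; the charge positions are invented so that the separations match the 0.10 m and 0.15 m of the worked example):

```python
import numpy as np

k = 9e9  # N·m²/C²

def force_on(q, q_other, pos, pos_other):
    """Coulomb force vector on charge q at pos, due to q_other at pos_other."""
    r_vec = np.asarray(pos, float) - np.asarray(pos_other, float)
    r = np.linalg.norm(r_vec)
    return k * q * q_other * r_vec / r ** 3  # points away from q_other if like charges

q1, q2, q3 = 3e-6, -5e-6, -2e-6                 # charges from the worked example
f12 = force_on(q1, q2, [0, 0], [0.10, 0])       # [13.5, 0.0]  N, toward q2
f13 = force_on(q1, q3, [0, 0], [0, 0.15])       # [0.0, 2.4]   N, toward q3

net = f12 + f13
print(net, round(np.linalg.norm(net), 2))       # [13.5 2.4], magnitude ~13.71 N
print(round(np.degrees(np.arctan2(net[1], net[0])), 1))  # ~10.1° from the x-axis
```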
The Basics of a Programming Language When a program is run, several things happen: - Your source code (usually text files) is read - The computer builds a data structure (usually some kind of tree) that describes what you are actually asking of it - The computer walks the nodes of this tree and actually performs the operations described In a traditional, compiled programming language, the first two steps usually happen on the developer's computer. The result is then written to disk somehow, and sent to the users, who then launch the program. The given result is then fed directly into the CPU, which actually executes the last step and does the work. So really, there are 4 corner-points to running a program: - Reading (i.e. taking in the source code) - Understanding (i.e. building the data structure that describes what is being asked) - Caching (i.e. saving the result of a step to disk for later use) - Executing (i.e. actually performing the operations) These steps all take different amounts of time, and usually we language makers try to make this time as short as possible, so computers will be fast. But not all time spent is equal. Where can time be spent? - compilation overhead: Time for people writing a program until they have a runnable program for their code - user overhead: Time for users of the program until the program starts running - execution time: Time a program actually takes to produce its result on the user's machine Also, a programming language can have memory requirements, which usually boil down to: - peak RAM usage: How much RAM does the program use, worst case - per instruction RAM usage: How much RAM is minimally needed to run one instruction - disk usage: How much disk space is used for the cache And again, these are relevant both to the developer writing the program, as well as to a user running the program (which may sometimes be the same person, but e.g. in the case of your text editor aren't). I know this is a bit dry; just go back to these definitions when I mention them later on and you'll be fine. How an interpreter works An interpreter is a program that reads your text file, line by line, and as soon as it has enough information to understand a line, it will do what that line says. In fact, it may not even wait that long. If a certain expression on that line is complete, it may just do that expression, then continue reading the rest of the line. Interpreters are basically the naïve approach to running a program; it is how we as humans do it each day. We have a list that describes a process, and we do each item. If we go back to our 4 corner-points above, a pure interpreter basically reads (1) a bit of code until it understands (2) it, then executes (4) it and doesn't do any caching (3). This means an interpreter is really quick at getting to the point where the first line of your program is executed, in absolute time. It only needs to read the first line and can immediately run it. The compilation overhead is basically 0. You just write the script to be interpreted and send it off. The user overhead is fairly small. It includes reading and understanding the code, but only one line. Then it immediately executes. Total execution time is a bit longer, though: before a line can be run, it has to be read and understood on the user's computer. So between each line is a bit of reading and understanding. Worse, since a pure interpreter performs no caching, if there are loops, each line of the loop needs to be read and understood every time it is executed. And how is memory? An interpreter needs little disk space: basically, only your source code. It also needs a little bit of RAM per instruction, to keep the current line in RAM. Peak RAM usage is fairly low.
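To make the read/understand/execute loop tangible, here is a toy pure interpreter in Python; the tiny three-command language (SET/ADD/PRINT) is invented purely for illustration:

```python
def interpret(source: str) -> None:
    """A pure interpreter: read a line, understand it, execute it, cache nothing."""
    env = {}
    for line in source.splitlines():          # read (1), one line at a time
        op, *args = line.split()              # understand (2)
        if op == "SET":                       # execute (4) -- and no caching (3):
            env[args[0]] = float(args[1])     # a loop would re-read every time
        elif op == "ADD":
            env[args[0]] = env[args[1]] + env[args[2]]
        elif op == "PRINT":
            print(env[args[0]])

interpret("SET x 2\nSET y 3\nADD z x y\nPRINT z")   # -> 5.0
```

Note that the first command runs before the interpreter has even looked at the rest of the program, which is exactly why the user overhead is so small.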
How a compiler works As mentioned above, the other extreme, a pure compiler, reads (1) the entire code once, understands (2) it, then saves that code to disk (3), and then that is sent to the user, who can just let the CPU execute (4) it. Performance-wise, that means the compilation overhead is fairly significant. The developer waits a while for the entire program to be read, understood and written. The user overhead, on the other hand, is quite low: the entire program needs to be loaded into RAM, but it is in the compact form the CPU understands. But then the user can immediately run any instruction in the program. How is memory usage? Well, the developer needs enough memory (and disk space) for the entire program to be translated. The user needs enough memory to load the entire program into RAM. But the user usually needs less disk space, because the program is in a smaller, binary form than the text form of the original source code. So I've been harping on about the distinction between the developer and the user, and overheads. Why? Isn't it clear-cut that compilation makes for faster programs, and interpreters are crap? It isn't. I've been talking about "pure" compilers and "pure" interpreters here, but nobody makes a "pure" anything. That's what the words mean, but usually you find a comfortable spot on the continuum between those two extremes, or you combine both approaches at different levels, to yield performance benefits for your users. The idea behind a scripting language is usually that it is written by the user to automate some things on their computer. Scripts are frequently modified to suit the user's needs, or expanded to account for newly-discovered requirements. Or they are one-offs that are written once, and run once. Yes, there are scripts that made it into production, but what I describe, again, is the ideal of a script, and what makes it different from a program. Like a film script, it orchestrates existing systems to collaborate. Scripting languages are awesome wherever things change often and fast turnaround is needed. So scripts are usually written and run by the same person. The distinction between time and disk space taken for the developer and the user doesn't matter here. How does the value proposition for a compiler look here? Bad. My script is on disk as a text file. Now the compiler generates an executable from it, which will take even more disk space. It will also take its sweet time doing so. Only once that is done, and the entire program has been translated, will it deign to run my program. Worse, if I made a mistake, I will have to recompile the entire script again after fixing the issue. The interpreter is suddenly looking much better: it doesn't need extra disk space just to run my script, and it starts running it right after it reads the first line. If I made a mistake and the program aborts with an error, it gets even better: the interpreter didn't even bother reading the rest of the script. If my script has several parts dealing with different things, it can even run parts and only spend time reading/understanding the parts I am actually using. The rest of my script is ignored until it is actually used. There's no faster way to do work than realizing you don't need to do that work. So you see, depending on your use case, interpreters are better.
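Python's built-in compile() makes the read/understand-versus-execute split visible in miniature; a minimal sketch of the general idea (this illustrates the principle, not any particular compiler):

```python
# Read and understand the source exactly once...
source = "total = sum(range(n))"
code = compile(source, "<snippet>", "exec")   # -> a cached, pre-understood code object

# ...then execute the cached form as many times as we like, with no re-reading.
for n in (10, 100):
    scope = {"n": n}
    exec(code, scope)
    print(scope["total"])   # 45, then 4950
```

The one-off cost of compile() is the compilation overhead; each exec() afterwards pays only the execution time.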
Compiled programs are mainly good for split use, like commercial application software, where one person (the software company's developer) can do some of the work ahead of time, and thus save it for everybody else. You've probably heard of Virtual Machines. They're very popular with people who make programs that need to run on multiple different computer systems, with different operating systems, different CPU architectures etc. They are needed because compiled software runs directly on the CPU. It is written in the language of that CPU. So if you want to run the same program on multiple CPUs, you need to provide multiple versions of your program translated into each CPU's language (and if you speak a foreign language, you will know that translation never just involves substituting one word for another; sometimes there is no word that means the exact same thing in another language, and you have to approximate with several words what that one word meant). And you need to devise a way so each user gets the right version for their platform. This works, but given that programs on web sites are usually not changed by users, wouldn't it be cool if we could do some of that translation work ahead of time, on the server, and be faster? That's the problem Virtual Machines address. A Virtual Machine is, essentially, a program that simulates the behaviour of a CPU. It is an interpreter. It takes instructions written in a made-up CPU's language, and reads, understands and executes them right away on the real CPU. It is also really, really fast. Not as fast as native machine code doing the same thing running on the same CPU, but still fast enough. It is also comparatively small, and easy to port to another platform. Usually a Virtual Machine does some more involved things than a CPU, too. Many Virtual Machines have built-in instructions that do things like create a window, draw into it, etc. These translate to hundreds if not thousands of instructions of the native CPU. Why? Because those are the things we need to do on every platform, and they need to be ported as well. So everything that must be rewritten for every CPU and operating system just gets wrapped up in that single program, the Virtual Machine. And you write a new, small Virtual Machine for each CPU/operating system combination you support. Everything else can now be written in the common language that all the Virtual Machines for each platform implement. "Write once, run anywhere", as the saying goes. And for this common language, we can write a compiler. Yes, you heard right: a Virtual Machine consists of both an interpreter and a compiler. The compiler does a lot of work beforehand, creating a small, fast binary representation of your program that can be sent to every user, no matter what platform they're running. It will take a slight speed hit compared to a native program, but it will make up for that because it is simpler to distribute than a real compiled binary, and smaller and faster than a script. Also, you're not sending your source code out to everyone… A Virtual Machine makes sense where you can assume that all your users already have the Virtual Machine on their computers, e.g. because enough other programs use the same VM that they've already installed it. If you write your own Virtual Machine, you've only pushed the problem down one level of abstraction: instead of having to distribute your program for each platform, you need to distribute the VM for each platform, and then your users also need to download the actual program.
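A toy stack-based Virtual Machine fits in a dozen lines; the "bytecode" instruction set here is invented purely for illustration of the idea, not modeled on any real VM:

```python
def run_vm(program):
    """Interpret instructions for a made-up CPU: a tiny stack machine."""
    stack = []
    for instr, arg in program:
        if instr == "PUSH":
            stack.append(arg)
        elif instr == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif instr == "PRINT":
            print(stack[-1])

# The same "bytecode" runs wherever this little VM runs.
run_vm([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])   # -> 5
```

Only run_vm() itself would ever need porting to a new platform; the bytecode stays the same everywhere.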
That doesn’t make sense unless you expect the average user to download several programs written against the same VM, or the VM rarely changes, while your program frequently does. Just-in-Time compilers, or JITs are another hybrid approach. They can be applied to any interpreter (even to our Virtual Machine above). What they basically do, is add a level of cacheing to an interpreter. Yes, a JIT-compiler is actually a component of an interpreter. Basically, what a JIT does is address the case of interpreted code being executed several times. If it notices a piece of code that is being run a second time, or encounters code that is expected to be run repeatedly (like a loop), it actually compiles that code into native machine code and keeps that native machine code cached in RAM. Which means that, the initial user overhead of loops becomes a bit larger, but you likely get the time spent compiling the entire loop body back due to the faster execution time later on. This is a very specific optimization you can make for an interpreter, but a surprisingly effective one, particularly for Virtual Machines, where the only reason you didn’t compile to native is that you need a common CPU architecture for easier distribution of your software to a varied selection of hardware. Many VMs compile all code you run (e.g. each function), keep it cached in RAM, and then purge the code that hasn’t been run in a while if the cache exceeds a certain size. The Wild Mixing and Matching of Modern Web Browsers There are many ways you can combine compilers and interpreters, and steps of the process that you can cache, and also with contextual knowledge of your application, you can choose different optimizations in different situations. Take scripts on web sites, for instance. They are scripts, because the original idea was that everyone would have their own web site and write the scripts themselves. Having a bunch of text files on a server and running the actual code in the client just seemed easiest and was how things were done back when the web was invented in the 90ies. They are scripts because that’s most accessible and it makes it easy to learn from other web sites when you make your own. They are also scripts because they were to be embedded in HTML text files. But most web sites these days are used by millions of users, none of which edit these scripts, so it actually makes more sense to optimize to reduce user overhead and execution time instead of quick turnaround for the developers. For example, the onLoad() script on a web page is run exactly once, when the page is loaded. So we just run that in a traditional interpreter. On the other hand, the onKeyUp() handler on a web page is called for each key press in a text field. Given text fields usually contain more than one character, we can assume that any onKeyUp() function gets called several times. So it makes sense to compile any such function and keep it around as long as the page is loaded. And many functions on a web page are only run when you actually use that feature on the page. So the browser will just do a quick cursory scan over the entire script (so it knows where each function’s text starts and ends and what its name is) and defer actually compiling and optimizing and running its code until you actually call it. And if you call it from e.g. onUnload(), it won’t even bother compiling/optimizing them, and will just interpret them. 
And of course there is the browser itself, a program that is developed by a company and used by users to run web pages: a classic compiled application. A lot of work was done on a developer's computer making it smaller and faster before it got to you, and you can just run its native code right away. So that's the difference between compilers and interpreters: two sides of a spectrum that can even be used together; two points in the four-corner diagram of "Read, Understand, Execute, Cache". Note: This post came out of my answer to a Stackoverflow question, in case you are looking for a shorter, slightly different answer to this question. The goal of this article is not to promote good usability practices or to explain how your operating system's program loader works. Therefore, I have simplified a few things from those domains and focused on explaining compilers and interpreters. For example, most operating systems do not load an entire program file into RAM to run it, but rather transparently load parts using "virtual memory". The general point holds true; the order of magnitude may just be lower. Similarly, cross-platform code can be very convenient, but it suffers from several issues, like looking and feeling the same on all platforms, even if your user only uses it on their platform, where they are used to a different look-and-feel. Or the additional abstractions you build on top of the operating-system-specific instructions may slow things down on platforms which have a more efficient abstraction. (To return to our foreign-language metaphor: your phrases may become longer if your translation needs several words to describe your language's one-word concept, making you take longer to say the same thing, even though a native speaker could re-order the whole sentence into something much shorter.)
What happens when a scientist conducts an experiment? When possible, scientists test their hypotheses using controlled experiments. A controlled experiment is a scientific test done under controlled conditions, meaning that just one (or a few) factors are changed at a time, while all others are kept constant. What do you do if your results do not match your hypothesis for an experiment? What Is the Next Step if an Experiment Fails to Confirm Your Hypothesis? - Complete the Write-Up of What Took Place. The write-up is part of the evaluation process of the experiment. - Make Slight Changes in the Process. - Consider Whether the Experiment Was Carried Out Correctly. - Alter the Experiment. - Revise the Hypothesis. What do scientists use to test a prediction when they cannot use an experiment? When the use of experiments to answer questions is impossible or unethical, scientists test predictions by examining correlations. What is the outcome when the scientific method is not followed? When the scientific method is not followed, results cannot be checked or falsified, and the conclusions drawn from them are unreliable. What are the 7 steps in scientific investigation? The scientific method: - Make an observation. - Ask a question. - Form a hypothesis, or testable explanation. - Make a prediction based on the hypothesis. - Test the prediction. - Iterate: use the results to make new hypotheses or predictions. What order is the scientific method? The basic steps of the scientific method are: 1) make an observation that describes a problem, 2) create a hypothesis, 3) test the hypothesis, and 4) draw conclusions and refine the hypothesis. What is the correct order of the steps in the scientific method quizlet? Terms in this set (6): - Step One: Ask a Question. Develop a question or problem that can be solved through experimentation. - Step Two: Form a Hypothesis. Predict a possible answer to the problem or question. - Step Three: Test the Hypothesis. - Step Four: Analyze the Results. - Step Five: Draw a Conclusion. - Step Six: Share the Results. What is the final step in the scientific method? The final step in the scientific method is the conclusion. The conclusion will either clearly support the hypothesis or it will not. If the results support the hypothesis, a conclusion can be written. What is a good scientific method question? A good scientific question is one that can have an answer and be tested. For example: "Why is that a star?" is not as good as "What are stars made of?" 2. A good scientific question can be tested by some experiment or measurement that you can do. What is the hardest question in science? 12 Tricky Science Questions: - Why is the sky blue? - Why does the moon appear in the daytime? - How much does the sky weigh? - How much does the Earth weigh? - How do airplanes stay in the air? - Why is water wet? - What makes a rainbow? - Why don't birds get electrocuted when they land on an electric wire? What type of questions should not be used in the scientific method? Questions that cannot be answered through scientific investigation are those that relate to personal preference, moral values, the supernatural, or unmeasurable phenomena. What 3 things make a question scientific? A good scientific question has certain characteristics. It should have a real answer, it should be testable (i.e. it can be tested by someone through an experiment or measurements), and it should lead to a hypothesis that is falsifiable (meaning it should generate a hypothesis that can be shown to fail). How do you form a hypothesis?
In order to form a hypothesis, you should take these steps: - Collect as many observations about a topic or problem as you can. - Evaluate these observations and look for possible causes of the problem. - Create a list of possible explanations that you might want to explore. What serves as evidence for a scientific claim? A scientific claim involves controlled experiments which are related to the claim, and on the basis of these experiments further conclusions can be drawn which support the claim. Thus, we can conclude that controlled experiments serve as evidence for a scientific claim. What makes a good hypothesis? A good hypothesis is stated in declarative form and not as a question. "Are swimmers stronger than runners?" is not declarative, but "Swimmers are stronger than runners" is. 2. A good hypothesis posits an expected relationship between variables and clearly states that relationship. What is a good hypothesis example? Here's an example of a hypothesis: if you increase the duration of light, (then) corn plants will grow more each day. The hypothesis establishes two variables, length of light exposure and rate of plant growth. An experiment could be designed to test whether the rate of growth depends on the duration of light. How do you know if a hypothesis is falsifiable? A hypothesis or model is called falsifiable if it is possible to conceive of an experimental observation that disproves the idea in question. That is, one of the possible outcomes of the designed experiment must be an answer that, if obtained, would disprove the hypothesis. What are the 5 components of experimental design? The five components of the scientific method are: observations, questions, hypothesis, methods and results. Following the scientific method procedure not only ensures that the experiment can be repeated by other researchers, but also that the results garnered can be accepted. What comes first, prediction or hypothesis? OBSERVATION is first, so that you know how you want to go about your research. HYPOTHESIS is the answer you think you'll find. PREDICTION is your specific belief about the scientific idea: if my hypothesis is true, then I predict we will discover this. CONCLUSION is the answer that the experiment gives. What is it called when a hypothesis is correct? It is called "an accepted hypothesis." A hypothesis can be valid or invalid, accepted or rejected. A hypothesis is subject to testing (called "proof"), and if the hypothesis is valid (highly probable) then the proofs it is subjected to will not disprove it. What are the 3 types of hypothesis? Types of Research Hypotheses: - Alternative Hypothesis. The alternative hypothesis states that there is a relationship between the two variables being studied (one variable has an effect on the other). - Null Hypothesis. - Nondirectional Hypothesis. - Directional Hypothesis.
A Glossary of Grammatical Terminology, Definitions and Examples - Sounds and Literary Effects in Language, Speaking, Writing, Poetry. This glossary of linguistic, literary and grammatical terms aims to be helpful for writers, speakers, teachers and communicators of all sorts, in addition to students and teachers of the English language seeking: - to understand the different effects of written and spoken language, and what they are called, from a technical or study standpoint, - to develop variety, sensitivity, style and effectiveness in your own use of language, written and spoken, for all sorts of communications, whatever your purposes, and - to improve understanding and interpretation of the meaning of words without having to look them up in a dictionary. There are very many different effects of written and spoken language. Most people know what an acronym is, or a palindrome. But what is a glottal stop? What is a tautology, or a gerund? What are alliteration and onomatopoeia? What are the meanings of prefixes, such as hypo/hyper and meta, and suffixes such as -ology and -logue? Words alone convey quite basic meaning. Far more feeling and mood is conveyed in the way that words are put together and pronounced, whether for inspiration, motivation, amusement, leadership, persuasion, justification, clarification or any other purpose. The way we use language, in addition to the language we use, is crucial for effective communications and understanding. The way others use language gives us major insights as to motives, personalities, needs, etc. The study and awareness of linguistics helps us to know ourselves and others: why we speak and write in different ways; how language develops; and how so many words and ways of speaking from different languages share the same roots and origins. Also, our technical appreciation of language is a big help to understanding language more widely, and particularly word meanings that we might not have encountered before. Knowing these and many other aspects of linguistics can dramatically assist our overall understanding of language, including new words, even foreign words, which we might never have seen before. Some of these language terms and effects are vital for good communications. Others are not essential, but certainly help to make language and communications more interesting, textured and alive, and when language does this, it captivates, entertains and moves audiences more, which is definitely important for professional communicators. Note that many of these words have meanings outside of language and grammar, and those alternative non-linguistic definitions are generally not included in this glossary. Listing of terms for grammatical, literary, language, vocal and written effects: a - the word 'a' is grammatically/technically 'the indefinite article' (compared with the word 'the', which is 'the definite article'), for example 'A bird fell out of the sky', or 'Muddy children need a bath'. This use of the word a is derived from old English 'an', which is a version of 'one'. A - usually capitalized, 'A' is a common substitute word or 'placeholder name' used where the speaker/writer finds it easier not to use the actual word/words, for example and especially in phrases such as 'My car simply gets me from A to B', or 'Tit-for-tat is when person A hits person B, and so person B hits person A in return', or 'Woman A has been married for 5 years; woman B has been...'
a- - the letter 'a' is a prefix, with various meanings, seen in different stages of word development from various languages, notably including the meanings: 'to', 'towards', 'on', 'at', 'of', or to express intensity, or being in a state of.., etc., for example afoot, awake, accursed, abreast, ajar, announce, etc. Not all words which begin with 'a' are using the 'a' prefix in this way.

abbreviation - a shortened word or phrase. This can be done by various methods, notably:
- using the initial letter(s) of a multi-word name or phrase - for example, BBC for British Broadcasting Corporation, or SA for South Africa, or ATM for automated teller machine, TV for television, CD for compact disc; or LOL for laughing out loud, or SWALK for sealed with a loving kiss (the latter two also technically being acronyms).
- omitting some or all of the vowels of the word or words - for example, Rd for Road, or St for Street or Saint, or Dr instead of Doctor, or Mr instead of Mister, or Sgt instead of Sergeant.
- omitting and/or replacing letters which best enable those remaining to convey the full word, often also for euphonic reasons (i.e., the sound is pleasing to speak/hear) or otherwise clever phonetically (how it sounds), or clever visually - for example: bike for bicycle, or fridge for refrigerator, or pram for perambulator (perambulate means walk, formally or amusingly), or BBQ for barbecue, or SFX for sound effects - and in more recent years, especially in electronic messaging, using mixtures of letters and numbers, such as L8 for late, GR8 for great, 2 meaning to/too, B4 for before, etc.
- omitting the beginning of a word or words - for example phone for telephone.
- omitting a word-ending or phrase-ending - for example doc for doctor, amp for amplifier or ampere, artic for articulated lorry, or op for operation, or zoo for zoological garden.
- combining parts of two words to form a new word, usually being a blended meaning as well as a blended word, also called a portmanteau word - for example brunch for breakfast and lunch, and smog for smoke and fog. Portmanteau words are not commonly regarded as abbreviations, but they certainly are.

Many abbreviations, after widespread and popular adoption, become listed in dictionaries as new words in their own right. The full original versions of many such abbreviations become forgotten, so that they are not generally regarded as abbreviations (for example the words zoo, taxi, phone).

acronym - an existing or new word that is spelt from the initial letters, in correct order, of the words of a phrase or word-series, for example NIMBY (Not In My Back-Yard) and SCUBA (Self-Contained Underwater Breathing Apparatus). Technically an acronym should be a real word or a new 'word' that is capable of pronunciation, otherwise it's merely an abbreviation. By definition, all acronyms are also abbreviations. Also technically an acronym should be formed from the initial letter of all words in the phrase or word-series. Commonly the rules are bent when acronyms are formed using the first and second letters (or more) from component words, and/or when words such as 'to' and 'the' and 'of' in the phrase or word-series do not contribute to the acronym, for example LASER (Light Amplification by the Stimulated Emission of Radiation).
An acronym that is devised in reverse (i.e., its full meaning/interpretation refers or alludes, directly or indirectly, to the abbreviated form) is called a bacronym, or backronym, or reverse acronym, for example CRAP (Chronologically Ascending Random Pile), and DIARRHOEA (Dash In A Real Rush, Hurry Or Else Accident). See lots of useful and amusing acronyms and bacronyms. (A simple programmatic sketch of acronym-forming appears below, after the allegory entry.)

acrostic - a puzzle or construction or cryptic message in which usually the first or last letters of lines of text, or possibly other individual letters from each line, spell something vertically, or less commonly diagonally, downwards or upwards. From French acrostiche, and Greek akrostikhis, and the root Greek words akro, meaning end, and stikhos, meaning a row or line of verse. A notable and entertaining example of the use of acrostics in cryptic messaging is the case of British journalist Stephen Pollard, who reportedly registered his feelings about Richard Desmond's 2001 acquisition of his employer, the Daily Express, by spelling the words 'F*** you Desmond' acrostically, using the first letter of the sentences in his final lead article for the paper.

accent - accent refers to a distinctive way of pronouncing words, language or letter-sounds, typically arising from regional and national language differences or vernacular. For example 'an Australian accent'. Accent also refers to types of diacritical marks inserted above certain letters in certain words to alter letter sound, for example in the word café. Accent may refer more generally to the mood or tone of speech or writing, or technically to emphasis in poetry, and also to musical emphasis, from where the word derives. The origins of the word accent are from Latin accentus, tone/signal/intensity, from ad cantus, 'to' and 'song'.

active - in grammar, applying to a verb's diathesis/voice, active (contrasting with its opposite 'passive') generally means that the subject is performing the action (to an object) - for example 'The chef (subject) cooked (verb) dinner (object)' (active voice/diathesis), rather than the passive voice/diathesis: 'Dinner (object) was cooked (verb) by the chef (subject)'.

adjective - a 'describing word' for a noun - for example big, small, red, yellow, fast, slow, peaceful, angry, high, low, first, last, dangerous, heart-warming, tender, brave, silly, smelly, sticky, universal... There are tens of thousands of others, perhaps hundreds of thousands. A 'sister' term is adverb.

adverb - a word which describes a verb - for example quickly, slowly, peacefully, dangerously, heart-warmingly, bravely, stickily, universally.

-age - a common suffix added to word stems to create a noun, especially referring to the result of an action/verb, typically a collective or plural noun that expresses a potential to be measurable, for example: wreckage, spillage, wastage, leverage, haulage, blockage, etc. Coin is extended to coinage, to produce a collective/plural noun from a singular noun. Out is extended to outage, to produce a noun from a preposition.

allegory - a story or poem or other creative work which carries and conveys a hidden or underlying meaning, typically of a moral or philosophical nature. Originally from Greek allos, other, and agoria, speaking. Allegorical refers to a work of this sort.
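Relating to the acronym entry above: forming a candidate acronym is essentially a mechanical rule - take the initial letters, optionally skipping connector words. A minimal Python sketch follows; the stop-word list is this sketch's own assumption, and real acronyms often bend the rules, as the entry notes.

```python
# Minimal sketch: forming a candidate acronym from a phrase's initial letters.
# The stop-word list is an assumption; real acronyms often bend these rules.
import re

STOP_WORDS = {"a", "an", "and", "by", "in", "of", "the", "to"}

def make_acronym(phrase: str) -> str:
    # Split on anything that is not a letter, so hyphenated words
    # ("Self-Contained") contribute one letter per part.
    words = [w for w in re.split(r"[^A-Za-z]+", phrase) if w]
    words = [w for w in words if w.lower() not in STOP_WORDS]
    return "".join(w[0].upper() for w in words)

print(make_acronym("Light Amplification by the Stimulated Emission of Radiation"))  # LASER
print(make_acronym("Self-Contained Underwater Breathing Apparatus"))                # SCUBA
```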
alliteration - where two or more words that are adjacent or close together begin with or feature strongly the same letters or sounds, for example 'double-trouble bubbling under', or 'big black beanbag', or 'Zambia zoo's amazing zig-zagging zebras'. Alliteration is commonly used in poetry and other forms of writing which seek to entertain or please people. This is because alliteration itself is a pleasing, almost musical, way of constructing words, both to speak and to hear. Shakespeare used alliteration a great deal in his plays and other works, as have most other great writers throughout history. Where alliteration involves repetition of syllables and prolonged sounds, rather than merely single consonants or vowel sounds, it may also be defined as reduplication.

allonym - a pseudonym which is actually a real name - specifically applying to 'ghostwriting' (where a professional writer writes a book or a newspaper article, etc., by agreement from the person whose name is being used to 'front' the piece). An allonym also technically refers to the illicit use of another person's name in creating work which purports to be written by the named author, rather like a forger in art.

allophone - in grammar an allophone is a variant of a single sound (a phoneme) which is pronounced slightly differently to another variant. Examples of allophones are the different 'p' sounds in 'spin' and 'pin', and the different 't' sounds in 'table' and 'stab'. Commonly the differences between allophones are so slight that most people are unaware of them and would consider the sounds to be identical. The word derives from Greek 'allos' meaning other.

alphagram - an anagram (although not necessarily a meaningful or even pronounceable word, as usually defined by the word anagram) in which the letters of the new word or phrase are in alphabetical order, such as the anagram 'a belt' for the source word 'table'.

alphastratocus - the @ symbol - more commonly called the asperand.

ambigram - a relatively recent term for a 'wordplay' concept which dates back hundreds of years, an ambigram is a word or short phrase which can be read in two different ways (from two different perspectives or viewpoints) to produce two different words/phrases, or different forms of the same word/phrase. Commonly the second perspective is upside-down, and the different words/phrases are related, although neither of these features is an essential requirement of an ambigram. In modern times the ambigram has been popularized by the tattoo industry, and certain online/computer technologies which generate ambigram designs. Other less popular/obvious forms of ambigrams entail several different pairings of views, for example achieved by: background/negative/'figure-ground' (where the gaps between the letters are in the shape of other letters and produce a word/phrase); rotational (typically 180 degrees); 3D (three-dimensional effect); mirror images (sideways); language translation; circular letter chain (within which two different words have different starting and ending points); 'fractal tiled' (whereby zooming in or out of a pixelated/tiled word-form produces new renderings); the 'natural' effect (requiring no great distortion - such as the lower-case lettered words 'suns', 'pod', 'bog', which read the same upside-down with little or no adaptation); and other variations. Ambigrams may comprise upper or lower case letters or a mixture.
Some word combinations naturally produce more pleasing and legible ambigrams than others, requiring very little distortion of the letters. An early example of a 'natural' ambigram is the word 'chump', which in lower-case script lettering reads easily as the same word when viewed upside-down, and this example seems first to have been publicized in 1908. Interestingly and coincidentally the word 'ambigram' can be made very easily into an 'upside-down' type of ambigram.

ampersand - the 'and sign' (&). The word ampersand is a distorted derivation from 'and per se'. The symbol is a combination of the letters E and T, being the Latin word 'et', meaning 'and'. More detail about the ampersand origins.

anagram - a word or phrase created by rearranging the letters of a word or name or phrase, such as pea for ape, or teats for state. An anagram is more impressive when the new word/phrase cleverly or humorously relates to the source word/phrase, for example 'twelve plus one' is an anagram of 'eleven plus two', the often-quoted 'dirty room' is an anagram of 'dormitory', and 'here come dots' is an anagram of 'the morse code'. (A simple letter-sorting test for anagrams is sketched below, after the anthropomorphism entry.)

analepsis - more commonly called a 'flashback' or 'retrospective' - analepsis is narrative or action of a story before the 'present' time (in the work), usually for dramatic and explanatory purpose. The opposite is prolepsis. The term is broadly based on the Greek medicinal term analeptikos, meaning 'restorative'.

analogy/analogous/analogue - refers to a comparison between two similar things, in a way as to clarify their differences, similarities, and their individual natures. As a communications concept, especially in learning/teaching, the use of analogies (which are similar to and encompass metaphors and similes, extending to stories and fables, etc.) is extremely powerful. The use of analogies is also beneficial for memory and information retention. The word analogue refers to a corresponding thing, and is used traditionally in describing technologies which replicate/record/measure things using mechanical means, as distinct from more modern electronic/digital methods, for example in describing types of watches, audio-recorders and players, etc. The words are from Greek 'analogos' - ana, 'according to', and logos, 'ratio'.

ananym - a type of anagrammatic word created by reversing the spelling of another word - for example Trebor, the confectionery company. Sadly it is difficult to find any other examples that are not scientifically or otherwise so obscure as to be utterly unremarkable. You will perhaps be able to invent better ones yourself.

anaphor - a word or phrase that refers to and replaces another word, or series of words, used earlier in a passage or sentence - for example: "I looked in the old cupboard in the bedroom at the top of the stairs but it was empty..." - here 'it' is the anaphor for 'the old cupboard in the bedroom at the top of the stairs'. Another example is "I will eat, go for a walk, then sit in the garden; do you want to do this too?" - here 'this' is an anaphor for 'eat, go for a walk, then sit in the garden'. A simpler example is "John woke; he rubbed his eyes..." - here 'he' is an anaphor for John. An anaphor is generally used to save time and avoid unwanted repetition. See cataphor, where the replacement word precedes a later word.

anaphora - this has two (confusingly somewhat opposite) meanings, which probably stems from its Greek origin, meaning repetition.
Firstly, simply, anaphora is the action of using an anaphor (a replacement word such as it, he, she, etc.) in referring to a previous word or phrase, to avoid repetition and to save time. Secondly, and rather differently, anaphora refers to the intentional use of repetition, specifically a writing/speaking technique in rhetoric, where repetition of a word or phrase is used for impact at the beginning of successive sentences or passages. For example: "People need clothes. People need shelter. People need food..." Here the repetition of 'people need' produces a dramatic effect. A further more famous example is Winston Churchill's WWII "We shall fight on the beaches" speech: "We shall go on to the end. We shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our island, whatever the cost may be. We shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender..." Here the dramatic repetition of 'we shall' and 'we shall fight' produces a remarkably inspiring and motivational effect. The word epistrophe refers to this effect when used at the end of sentences or clauses.

anonym - an anonymous person or publication of some sort, potentially extending to an anonymous internet/website posting.

antanaclasis - a sentence or statement which contains two identical words/phrases, whereby the repeated word or phrase means something quite different from the first use, for example: 'Time flies like an arrow; fruit flies like a banana' (here the words 'flies like...' mean firstly 'passes similar to...' and secondly 'flies [the insects] enjoy eating...'). Another often-quoted example of antanaclasis is the motivational threat attributed to American football coach Vince Lombardi: 'If you aren't fired with enthusiasm, you will be fired, with enthusiasm' (in which 'fired' firstly means 'motivated', and secondly means 'sacked', or dropped from the team). Antanaclasis is a form of pun, and is commonly used to illustrate the confusing and ambiguous nature of language/communications, especially in studying psycholinguistics (how the mind works in processing language).

anthropomorphism/anthropomorphic - the attribution of human form or characteristics to non-human things, such as inanimate objects, or gods, or concepts such as the weather or economy, or a town or nation, or anything else that for dramatic/literary/humorous effect might be described or represented as having a human quality of some sort. For example the following are all very simple anthropomorphic expressions, or anthropomorphisms: a 'Happy Meal'; a 'friendly bar'; a 'weepy movie'; a 'computer that won't behave'; a 'dumb waiter'; a 'drink or chocolate bar that is my best friend'; 'music or art that speaks to me'; a sun image with a smiling face; a wind image of a person's face blowing hard; millions of cartoons and animations, such as cars with faces, or animals with human expressions and personalities; countless logos and brands which contain an image or icon with some sort of human quality or movement (a 'kicking K' for example, and anything with a smile or even wearing a hat); and all those digital media icons with faces. Anthropomorphism is everywhere, and plays a crucial part in human communications. (There, that's another one... the suggestion that anthropomorphism 'plays a part'.)
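Returning to the anagram and alphagram entries above: since an alphagram is simply a word's letters in alphabetical order, two words or phrases are anagrams of one another exactly when their alphagrams match. A minimal Python sketch follows; ignoring case, spaces and punctuation is a normalization choice of this sketch, not part of the strict definitions.

```python
# Minimal sketch: an alphagram is a word's letters sorted alphabetically;
# two phrases are anagrams exactly when their alphagrams are equal.
def alphagram(text: str) -> str:
    # Ignore case and anything that is not a letter (an assumption;
    # stricter definitions may treat spaces and punctuation differently).
    return "".join(sorted(c for c in text.lower() if c.isalpha()))

def is_anagram(a: str, b: str) -> bool:
    return alphagram(a) == alphagram(b)

print(alphagram("table"))                                # 'abelt' (cf. 'a belt')
print(is_anagram("dormitory", "dirty room"))             # True
print(is_anagram("eleven plus two", "twelve plus one"))  # True
```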
antonym - a word which is the opposite in meaning in relation to another, for example fast and slow, high and low, husband and wife, dead and alive, etc. (from Greek anti, against, and onuma, a name). Interestingly the antonym of the word antonym is synonym (a word which means the same as or equates to another).

aphorism - a statement of very few words - for example a maxim or short memorable impactful quote - which expresses a point strongly, for example, 'No pain, no gain'.

apocrypha/apocryphal - writings which are not authentic (for example falsely cited quotations or extracts, etc.) but which may be presented or considered authentic - especially applying to claimed biblical works or ancient Chinese writings, and increasingly a term which applies generally to any old writings that lack a claimed or asserted authenticity. The word is Greek, originally meaning 'hidden writings', from apokruptein, 'hide away'.

apophony - this is a very broad term, referring simply to the alternation of sounds in a word stem which produces different tenses, meanings or versions of the word, for example sing, sung, sang. Apophony is also called ablaut, alternation, gradation, internal inflection, internal modification, replacive morphology, stem alternation, stem modification, stem mutation, among other variants of these.

apophasis - a broad term for various types of communications and language techniques which infer or propose something by emphasizing what it is not, or by ironically rejecting or denying or introducing a notion, and then withdrawing or distancing oneself (the speaker) from the 'fact'. Examples are paralipsis and syllogism, and the game 'twenty questions', and the general concepts of 'by exception' and the 'process of elimination'.

apophthegm/apothegm - (helpfully the 'ph' and 'g' are silent - the word is pronounced 'appathem', emphasis on the first syllable - apothegm is the US-English spelling) - an apophthegm is a concise and very expressive saying, for example 'You get out what you put in', equating to an aphorism, originally from Greek apophthengesthai, meaning 'speak out'.

apostrophe - a punctuation mark (shown as ') which denotes ownership (as in John's books), or omitted letters (as in: you don't know, or rock'n'roll), or a quoted or significantly extracted/highlighted item (as in: the communication was worded very carefully because of 'political correctness').

apposite/apposition - where two similar references appear together, typically without a conjunction, for example, 'my son the doctor'.

aptronym - a person's name that matches his/her occupation or character, most obviously children's book characters such as the Mr Men series (Mr Messy, Mr Bump, etc.), and extending to amusing fictitious examples such as roofer Dwayne Pipe, or parks supervisor Theresa Green, or yoga teacher Ben Dover, or hair-stylist Dan Druff. From apt, meaning appropriate, and Latin aptus, meaning fitted. Apparently the term was first suggested by Franklin P Adams. Also called an aptonym or charactonym.

argot - a word referring to a secret coded language of some sort, notably but not exclusively used by criminals, for example backslang or cockney rhyming slang; argot ('argo') is originally a French/Spanish Catalan word for slang. Argot may also refer to jargon or terminology that is specific to a particular group or discipline, for example military folk, hobbyists, scientists, etc.
articulation - articulation refers to the formation of clear sounds in speech, including vowels and more especially consonants. Technically this is analysed/achieved via the control of the airflow (of breathing while speaking) through, and by adjustment of, the various vocal organs and mouthparts, each of which produces a remarkably extensive range of possible sounds, which increases further when considering different cultures/languages around the world. Also technically, articulation - in referring to the use of airflow and vocal mouth-parts, and encompassing phonation - is one of the most important and fundamental ways by which the development and analysis of language are enabled. The word articulation is ultimately derived from Latin articulus, 'small connecting part'. See importantly 'places of articulation'.

ASCII - (pronounced 'askee') stands for the American Standard Code for Information Interchange, established in the 1960s. ASCII is a widely used and prevalent system for coding letters and other characters for use on electronic text equipment, notably computers and the internet.

asperand - the @ sign - also called alphastratocus - now widely used in computing, notably within email addresses, where it stands simply for 'at'. Originally the 'at' sign was an accounting term meaning 'at the rate of', for example: 10 widgets @ £3 each = £30 total.

asterisk - the star symbol (*) commonly used to signify that a supplementary note follows (also signified by an asterisk), or quite separately to substitute letters in offensive words in published text.

autoantonym/auto-antonym/autantonym - one of two different words that have the same spelling (a homograph) but opposite meanings, for example, fast (quick moving, or firmly fixed). The term is from Greek auto, meaning self, and antonym, in turn from anti, meaning against. Also called a contranym, contronym, antagonym, antilogy, enantiodrome, self-antonym, addad, didd, and Janus word. This peculiar phenomenon, called 'enantionymy' and 'antilogy', attracts a high level of interest among linguists and lovers of language and wordplay trivia. Here are some examples: Cap (limit, stop, and add to, increase); Outstanding (satisfactory, standard exceeded, and unsatisfactory, standard not met); Oversight (a check, monitor, and a neglect, omission); Weather (endure, stand the test of time, and erode, wear down or denude); Clip (join two or more things together, as with a paper-clip, and divide something into two or more pieces, as in clip an article from the paper or clip someone's hair); Dust (remove a layer of powdery substance, and apply a layer of powdery substance, as in dusting crops or dusting for finger-prints); Trim (add to or embellish, as in trim the Christmas tree, and cut away something, as in trim hair or a hedge); Cleave (split apart or break, and stick or adhere); Ravish (to violently abuse, and to delight); Sanction (a permission, and a preventative penalty); Sanguine (cheerful, and bloodthirsty); Bolt (fixed, secure in place, and move fast, run away); Garnish (add to, embellish or decorate, and remove from - as in legally serving notice to seize money or assets); Bound (stay or fixed, and move, as in leap or travelling); Left (gone, and remaining); Mad (angry about, and attracted to); Livid (angry, and pallid, lacking colour and spirit); Wind-up (start something, like a clock or an argument, and finish something, like proceedings or a talk); Blow-up (inflate, create, e.g., a balloon, and destroy with explosives).
Further suggestions always welcome.

autonym - a word that describes itself (also called self-referential); for example noun is a noun, polysyllabic is polysyllabic, abbrv. is an abbreviation, and word is a word. Separately, autonym refers to a person's real name, the opposite of a pseudonym. And separately again, an autonym may be a name by which a social group or race of people refers to itself. From Greek auto, self.

axiom - a statement or proposition considered established, true, accepted, or a fact that is 'taken for granted'. For example: 'We need air to breathe,' or 'Many people find comfort in religion.' Seen critically, some axiomatic statements can be regarded as stating the obvious. Certain tautologies which seek to persuade people of a supposedly established viewpoint are commonly presented as being axiomatic, when in fact the basic assumption within the tautology is not actually an axiom, more a matter of opinion. Many cliches are offered as axioms, when actually they are often subjective, and opposing 'accepted' cliches exist. The word axiom derives from Greek 'axios', worthy.

backslang - an informal 'coded' language made of reversed words, or with reversed elements within words, used originally by groups of people seeking to talk openly yet secretively among other people who did not belong to the group, for example historically by market traders within hearing of customers, or by gangsters. Backslang has been at various times popular among teenagers, and exists as a 'reverse' coded secret slang language in many non-English-speaking cultures. Some backslang expressions enter mainstream language and dictionaries, such as the word yob, a disparaging term for a boy.

bacronym/backronym - a 'reverse acronym', i.e., an acronymic phrase or word-series which is constructed from its abbreviated form, rather than from its full form (as is the case with a conventional acronym). The abbreviated form of a bacronym is usually a recognizable word or name, whose full 'meaning' is constructed from words whose sequence and initial letters match the abbreviation, for example YAHOO = Yet Another Hierarchical Officious Oracle, or IBM = I Blame Microsoft. The full form is commonly a humorous or clever or ironic reference to the word or name spelled by the abbreviation. The word bacronym/backronym is a combination (portmanteau) word made from back or backward and acronym. See the acronyms and bacronyms listing for lots of examples.

bathos - in language, especially poetic and dramatic, a jarring and usually funny mood-change or anti-climax caused by unexpectedly introducing a crude/rough/basic notion immediately after a (usually much longer) sublime/inspiring/heady/exalted/or otherwise uplifting passage of words. The mood-shift is one of 'down to earth with a bump', as if to give the reader/audience suddenly a surprising sense of ordinariness, or ridiculous contrast, after first establishing an atmosphere of higher, grander thoughts and images. For example, "...the new vicar was making a deeply moving impression on the congregation, with a sermon of profound meaning, soaring inspiration, and heartfelt compassion. He paused dramatically, before delivering his final uplifting conclusion, and, re-tasting last night's vindaloo and half-bottle of brandy, was sick on a choirboy..."

bilabial consonant - a consonant articulated with both lips. There are hundreds of technical variations of pronunciation; this is one example of a group of them.
See places of articulation to understand where/how vocal word/letter sounds are made. See also the International Phonetic Alphabet and the related IPA chart (pdf) for diagrammatic explanation and detail of what these sounds are called, and the symbols used to denote them. It's fascinating. (The IPA chart is published here under the following terms of reproduction permission: IPA Chart, https://www.langsci.ucl.ac.uk/ipa/ipachart.html, available under a Creative Commons Attribution-Sharealike 3.0 Unported License. Copyright © 2005 International Phonetic Association.)

bullet point/bullet-points/bullets - an increasingly popular and very effective way of presenting information, by which a series of (usually) brief sentences, each dealing with a single separate issue, are each prefaced by a large dot or other symbol (sometimes a bullet or arrow, or asterisk, or some other icon, to aid clarity of presentation and increase emphasis). The 'bullets' (the actual dots or marks) act like exclamation marks, but at the beginning rather than the end of the sentences. Some folk debate whether bullet points should follow grammatical rules for sentences or not, i.e., begin with a capital letter, end with a full stop, etc., although in most usage bullet points do not, and actually for good effect need not, and so they are unlikely to conform more in the future. Professional writers and presenters tend to support the view that there is an optimum number of bullet points when presenting information that is designed to persuade people and be retained, and this ranges between 3 and 7 points, suggesting that 5 points is a good safe optimum. Obviously where bullet points are used in different situations, such as detailed listings and extensive summaries, the notion of an optimum persuasive number no longer applies, and in these circumstances numbered points are usually more beneficial and effective. Whatever the case, for hard-hitting brief presentations of information/arguments, bullet points are often an unbeatable format.

cacophony/cacophonous - in linguistics this refers to unpleasant-sounding speech, words, or ugly discordant vocalizing. It is the opposite of euphony, and like euphony, cacophony is a significantly influential concept in the evolution of language, according to the principle that human beings throughout time have generally preferred to use and hear pleasing vocal sounds, rather than unpleasant ones. Euphonic words and sounds tend to flow more easily from the tongue and mouth than cacophonous utterings, and so this affects the way words and language evolve. The word is from Greek kakos, bad, and phone, sound. See euphony.

cadence - in linguistics cadence refers to the fall in pitch of vocalized sounds at the end of phrases and sentences, typically indicating an ending or a significant pause. It's from Latin cadere, to fall. More generally cadence may refer to modulation or inflection in the voice or speech delivery.

CamelCase - a style of text layout, popularized in the computer/internet age, which uses no spaces, instead relying on capital letters to show word beginnings. The term 'camel' alludes to humpy word-shapes. (A brief sketch follows below, after the cant entry.)

capitonym - a word which changes its meaning and pronunciation when capitalized; e.g. polish and Polish, august and August, concord and Concord - from capital (letter).

cant - a cant is a secret or coded language used by a group for secrecy; it equates to an argot.
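A minimal Python sketch of the CamelCase convention described above, joining words with capitals instead of spaces; the 'lowerCamelCase' option is this sketch's own addition, reflecting a common programming variant.

```python
# Minimal sketch: CamelCase marks word boundaries with capitals, not spaces.
def to_camel_case(phrase: str, upper_first: bool = True) -> str:
    words = phrase.split()
    out = "".join(w.capitalize() for w in words)
    if not upper_first and out:
        out = out[0].lower() + out[1:]  # 'lowerCamelCase' variant
    return out

print(to_camel_case("camel case"))                # CamelCase
print(to_camel_case("humpy word shapes", False))  # humpyWordShapes
```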
cataphor - a word or phrase that refers to and replaces another word, or series of words, used later in a passage or sentence - for example: "It was empty; the old cupboard was bare..." - here 'it' is the cataphor for 'the old cupboard'. Another example is "When it had to compete against social networking, TV became less dominant..." - here 'it' is the cataphor for TV. See anaphor.

cataphora - the action of using a cataphor in writing or speech to avoid repetition, or for dramatic effect, i.e., the use of a replacement word in a passage instead of its subsequent equivalent. From Greek kata, down, but based on the same pattern as anaphora.

clause - technically in grammar a clause is a series of words which stands alone as a phrase which makes sense and conveys a meaning but which is shorter than a sentence. More loosely a clause is interpreted to mean a sentence or statement, especially in formal documents.

cliche/cliché - a written or spoken statement commonly and widely used by people in conversation, other speech, and written communications, generally regarded as lacking original thought in application, although ironic or humorous use of cliches may be quite clever use of language. The use of cliches in high-quality original professional written/printed/online communications, materials, presentations, books, media, and artistic works is generally considered to be rather poor practice. This is because cliches by their nature are unoriginal and uninspiring, and worse may be boring, tedious and give the impression of lazy, thoughtless creative work. There are thousands of cliches, and they appear commonly in day-to-day speech, emailing, texting, etc., and in all sorts of produced media such as newspapers, radio, TV, online, etc. Virtually everybody uses many cliches every day. The word is from French clicher, 'to stereotype'. Examples of cliches are sayings such as: 'That's life,' 'Easy come easy go,' 'Fit for a King,' 'All in a day's work,' 'All's fair in love and war,' and 'Many a true word is spoken in jest'. Many similes have become very common cliches, for example: 'Quiet as a mouse,' 'Selling like hot cakes,' 'Went down like a lead balloon,' 'Dead as a dodo,' 'Fought like a lion,' 'Black as night,' and 'Quick as a flash.' Many metaphors have become popular cliches, for example: 'Pigs might fly,' 'Beyond the pale,' 'On cloud nine,' 'Gone for a Burton,' and 'The full Monty'. See lots more examples of cliches and their origins. A cliche is often alternatively and more loosely called an expression or a figure of speech.

cockney - cockney refers to the dialect of traditional east-central London people ('eastenders', also called cockneys). Examples of cockney speech are heard widely in film and TV featuring London stereotypes of 'working class' people, for instance in the BBC soap EastEnders, films about Jack the Ripper, London gangster movies, 'The Sweeney', and other entertainment of similar genre. The cockney dialect features lots of 'dropped' consonant letters (commonly t and h, replaced by glottal stops, due to the 'lazy' or 'efficient' speech style), for example words such as hunt, house, heat, cat and headache are pronounced 'un', 'ouse', 'ea', 'ca' and 'edday', with glottal stops replacing the dropped letters. Also, the 'th' sound is often replaced by an 'f' or 'v' sound, for example in 'barf' (bath), 'muvva' (mother), and 'fing' (thing). The term 'ain't' almost always replaces 'isn't'.
cockney rhyming slang - an old English slang 'coded' language, by which the replacement word/expression is produced via a (usually) two-word term, the second word of which rhymes with the word to be replaced. Commonly only the first word of the replacement expression is used, for example, the word 'talk' is replaced by 'rabbit', from 'rabbit and pork', which rhymes with 'talk'. Other examples of cockney rhyming slang may retain the full rhyming expression, for example 'gin' is referred to as 'mother's ruin'. See lots more information and examples in the cockney rhyming slang listing. (A simple lookup-table sketch of the substitution mechanics follows below, after the contraction entry.) Australian people use rhyming slang too, which is a development of the original cockney rhyming language. Many words have entered the English language from cockney rhyming slang, lots of which are not widely appreciated to have originated in this way, for example the terms 'scarper' (run away, from Scapa Flow, go), 'brassic' (penniless, from boracic lint, skint), and 'bread' (money, from bread and honey).

comparative - refers to an adverb or adjective which expresses a higher degree of a quality, for example 'greater' is the comparative of 'great'; 'lower' is the comparative of 'low'.

conjugation - this refers to verb alteration, or the resulting verb form after alteration, or a category or type of alteration, for reasons of tense, gender, person, etc. The basic word form, such as 'smile', is a lexeme; 'smiled' is the past-tense conjugation. The term 'past tense' may also be called a conjugation, since it refers to an alteration of a verb.

conjunction - a word which connects or joins two words, phrases or statements together, for example, 'if', 'but', 'and', 'as', 'that', 'therefore', etc.

consonant - a speech sound (and letter signifying one of these) made from obstructing airflow during the voicing of words. Words essentially comprise sounds which are consonants and vowels, and the representation of words in writing contains letters which are consonants and vowels. See places of articulation to see how consonant sounds are made.

contraction - in linguistics, contraction is a shortening of a word, and also refers to the shortened word itself. This is a very significant aspect of language development. Contraction is a form of abbreviation towards which language naturally shifts all the time. The word goodbye is a contraction of 'God be with you'. The word 'pram' (a baby carriage) is a contraction of the original word 'perambulator'. The word 'bedlam' is a contraction of the original word Bethlehem (the mental hospital). Combined abbreviated word forms such as don't, can't, should've, you're, I'm, and ain't, etc., are all contractions. Many words are contractions of older longer words, or of more than one word abbreviated by contraction into a shorter word. Contraction is mostly driven by the unconscious human tendency to try to speak (articulate) more easily and efficiently, so that words flow and movement of mouth/tongue is minimized. Language naturally develops in this way. Words shorten, and spellings simplify over time. Elision - the omission of a sound or syllable in speech - is a major feature in many contractions, and illustrates how language develops according to popular usage, rather than according to rules offered by grammar education and dictionaries. Portmanteau words are also contractions, but of a different sort, not generally the result of elision, instead being usually a deliberate abbreviated word combination.
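As flagged in the cockney rhyming slang entry above, the substitution mechanics can be shown as a simple lookup table. A minimal Python sketch follows; the three mappings come from the examples in this glossary, and word-by-word replacement is a deliberate oversimplification of how slang is really used.

```python
# Minimal sketch: cockney rhyming slang as a word-substitution table.
# The full rhyming expression is stored, but commonly only the first
# word is spoken ('rabbit' for 'talk', from 'rabbit and pork').
LEXICON = {
    "talk": ("rabbit", "rabbit and pork"),
    "money": ("bread", "bread and honey"),
    "skint": ("brassic", "boracic lint"),
}

def to_slang(sentence: str) -> str:
    # Naive word-by-word substitution; real slang is far less mechanical.
    return " ".join(LEXICON.get(w.lower(), (w,))[0] for w in sentence.split())

print(to_slang("no money left so no talk"))  # 'no bread left so no rabbit'
```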
contradiction - a view or statement which opposes another previous view or statement, or a statement or verbalized position which argues against itself, which - commonly, and especially concerning brief statements - is also called a 'contradiction in terms'. From the Latin root word elements contra, against, and dicere, speak.

contradiction in terms - a short expression or statement which is self-contradicting, for example, 'a living hell' or 'drank myself sober'. A 'contradiction in terms' is also called an oxymoron.

contranym/contronym - one of two words of the same spelling and opposite meanings, for example the word 'bolt' (which can mean fixed and secure in place, and the opposite meaning: move fast and run away). See autoantonym.

copyright - the legal right (control and ownership) automatically belonging to the creator of artistic work such as writings, designs, artworks, and music, to publish, sell and exploit the work concerned. Copyright is a very significant concept in the creation of language-based works, such as poetry, books, and other writings. Importantly, copyright makes it illegal to copy and exploit other people's work without agreement. Copyright usually exists for several decades, depending on territory and nature of work, and is subject to potentially highly complex law. Copyright may be sold, transferred, or the usage conditions relaxed, upon the wishes of the owner of the work. Contrary to popular view, copyright does not require registration. It exists automatically upon the creation of the work. If you merely scribble a pattern or a few original sentences on a piece of paper, that 'work' automatically is subject to your 'copyright'. Copyright normally includes a date of creation and/or publication and/or update or revision. Many printed works may contain copyright interests of several parties, for example, in the original created work, in the design/layout of the publication, and perhaps separately for pictures and diagrams created by other people. The creator of the work decides whether to transfer copyright to a buyer of the work, which is normally a matter of negotiation depending on the nature of usage, and the relative needs and powers of the buyer and seller. See also plagiarism.

cruciverbalist - a crossword puzzle enthusiast/expert.

declension - the altered form of the basic (lexeme) form of a noun or adjective or pronoun, for reasons of number, gender, etc. The word girl is a lexeme; the word girls is a declension. There are generally fewer declensions in English than in other languages such as French and German.

demonym - also called a gentilic - the word demonym refers to the name for someone who lives in (or, more loosely, is from, or was born in) a country or city or other named place. Most demonyms are derived very naturally and logically from the place name, for example: American, Australian, Indian, Mexican, British, Scottish, Irish, although some vary a little more, such as Welsh (from Wales), Mancunian (from Manchester UK), Liverpudlian (Liverpool UK), Martian (Mars), and a few demonyms which are quite different words, such as Dutch (from Holland/The Netherlands).
The word demonym is recent (late 1900s) in this precise context, with uncertain attribution, although the term demonymic is apparently first recorded (OED) in 1893, referring to a certain type of people in Athens, from deme, a political division of Attica in ancient Greece, in turn from Greek demos, people.

determiner - in language and grammar a determiner is a modifying word which clarifies the nature of a noun or noun phrase - a determiner tells the listener or reader the status of something, for example, in terms of uniqueness, quantity, ownership, relative position, etc. Examples of determiner words are 'a', 'the', 'this', 'that', 'my', 'your', 'many', 'few', 'several', etc.

diacritic - a sign or mark of some sort which appears with a letter (above, below or through it) to signify a different pronunciation. For example, accent, cedilla, circumflex, umlaut, etc. See diacritical marks. From Greek diakrinein, distinguish, from dia, through, and krinein, to separate.

dialect - the language, including sound and pronunciation, of a particular region, area, nationality, social group, or other group of people.

diathesis - equates to voice in grammar, i.e., whether a verb or verb construction is active or passive, for example, 'some nightclubs ban ripped jeans' is active diathesis, whereas 'ripped jeans are banned by some nightclubs' is passive diathesis. In tactical or sensitive communications the use of passive or active diathesis is often a less provocative way of communicating something which implies fault or blame, for example, 'the photocopier has been broken' (passive voice/diathesis) is less accusatory/confrontational than 'someone has broken the photocopier' (active voice/diathesis). Common examples of this use of passive diathesis/voice are notices such as 'thieves will be prosecuted' (passive), and 'breakages must be paid for' (passive), which are less confrontational/direct than 'we will prosecute you if you steal from us' (active), and 'you must pay for anything you break' (active). However, given a different verb and context, the active diathesis may be less threatening, for example 'the situation is challenging' (active) seems less onerous than 'we/you are challenged by this situation' (passive). Often the presence/potential presence of the word 'by' indicates that the diathesis/voice is passive.

dichotomy - in linguistics, a dichotomy is a division or contrast between two things (ideas, concepts, etc.) which are considered to be completely different, especially opposing or competing, for example as may arise in a debate or choice. The adjective dichotomous refers to something which contains two different or opposing or contrasting concepts, ideas, theories, etc. In some contexts a dichotomy is synonymous with a contradiction or with an oxymoron. (From Greek dikho, in two/apart, and tomy, which refers to a process.)

dingbat - in written or printed language a dingbat is a symbol - most commonly an asterisk - substituted for a letter, typically several dingbats for several letters, to reduce the offensive impact of vulgar words, such as F**K or S**T. Dingbats may also be used to substitute all letters in a vulgar word, notably for dramatic or amusing effect in cartoon talk bubbles, for example ***!!, or the probably somewhat ruder ¡*¿¿*¿$$?!!***!!.
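A minimal Python sketch of the dingbat masking just described, keeping a word's first and last letters and replacing the rest with asterisks; the harmless blocklist is hypothetical, chosen purely for illustration.

```python
# Minimal sketch: masking words with asterisk 'dingbats', keeping the
# first and last letters as in 'F**K' or 'S**T'.
# The blocklist here is a harmless stand-in for real vulgarities.
BLOCKLIST = {"darn", "heck"}

def dingbat(word: str) -> str:
    if word.lower() in BLOCKLIST and len(word) > 2:
        return word[0] + "*" * (len(word) - 2) + word[-1]
    return word

print(" ".join(dingbat(w) for w in "oh heck that darn cat".split()))
# -> 'oh h**k that d**n cat'
```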
diphthong - a vocal sound of one syllable with two different qualities, one merging into the next, often very subtly indeed, produced by the combination of two vowels, whether the vowels are together (for example, as in road and rain), apart (as in game and side), or joined as a ligature (as in the traditional spelling of encyclopædia). Note that the two different vowel sound qualities are not easily discernible, and many speakers of the language concerned will believe such sounds to be a single pure vowel sound, as in a monophthong. A diphthong typically entails a very slight glide or slide to a slightly different sound within the same syllable. See also triphthong, which refers to there being three different sound qualities in a single vowel-sound syllable. Monophthong refers to a single pure vowel syllable sound. The word diphthong derives from Greek di, twice, and phthongos, voice/sound.

diphthongization/monophthongization - this is a significant feature of language evolution: the evolution of speech and dialect (increasingly across cultures) influences what we regard as 'correct' or 'dictionary' language and words themselves, and involves pronunciation transitions from monophthongs to diphthongs (and vice-versa) as substantial factors. These transitions are called respectively diphthongization (pronunciation introduces an additional vowel sound, such as a slide or drawl, changing a single sound to a double sound) and monophthongization (a double sound is simplified to a single quicker, simpler sound). These features and changes in language are significant in producing the differences in accents when we compare, for example, the dialects of American-English speakers (from various parts of the US) with each other and with UK-English speakers (again in various parts of the UK) and with each other, and with other English speakers. These same features of diphthongization and monophthongization have also been significant in the development of the English language throughout history. Similar effects exist in other languages.

dis- - a very common prefix denoting negativity, reversal/inversion, or a disadvantage.

discourse - a technical word for a communication of some sort, written or spoken, and often comprising a series of communications.

ditto - ditto means 'the same as' (the thing that precedes it), from Latin dictus, said. Ditto is probably most commonly shown as the ditto mark ("), in columns or rows or lists of data, where it signifies 'same as the above'. Where the repetition is an extended row of data or words, several symbols may be linked by long hyphens, or a single symbol may be flanked by two very long hyphens reaching each end of the repeated data, so avoiding the need for a ditto symbol beneath each item/word.

dogberryism - a faintly popular alternative term for a malapropism, whereby a similar-sounding word is incorrectly and amusingly substituted in speech, the term being derived from the constable Dogberry character in Shakespeare's Much Ado About Nothing.

double-entendre - a double meaning or pun, where one of the meanings usually is amusing in a suggestive sexual or indecent way - from old French, 'double understanding' (the modern French is 'double entente').

double-meaning - a pun, where a word, phrase or statement can be interpreted to mean two different things, typically where the less obvious meaning is funny, or suggestively indecent or rude in an amusing way.
double-negative - this is usually an incorrect grammatical use of two negative words or constructions within a single statement, so that the technical result is an expression of the positive, or the opposite of what the speaker/writer intends. Usage is commonly associated with regional vernacular speech and with inarticulate adults and children, although more complex yet still awkward forms of the double-negative can be found in supposedly expert communications. A common example in everyday speech is "I don't know nothing..." (which technically equates to 'I know something'), or "They never did nothing about it...". Separately, the double negative is often used simply, or potentially very cleverly, within understatement, or litotes, as a way to emphasize something, and/or to make a humorous or sarcastic comment - for example "That's not bad..." to mean very good. See litotes.

dysphasia - a brain disorder, due to accident or illness, inhibiting speech and/or comprehension of speech.

dysphemism - a negative, derogatory, or insulting term, used instead of a neutral (and more usual) one; the opposite of a euphemism.

egg corn - a combination of a loose pun and a (usually intentional) malapropism. An egg corn may be written or spoken, designed or notable mainly for humorous effect, in which a word or words are substituted within a term or expression or phrase to produce a different and (typically) related meaning. For example the adaptation of 'Alzheimer's disease' to 'old-timer's disease'. The term 'egg corn' is attributed to linguistics professor Geoffrey Pullum, 2003, who apparently drew on an example of the effect in a linguistics blog referring to a woman in the habit of using the term 'egg corn' instead of the word acorn. Other examples of egg corns may be similarly daft, although some are more sophisticated. Often a feature of egg corns is irony. Wikipedia (2013) offers the examples: 'ex-patriot' instead of 'expatriate'; 'mating name' instead of 'maiden name'; 'on the spurt of the moment' instead of 'on the spur of the moment'; 'preying mantis' instead of 'praying mantis'. Business names offer fertile opportunities for egg corns, for (real) example: a clothing alterations shop called 'Sew What' ('So What'); a flame-grill fast food restaurant called 'Hindenburger' (a darkly ironic reference to the Hindenburg German airship inferno disaster of 1937); a gardener called 'The Lawn Ranger' ('The Lone Ranger'); a sandwich bar called 'Lettuce Eat' ('let us eat'); a Chinese restaurant called 'Wok and Roll' ('rock'n'roll'); an alleyway bookshop called 'Book Passage' ('back passage' - also slang for anus, although this has nothing to do with books per se - it's just an amusing notion); a tennis centre called 'The Merchant of Tennis' ('The Merchant of Venice' - no relevance to tennis or sport at all, just funny); a flower shop called 'Florist Gump' ('Forrest Gump' - no relevance to flowers, merely a daft punning egg corn); a fish and chip shop called 'The Codfather' ('The Godfather', the famous movie series, again simply a daft funny pun); a building contractor called 'William the Concretor' ('William the Conqueror'); a hairdresser's called 'Cubic Hair' ('pubic hair', and also alluding to the cubist art movement); a kebab restaurant called 'Pita Pan' ('Peter Pan', and also alluding to a cooking pan); a furniture store called 'Sofa So Good' ('so far so good'); and a chip shop called 'Lord of the Fries' ('Lord of the Flies', William Golding's best-selling 1954 novel, with absolutely no connection to fish and chips).
The slang money term 'sick squid' ('six quid') is an egg corn, from which the term 'squid', meaning quid (£ pound), derived.

elision - the omission of a sound or syllable in the speaking of words, such as don't, won't, isn't, I'm, you're, etc. The usual pronunciation of the word 'Wednesday' as 'Wensdy' is elision. The use of the glottal stop is also often elision too, as in the cockney/estuary English pronunciation of 'a pint and a half' as 'a pi'n'arf'. Elision is a common feature of contractions (shortened words).

ellipsis - missing word or words in speech or text, for example 'Keep Off Grass' (here 'the' is omitted for reasons of space/impact). Ellipsis may be used for various reasons, for example: omitted irrelevant sections of a quoted passage, usually indicated by three dots, to show just the meaningful sections, for example "...positive economic factors... resulting in substantial growth..."; or in speech/text due to casual or lazy or abbreviated language, for example 'Love you', where the 'I' is obvious/implied, or "Parking at own risk" instead of the full grammatically correct "Parking is at customers' own risk". Another common reason for ellipsis is where surrounding context enables words to be omitted that might otherwise seem unnecessary/repetitious, such as in listing items/activities, for example in the descriptive passage: "He packed shoes, socks, shirts, ties. A blazer. Cufflinks. Some silk handkerchiefs. And cologne." Here the ellipsis creates the dramatic effect of packing items into a case thoughtfully in different actions, rather than (in the full, arguably more grammatically correct, but clumsier and less dramatic/prosaic, continuous flowing version): "He packed shoes, socks, shirts, and ties. He also packed a blazer, cufflinks, some silk handkerchiefs, and cologne." The word ellipsis is from Ancient Greek elleipein, meaning 'leave out'.

emphasis - loosely equating to stress in pronunciation of words and syllables, and separately applying more broadly to the different intonation and volume given by speakers to certain words or phrases in a spoken passage so as to add impact, attract attention, prioritize, etc. Emphasis is commonly signified in printed communications by emboldening or italicizing or highlighting the text concerned. Dictionaries and other language/pronunciation guides usually indicate which syllables in words are to be emphasized or stressed by inserting a single apostrophe before the syllable concerned.

epistrophe - repetition of a word or word-series at the end of successive clauses or sentences, used for emphasis and dramatic effect, especially in speeches and prose, for example as used by Abraham Lincoln in his Gettysburg Address: "...this nation, under God, shall have a new birth of freedom - and that government of the people, by the people, for the people, shall not perish from the earth." The effect is also called epiphora. It is the counterpart of anaphora, which uses repetition at the beginning of sentences/clauses.

epitaph - a phrase or other series of words which is written to commemorate or otherwise be remembered and associated with someone who has died, for example as commonly appears on a tombstone. The comedian Spike Milligan wrote his own famously amusing epitaph: 'I told you I was ill.'

epithet - an adjective or phrase which is generally considered, or would be recognized, as characterizing a person or type or other thing, by using a word or a very few words which convey the essence or a chief aspect of the thing concerned.
An epithet seeks to describe somebody or a group or something in an obviously symbolic and very condensed way. For example, little noisy dogs are commonly referred to by the epithet 'yappy'. The epithet 'tried and trusted' is commonly used to refer to methods and processes which are long-established and successful. The epithet 'keen' is often used to refer to a person who is particularly enthused, determined and focused, and typically strongly motivated towards a particular action or outcome. The epithet 'green and pleasant land' is often used to refer to England. From Greek epi, upon, and tithenai, to place.

eponym - a name for something which derives from a person's name, or from the name of something else, for example biro (after Laszlo Biro, inventor of the ballpoint pen), atlas (after the Greek mythological titan Atlas, who held the world on his shoulders), and Mach (the unit for measuring speed relative to the speed of sound, after Ernst Mach). The descriptive term for an eponym is eponymous. An eponymous name is therefore one which is named after someone/something. The term derives from Greek epi, meaning 'upon'.

estuary english - the dialect and speech style associated with people from London and surrounding areas, especially the Essex and Kent conurbations close to the Thames river estuary, hence the name. This is a relatively recent term, and an attempt by certain media and commentators to attach a name to the accent of the Greater London area, as distinct from cockney.

etymology - the technical study/field of word origins, and how words change over time, or specifically the history of a word; originally from Greek etumos, true.

etymon - a word or morpheme from which a later word is derived.

euphony/euphonic - this refers to the pleasant nature of speech and vocal sounds, and is a highly significant aspect in the development of language. This is because language evolves according to its quality as well as its meaning. Words and sounds that are pleasing to the ear and to our unconscious responses tend to be preferred and used more than language whose sounds (and efforts in producing the sounds) displease the speaker and listener (called cacophonous). Also, euphonic sounds flow more smoothly and so enable easier, more satisfying communications. The expression 'easy on the ear' actually has very deep significance. Language evolves like a living thing: the best and fittest word sounds thrive and endure and continue to adapt positively; the unfit and awkward sounds struggle for long-term acceptance and popularity. Clear examples of the positive influence of euphony are found in the popularity of reduplicative words, and in alliterative phrases, and in poetry, which are easy and pleasing - euphonic - to say and hear. Avoid confusing euphony and cacophony with the meaning of words. Euphony and cacophony refer to sound and ease of utterance, not to meaning. Words which carry extremely ugly or offensive meaning are often amazingly euphonic. In fact most offensive words are very euphonic indeed - they are easy to say and phonically are pleasing on the ear (although it is vital to ignore meaning when considering this assertion). This is a major reason that offensive words thrive and remain so popular - people love to say them. Contrast this with 'difficult' words such as long chemical names, which have been constructed technically by scientists and engineers, rather than having evolved over hundreds of years. Such words are rarely euphonic - they are awkward and unnatural, and so they remain obscure.
This is why we will always prefer to say 'bleach' rather than 'sodium hypochlorite'. It's not a matter of word-size - it's that 'sodium hypochlorite' is cacophonous, whereas 'bleach' is sublimely euphonic. In fact 'sodium' is actually very euphonic (it's an old word), but 'hypochlorite' is ugly-sounding and very awkward to say, so it will therefore 'never catch on'. Conversely, when we say that words 'trip off the tongue', this is a metaphorical expression of our instinctive appreciation of euphony, and also of euphony's significance in affecting the way we speak and the way in which languages develop.

exonym - a placename which foreigners use and which differs from the local or national name. From Greek exo, meaning outside.

expression - an expression in language equates loosely and generally to a cliche, or separately the term expression/express refers to a communication of some sort, for example 'an expression of horror', or 'John expressed his surprise'.

euphemism - a positive/optimistic/mild word or phrase that is substituted for a strong/negative/offensive/blunt word or phrase, typically to avoid upset or embarrassment (either for communicator and/or audience), or used cynically to mislead others, often to avoid criticism. For example: 'collateral damage' instead of 'civilian casualties/deaths' in justifying military action; or 'the birds and the bees' instead of 'sex' in sex education; or 'downsizing' instead of 'redundancies' in corporate announcements; or 'negative growth' instead of 'losses' or 'contraction' in financial performance commentary. Death and dying are usually expressed in a euphemism, for example 'passing away'. Heaven is arguably a euphemism for what happens after death. Euphemisms are very common in referring to sexual matters and bodily functions, due to embarrassment, real or perceived. Hence terms such as 'making love', and words like poo, wee, willy, bum, etc. Some euphemisms are appropriate; others are cynical or disingenuous. Where there is honest intention to avoid causing offence or upset in sensitive human situations, euphemisms are usually appropriate. Where a politician or business person uses euphemistic language to avoid responsibility, blame, etc., then euphemisms are cynical and dishonest. The inverse or opposite of a euphemism is a dysphemism.

figurative - in language the term figurative refers to the non-literal use of words, equating to the symbolic or metaphorical representation of concepts, thoughts, things, ideas, feelings, etc. The term figurative is very broad and can potentially mean any use of descriptive language which is not factual. Figurative types of description include similes, metaphors, exaggeration, or any other descriptive device which distorts the strict technical meaning of the words used.

figure of speech - a figure of speech is a symbolic expression; 'figure of speech' is a very broad term for a word or series of words used in writing or speech in a non-literal sense (i.e., symbolically), which may be a cliche or metaphor or simile, or another expression which represents in a symbolic way a concept or feeling or idea or some other communication. A figure of speech may be a popular and widely used expression, or one that a person conceives for a single use. There are very many thousands of figures of speech in language, many of which we imagine wrongly to be perfectly normal literal expressions, such is the habitual way that many of them are used.
font - nowadays the word font has a broader meaning than its original or traditional meaning: font used to refer to a specific size and style of a typeface (typeface being a font family, such as Times or Helvetica, including all sizes and variants such as bold and italic, etc). In modern times font tends more to refer to an entire font family or typeface (such as Times or Helvetica). The word font is derived from French fonte and fondre, to melt, referring to the making of the lead type used in traditional printing.

former - this is quite an old technical/formal writing or speaking technique: former here refers to the earliest of a number of (usually two) items mentioned in a preceding passage of text/speech. Its sister word is latter, which refers to the last (usually second) item mentioned in a preceding passage of text. An example in use is: '...There was a problem involving the keys and the house, when the former were locked inside the latter...' The usage typically aims to avoid unnecessary or clumsy repetition, although with declining use, and correspondingly increasing numbers of people who have not the faintest idea what former and latter mean in this context, the merits of the methodology are debatable. See latter.

generic - the word generic refers to a class or category or group of things - it is a flexible and relative concept. Generic might otherwise mean 'general' or 'broadly applicable' (in relation to something which belongs to a class or set, which basically everything does in one way or another), or describe 'similar items/members'. Its usage normally seeks to differentiate a broad sense from a specific sense. Generic is the opposite of specific or unique or individual. More technically, generic refers to classes of things in formal taxonomy or classifications. The word derives ultimately from Latin genus, meaning stock or race.

genericized trademark/generic trademark - a word which was (and may still be) a brand name that is used in a general or generic sense for the item or substance concerned, irrespective of the brand or manufacturer, for example Aspirin, Velcro, Hoover, Sellotape, Durex, Li-lo, Bakelite, Zippo, Coke, etc. Many genericized trademark names have entered language so fully that people do not appreciate that the word is/was a registered and protected brandname. There are surprisingly many such names. Corporations and other owners of genericized trademark names typically resist or object to the effect, because legally the 'intellectual property' is undermined, and its value and security as an asset is lessened (which enables competitors to sell similar products). There is however a powerful contra-effect, by which owners of genericized trademarks potentially command a hugely strong and popular reputation, which can be used to leverage lots of other benefits and opportunities if managed creatively and positively. It is, as the saying goes, 'a nice problem to have'. See a long list of genericized trademarks in the business dictionary.

gerund - a verb used in the form of a noun, typically by using the 'ing' suffix, for example 'when the going gets tough' (going being the noun) or 'it's the screaming and wailing that upsets people' (both screaming and wailing here being gerunds). Originally from Latin gerundum, which is the gerund of the Latin verb gerere, to do.

gerundive - a verb used in the form of an adjective, with the meaning or sense of '(the verb) is to be done'.
Gerundive constructions do not arise in English as gerunds do, but they appear in words that have entered English from Latin, often ending in 'um', for example 'quod erat demonstrandum' ('which was to be demonstrated' - abbreviated to QED, used after proving something). Interestingly, the name Amanda is a (female) gerundive, meaning '(she) is to be loved'. The words referendum, agenda, and propaganda are all from Latin gerundive words, which convert a verb into an adjective with the meaning of necessity to fulfil the verb.

glottal stop - a consonant sound produced by blocking exhaled airflow (when voicing vowel sounds) by sudden closure of the vocal tract, specifically the folds at the glottis (the opening of the vocal cords), and which may be followed by an immediate reopening of the airflow to enable the word to continue. Glottal stops may therefore happen at the ends of words or during words, for example in cockney and 'Estuary English' (a dialect of Greater London and communities close to this), where in English they typically replace a formal letter sound, commonly a 't', which is then referred to as a 'dropped' letter. The glottal stop, while extremely common in speech, is not formally included in the English alphabet, but is included in certain foreign languages, notably Arabic.

glyph - a single smallest unit (symbol) of meaning in typographics (writing/printing symbols), i.e., a symbol whose presence or absence alters the meaning of a word or longer communication. All letters are glyphs. Diacritical marks are generally regarded as glyphs. Increasingly, computer symbols are regarded as glyphs. A dot above an 'i' or 'j' has traditionally not been considered a glyph in English, although it is a glyph in other languages where a dot alone has an independent meaning.

-graph - a common suffix which refers to a word or visual symbol, or denotes something that is written or drawn or a visual representation, for example as in the words autograph, photograph, etc. From Greek graphos, meaning written, writing.

grapheme - the smallest semantic (meaning) unit of written language, equating loosely to a phoneme of speech. Graphemes include alphabet letters, typographic ligatures, Chinese characters, numerical digits, punctuation marks, and other individual symbols of writing systems.

hash - also called the 'number sign' (#), and in the US/Canada and nations using US vernacular the 'pound sign' (a name which in the UK refers instead to the £ sterling currency symbol). The hash/pound symbol generally appears bottom right on telephone keypads and is significant in confirming many telecommunications functions. The hash symbol has also become significant in computerized and internet functionality and data organization, notably in the 'hashtag'.

hashtag - a hashtag is the use of the hash (#) symbol as a prefix for an identifying name relating to content or data of some class or commonality that may be sorted or grouped or analyzed, most famously in modern times on social media websites such as Twitter. In fact the use of the hash symbol for computerized sorting and analysis purposes first began in Internet Relay Chat systems, first developed in the late 1980s. The hashtag is a major example of the increasing simplification, streamlining, coding and internationalization of language, and especially, to this end, of the integration of numbers and symbols within words and letters and electronic communications, to increase speeds of communicating and accessibility, to reduce the quantity of characters required to convey a given meaning, and to organize and distribute communications-related data.
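As a simple illustration of the hashtag's 'data organization' role described above, here is a minimal sketch in Python. The pattern and function name are illustrative assumptions only - real platforms apply far more complex tokenization rules:

```python
import re
from collections import Counter

def extract_hashtags(text):
    # Match a '#' followed by letters, digits or underscores.
    # (Illustrative only: real services use more complex rules.)
    return re.findall(r"#(\w+)", text)

posts = [
    "Loving this #grammar glossary #language",
    "More #language trivia today",
]

# Group and count posts by tag - the sorting/grouping role of the hashtag.
tag_counts = Counter(tag.lower() for post in posts for tag in extract_hashtags(post))
print(tag_counts)  # Counter({'language': 2, 'grammar': 1})
```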
hendiadys - a sort of tautology which for dramatic effect or emphasis expresses two aspects or points separately rather than by (more obviously and efficiently) combining them, for example: "The rain and wet fell incessantly..."

holonym - a whole thing in relation to a part of the whole, for example the word 'car' is a holonym in relation to 'wheel' or to 'engine'. From Greek holon, whole, and onuma, name.

heteronym - heteronym refers to each of two (or more) words which have the same spelling but quite different meanings, for example key (to a door or lock) and key (in music), sewer (one who stitches/water-waste pipework), bow (made with ribbon/bend from the hips), and row (argument/propel a boat). Where the sound is different, such words are also called heterophones. Where the sound is the same, such words are also called homonyms. A heteronym is a kind of homonym, and equates to a heterograph. Additionally and differently, heteronym refers to single words which are quite different but mean the same, either due to geographical differences, for example fender and bumper (the US/UK-English words for the protective construction at the front/rear of motor cars, etc), or due to different etymology, for example settee and sofa, or dog and hound. From Greek heteros, other, and the suffix 'onym', which refers to a type of name.

heterograph - a less common term than, and equating to, a heteronym, i.e., one of two or more words with the same spelling, but different meaning and different origin, which may be pronounced the same or differently.

heterophone - this is a heteronym that is pronounced differently to its related words (i.e., the other word[s] which cause each to be a heteronym). From Greek heteros, other, and phone, sound or voice. Examples of heterophones include entrance (entry, and put someone in a trance), row (row a boat, and row meaning argue), wind (a wind that blows, and wind up a clock).

homo- - a common prefix meaning 'same', from Greek homos, same.

homonym - homonym refers to each of two (or more) words with the same pronunciation or spelling, but different meanings and etymological origins, for example the word 'mean' (unkind or average or intend, for which each 'mean' is quite differently derived), or the words flower and flour. A homonym involving the same spelling is also called a heteronym. A homonym which involves different spelling is also called a homophone. Homo is a prefix from the Greek homos meaning same.

homograph - one of two or more words which have the same spelling but different meanings, and usually different origins too.

homophone - a word which sounds like another but has different meaning and spelling, for example flour and flower.
heteronyms, heterophones, heterographs, homonyms, homophones, homographs - explanatory matrix. Note that the definitions of these terms contain many overlaps and common features. Linguistics experts may disagree over certain finely detailed differences.

| term | meaning | sound | spelling | origin | example(s) |
|---|---|---|---|---|---|
| heteronym | different | different or same | same | different | key (music)/key (lock) |
| heterophone | different | different | same | different | entrance (entry)/entrance (hypnotise) |
| heterograph | different | different or same | same | different | key (music)/key (lock) |
| homonym | different | same (and/or) | (and/or) same | different | mean (intend)/mean (unkind)/mean (average) - flower/flour |
| homophone | different | same | different | different | weigh/way - write/right - flower/flour |
| homograph | different | different or same | same | different or same | entrance (entry)/entrance (hypnotise) |

(In the homonym row, 'same (and/or)' indicates that homonyms share the same sound and/or the same spelling.)

(N.B. It can be helpful to a small degree in understanding the confusing relative meanings and overlaps of these terms to remember that 'phone' refers to sound, 'nym' refers to word/name, and 'graph' refers to spelling - I say 'to a small degree' because even given this knowledge the confusion remains challenging to resolve completely, so some caution is recommended in using any of these terms in an absolutely firm sense.)

hypo-/hyper- - these two common prefixes mean respectively (loosely) 'under/below' and 'over/above', from their Greek origins, hupo (under) and huper (over). Remembering these two simple prefixes will help the understanding of hundreds of different terms.

hyperbole - exaggeration or excessive description, used for dramatic effect, or arising from emotional reactions, rather than for accuracy or scientific reasons. For example, 'I am so hungry I could eat a horse...' or 'I've told you a million times...' From Greek huper, over, and ballein, to throw.

hypernym/hyperonym - interestingly we use these words every day, and understand their meaning and positioning, but probably don't realize what they are called technically: a hypernym is a category or group name within which different types or sorts exist, or a general term within which more specific different type terms exist. For example, 'bird' is a hypernym (group name) in relation to 'sparrow', 'eagle', and 'pelican' (which are hyponyms of the 'bird' group or hypernym). In turn 'animal' is a hypernym for 'bird', which is a hyponym of 'animal'. In turn 'creature' is a hypernym of 'animal'. All hyponyms may accurately be called also by the name of their hypernym, but not vice-versa; for example every hammer (hyponym) is a tool (hypernym), but not every tool is a hammer. Hypernym is from Greek huper, over, beyond. A hypernym is also called a superordinate or generic term.

hyponym - this is a sister term (or more precisely a daughter term) to hypernym, and refers to something which is in a category of some sort, for example 'sparrow', 'eagle', and 'pelican' are all hyponyms in a category named 'bird' ('bird' is the hypernym in relation to the stated hyponyms). A hyponym may always correctly be referred to by its hypernym word (for example 'golf' is a 'game', as is every other hyponym of 'game') - but the same does not apply in reverse (i.e., a 'game' is not always 'golf'). Every word in the language is a hyponym, because every word refers to something which is part of a group of some sort. Hyponym is from Greek hupo, under, which is a good way to remember that hyponyms are 'under' a hypernym. A hyponym is also called a subordinate term.
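The hypernym/hyponym relationship described in the two entries above is essentially a tree (a taxonomy), so it can be sketched in a few lines of code. This is a minimal illustration using the tiny made-up hierarchy from the examples above - not any real lexical database such as WordNet:

```python
# Each word maps to its immediate hypernym (parent category); None = top.
hypernyms = {
    "sparrow": "bird", "eagle": "bird", "pelican": "bird",
    "bird": "animal", "animal": "creature", "creature": None,
}

def is_a(word, category):
    """True if 'category' is the word itself or one of its hypernyms."""
    while word is not None:
        if word == category:
            return True
        word = hypernyms.get(word)
    return False

print(is_a("sparrow", "creature"))  # True: every sparrow is a creature
print(is_a("animal", "sparrow"))    # False: not every animal is a sparrow
```

Note how the one-way lookup mirrors the glossary's point: every hyponym can correctly be called by its hypernym's name, but not vice-versa.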
-i - 'i' is an increasingly commonly seen prefix denoting 'internet' and suggestive of connectivity and functionality associated with internet technologies. I am open to suggestions of when the i prefix was very first used in this way. The Apple corporation could claim the first globally dominant usage. Apple has many trademarks covering the use of the i prefix (notably iPhone, iTunes, iPad, iPod). According to reports, the Apple TV was to be called the iTV until UK broadcaster ITV (Independent Television) objected/threatened legal action.

icon - a symbol representing something. Icons are increasingly becoming highly significant elements of modern communications, to the extent that we can imagine alphabets of the future comprising many icons, just as they will have to accommodate numbers and other symbols alongside traditional letters. See icon in the business dictionary.

idiom - a word, or more usually words, which through common use have developed a recognizable figurative meaning, so as to refer to or describe something in symbolic non-literal terms. Idioms may be widely recognized, or understood just by a small group, for example by virtue of locality or common interest. Languages are full of idioms; many cliches are idioms, as are many similes and metaphors too. An idiom is generally an expression which is popularly used by a group of people, as distinct from a figurative expression created by an author or other writer for a single use within the created work, which does not come into more common use. Idioms commonly feature in the dialect of groups defined by geography or culture. The word idiom derives from Greek idios, 'own' or 'private'.

i.e. - a commonly used abbreviation of the Latin term 'id est', meaning 'that is', for example when offering a clarification or explanation of, or a listing related to, the directly preceding reference or point. In most usage the full meaning of 'i.e.' is effectively 'that is to say...', for example: 'His travels took him to the capital cities of England, France and Portugal, i.e., London, Paris and Lisbon.' Or: 'Nowadays people use too many detergents and other chemicals to clean things, when much of the time the only cleaning product required is the "universal solvent", i.e., water.'

inflection - also spelled inflexion - in linguistics inflection refers to tonal or pitch alteration or modulation of the human voice, or in grammar to the alteration of a basic word (lexeme) - its ending or beginning or spelling - to change tense, mood, person, voice (whether grammatically active or passive, i.e., diathesis), number, gender and case. The inflection of verbs is called conjugation, and the inflection of nouns/adjectives/pronouns is called declension.

intellectual property - often abbreviated to IP, 'intellectual property' is a widely used legal term referring to created works such as writings, artworks, brandnames, designs, music, inventions, etc., which may be recorded and officially registered in some way, and which may not be copied or exploited without approval or licence or other permission from the 'rights-holder'. Implicitly, intellectual property commonly has a commercial value, which while relatively 'intangible' may (in the case of popular brands and mass-produced products) be considerable and stated in official financial accounts. Normally intellectual property would be registered in some way to improve protections and awareness of existence/ownership, aside from the natural copyright existing in any original created work.
Examples of registered intellectual property are: patented inventions, designs, brandnames and trademarks, books, poetry, photographs, sculptures, processes and systems, software, written and recorded music. Different registration bodies exist for different types of work and different geographical territories.

International Phonetic Alphabet (IPA) - a major and widely used phonetic alphabetic system, devised by the International Phonetic Association as a way to represent vocal language sounds. The alphabet's most obvious purpose is to show how words and letters are pronounced. The IPA is used by technical and professional linguists and lexicographers, and others involved in the study and teaching of spoken language. Its representations of words appear alongside most entries in many dictionaries of languages which use the Latin alphabet. The IPA is a vast system, comprising (at its 2005 revision) 107 letters (consonants and vowels) and over 50 diacritics and other signs indicating length, tone, stress, and intonation of word/letter sounds. Given that the diacritics and the other modifying signs may be used in various combinations with the letters, this produces the potential for many thousands of different sounds. Places of articulation explains where in the mouth and vocal tract these sounds are produced. (A clear PDF of the IPA chart is available at https://www.langsci.ucl.ac.uk/ipa/ipachart.html, under a Creative Commons Attribution-ShareAlike 3.0 Unported License, copyright © 2005 International Phonetic Association.)

irony/ironic - in language irony refers to the use of words which intentionally contain a meaning or interpretation which is quite different, or opposite, to the literal or apparent meaning of the words or statements themselves. Irony is a difficult concept for some people to appreciate, partly because it entails quite a deep understanding of the context and attitude of the writer/speaker. Where irony is interpreted 'at face value', or according to the initial apparent obvious meaning, the reader/listener derives a false impression of meaning, which may wrongly suggest that the writer/speaker and his/her communication is insulting or foolish. Irony is similar to sarcasm, although it covers a much wider range of linguistic effects, which may act on a deeper and more extensive level. For example the entire nature of a character, or plotline, or situation in a story may be ironic, whereas the concept of sarcasm is essentially limited to the tone of communications. Also, irony may be used for various effects such as comedy, dramatization, pathos, etc., whereas sarcasm tends to be used for quick humour, negative observations, insults, denigration, and angry comment.

Janus word - an auto-antonym, i.e., one of two words with the same spelling but opposite meanings, such as fast (firmly fixed, and moving quickly). So called because Janus, the Roman god of beginnings, transitions, gates, passages, etc., is traditionally depicted with two faces, representing looking both to the future and the past at the same time. Janus, incidentally, is also the derivation of January, in the sense of a beginning or doorway to the new year.
juncture - in linguistics a juncture is the manner in which two consecutive syllables or words are connected (mainly audibly), so as to differentiate the sounds of the words and thereby enable the entire meaning of the construction. A juncture between syllables and words effectively avoids everything merging into a continuous stream of meaningless sounds. The movement of juncture in words and phrases sometimes produces alternative (amusing, clever, etc) meanings, which effect is called an oronym.

juxtapose/juxtaposition - to juxtapose (two ideas, concepts, points, etc) means to put or express two different or contrasting things together for emphatic or dramatic effect. A juxtaposition is the result or act of doing this. For example, (the image or description of) a homeless person begging on the street outside Buckingham Palace would be a juxtaposition. The expression 'take it or leave it' is a very simple juxtaposition. A juxtaposition commonly exaggerates or produces a competing effect, where in reality the two 'competing' items may not actually conflict with each other, or be a stark 'one or the other' choice. A juxtaposition may be used for entertaining and uplifting purposes, as in poetry, drama, movies, etc., or for more negative, cynical, manipulative purposes, as in politics and marketing.

Latin - the language of ancient Rome, and widely used still as a language of scholarship, astronomy, administration, law, etc. Latin is one of the fundamental root languages of European language development, specifically of the many 'Romance' languages, notably including Spanish, Portuguese, French, Italian, and Romanian. Latin, chiefly via French, had a significant influence in the development of the English language. The conventional English alphabet (along with those of the Romance languages) is known as the Latinate alphabet, because its origins are in ancient Latin. Many Latin terms survive in day-to-day English language, especially related to business, technical definitions, law, science, etc.

latter - the last item in a list or the second of two points. See former.

leet - leet, also known as eleet or leetspeak, is an alternative alphabet for the English language that is used primarily on the Internet. It uses various combinations of ASCII characters to replace Latinate (standard English writing) letters. The leet word for leet is 1337. Here is an extensive example of leet-style language.
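Because leet is essentially a character-substitution scheme, it is easy to sketch in code. Here is a minimal illustration in Python; the substitution table below is one common convention, not a definitive standard (leet spellings vary widely):

```python
# One common (but by no means standard) leet substitution table.
LEET_MAP = str.maketrans({
    "a": "4", "e": "3", "i": "1", "o": "0",
    "s": "5", "t": "7", "l": "1",
})

def to_leet(text):
    """Replace letters with leet equivalents; unmapped characters pass through."""
    return text.lower().translate(LEET_MAP)

print(to_leet("leet speak"))  # '1337 5p34k'
```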
lexeme - the basic form of a word, without alteration for verb tense or other inflection. Most words in dictionaries tend to be lexemes. Examples of lexeme forms are run, smile, give, boy, child, blond; whereas inflections of these lexemes include for example: runs/ran/running/runner, smiles/smiled/smiling/smiley, gave/giver/given, boys/boyish, children/childish, blonde/blondes/blonder.

ligature - in typographics and writing a ligature is an unusually joined form of two letters or other typographical characters, for example the ampersand. 'Unusually' here refers to a joint which is not typical in handwriting. Typographical folk do not universally agree which jointed forms qualify technically as ligatures, for example the forms æ and œ, which are regarded now by some as single vowels/symbols in their own right, rather than as jointed forms, as they historically have been. Such a disqualification for these and similar double-letter forms would incidentally also render the term diphthong inappropriate for them, given the definition of that term.

literal/literally - originally and technically literal/literally refers to the use of language so that it (the expression or statement, etc) means exactly what the words state, i.e., there is no exaggeration or metaphor or symbolization in the language, and therefore the words should be taken as a clear and truthful expression of fact. In informal and recent use however (late 1900s onwards), the term 'literally' is used widely (and arguably very incorrectly) to express precisely the opposite, i.e., that the figure of speech concerned is figurative or symbolic or (commonly) highly exaggerated and far different from the actual truth. For example: 'I told him literally millions of times...' or 'He was so angry that smoke was literally coming out of his ears...' This is an example of 'incorrect' usage becoming 'correct' by virtue of popular usage. In this respect the term is potentially highly confusing, since the term 'literally' may mean in common use either that something is completely factual and true, or instead that something is highly exaggerated or distorted. The listener/reader/audience must decide. Usually the statement itself, context, situation and speaker/writer collectively indicate whether the term 'literally' is used in its original technical sense (i.e., factual/actual) or its later wide informal sense (i.e., symbolic/metaphorical/exaggerated). Statements such as: 'I was literally sweating buckets,' and 'I was literally climbing the walls in agony,' are obviously metaphors and so are not technically 'literal' and factual, whereas the statements: 'Our flight was delayed for literally a whole day,' and 'I literally hung my head in shame,' could quite conceivably be technically 'literal' and factual. The term 'literally' is perhaps prone to confusion given the similar words 'literature' and 'literary', whose meaning quite correctly encompasses symbolic and figurative writing (in books, poetry, plays, etc). Whatever the case, the original technical meaning derives from the Latin equivalent 'litteralis', in turn from litera, meaning 'letter of the alphabet'.

litotes - the use of understatement to give emphasis, typically to the opposite meaning (i.e., it's actually an ironic, subtle way to make an overstatement or exaggeration), and often in a humorous way, especially but not necessarily using the 'double-negative' - for example "that's not bad..." in referring to something that is considered very good, or "not half..." to emphasise an expression of 'wholly' or 'fully' or 'very'. Many examples of litotes have entered common speech so that we don't think about them as understatement. For example: "I won't be sorry..." (meaning I will be glad); "Not the sharpest knife in the drawer..." (meaning dull-witted); "Not the fastest..." (meaning very slow or the slowest); "I was just a little hungry..." (meaning I was starving); or "I know a little bit about..." (meaning I know a great deal about...). The word litotes is from Greek litos, meaning plain or meagre. Litotes is a form of sarcasm. Litotes is traditionally also called meiosis.

-logue - shortened in US-English to log, logue is a suffix which denotes a type of discourse, i.e., a communication, and often a series of spoken or written communications, for example as used in catalogue, dialogue, monologue, prologue, analogue, etc. From Greek logos, word or reason.
malapropism - the incorrect substitution of a word by a similar-sounding word, usually in speech and with amusing effect, often used as a comedic device in light-entertainment TV shows and other comedy forms. The term derives from a character called Mrs Malaprop in Richard Brinsley Sheridan's 1775 play The Rivals, whose lines frequently included such mistakes. Other writers, notably Shakespeare, earlier made use of the technique without naming it as such. Lord Byron in 1814 is said to have been the first to refer specifically to a malaprop as a mistaken word substitution. The effect is far less popularly called a Dogberryism, after the watchman constable Dogberry in Shakespeare's Much Ado About Nothing, who makes similar speech errors.

matronym - a name derived from a mother or female ancestor. From Latin mater, mother. Also called a metronym. See also patronym.

meta- - an increasingly common prefix referring to the use of replacement or 'hidden' forms (words, language) instead of what is normally visible or openly accessible. The increasing frequency and popularity of the 'meta-' prefix in language is substantially due to the computer age, by which so many forms of communications are coded, or accompanied by hidden processes/data/etc. Meta is Greek for with/across/[named] after.

meta-message - the underlying or real or hidden meaning of a communication or information/data/presentation, as distinct from the message initially taken and most obviously seen in the communication. See the meta- prefix.

meiosis - traditionally equating to litotes - i.e., intentional sarcastic/humorous understatement, which often includes the use of the double-negative (for example, "That's not bad..." meaning very good) to emphasize or refer ironically to the impressive nature of something, by suggesting the opposite. Meiosis is a late-medieval English term, originating in the 1500s, from Greek, spelt and meaning the same (meiosis = understatement), from meion, meaning less.

metaphor - a word or phrase which is used symbolically to represent and/or emphasize another word or phrase, typically in poetic or dramatic writing or speech, for example, 'his blood boiled with anger', or 'his eyes were glued to the screen in concentration'. A metaphor is similar to a simile, except that a simile uses a word such as 'as' or 'like' so as to make a comparison, albeit potentially highly exaggerated, whereas a metaphor is a direct statement which cannot literally be true. 'The criticism felt like he was drowning in a flood...' is a simile, whereas 'The criticism was a drowning flood...' is a metaphor. Meta is Greek for with/across/[named] after, hence the Greek translation/derivation of metaphor, metaphora, from metapherein, to transfer.

metasyntactic - a technical description referring to the use of replacement words in language when for whatever reason the actual word(s) cannot be identified, either through lack of time, care, knowledge, or permission, etc. See the meta- prefix, and syntax. See also placeholder names.
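In computing, metasyntactic variables are placeholder names (classically 'foo', 'bar' and 'baz') used in example code when the real names do not matter. A minimal illustrative sketch in Python - the names here are deliberately meaningless, which is exactly the point:

```python
# 'foo', 'bar' and 'baz' are metasyntactic variables: placeholder names
# used when the actual names are irrelevant to the point being made.
def foo(bar, baz):
    return bar + baz

print(foo(2, 3))  # 5 - the example is about the syntax, not the names
```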
meronym - simply, a meronym means 'part of': for example, a window is a meronym in relation to a house, and a hammer is a meronym in relation to a toolkit. More specifically a meronym is a word technically referring to a part of something but which is used to refer to the whole thing, for example: 'All hands on deck' (in which 'hands' are a part of each crew member, yet the word is used, as a meronym, to refer to the crew members), or 'Feet on the street' (in which 'feet' is a meronym for the people who are on the street). From Greek meros, part, and onoma, name. Meronym is the opposite of a holonym (a whole thing in relation to a part of the whole).

metonym - a word/phrase used to represent the function with which it is associated - similar to a metaphor - for example the term 'Number Ten' is a metonym for the UK Prime Ministerial office and authority (by association with the address of the office at 10 Downing Street). 'The bottle' is a metonym for alcohol; 'the Crown' is a metonym for the monarchy; 'Brussels' is a metonym for the EU's institutions; '(there will be) tears' is a metonym for (predicted) emotional upset; 'Twickenham' is a metonym for the England Rugby Football Union; 'the noose' and 'the chair' are metonyms for capital punishment; 'under the knife' is a metonym for surgery; 'shut-eye' is a metonym for sleep, etc. From Greek metonumia, 'change of name'.

metronym - a name derived from a mother or female ancestor. More usually called a matronym. The expression 'Mother Earth' is perhaps the most fundamental universal example of all. More narrowly, any female child is given a metronym/matronym when named after a mother, grandmother or other female in the ancestral line.

misnomer - an inaccurate or incorrect term, name or designation, especially when established in popular or official use, although a misnomer may also be a simple once-only error of referencing or naming something. There are many different types/causes of misnomers. Some misnomers originate first as correct and accurate terminology but then become misnomers because the meaning of language alters subsequently over many years. The 'ring' of a telephone is a misnomer because telephones no longer contain bells. When people refer to 'pulling the chain' in referring to flushing a lavatory, this is also a misnomer because lavatories generally no longer have chain-pull mechanisms. The Indian food 'Bombay duck' is a misnomer because it is actually a dried fish. A 'contradiction in terms' or oxymoron may also be a misnomer. Genericized trademarks are misnomers. Misunderstood scientific phenomena often produce misnomers, such as the term 'shooting star', which technically is a meteor. So too is 'thunderbolt' a misnomer, because it actually refers to a lightning strike. The 'lead' of a pencil is a misnomer, because it is graphite. When we suggest that someone will 'catch a cold' by not wearing enough clothes in winter, this is a misnomer because a cold is a virus and cannot be 'caught' from or produced by cold weather. Many creatures are named as misnomers, due to inferring a species by similarity of appearance: for example, a 'king crab' is not a crab, a 'koala bear' is not a bear, and a 'prairie dog' is not a dog. Changes in legal terminology can also produce misnomers, for example it is a misnomer to refer to sparkling wine as 'champagne' when it does not come from the Champagne region in France. The term 'football club' is a misnomer where in most cases the 'club' is a commercial company. There are thousands more misnomers in common use, and commonly people don't appreciate that the terms are technically quite wrong.
A misnomer should not be confused with a metaphor, which is an intentionally symbolic term for dramatic effect.

mnemonic - a 'memory-aid' for a particular thing (rule, process, concept, theory, etc., or task or mental note). Examples of types of mnemonics include acronyms (including 'bacronyms'), stories, quotes, etc., and the old practice of tying a knot in one's handkerchief (reminding the owner that he/she should remember something). The word mnemonic is pronounced 'nemonic' and is commonly misspelled ('numonic'). It's from Greek mnemon, mindful. The study of the development and assistance of memory is called mnemonics or mnemotechnics. See more about mnemonics in the business dictionary.

modal verb - an additional verb which expresses necessity or possibility from the standpoint of the writer's/speaker's belief or attitude, namely the verbs: must, shall, will, should, could, would, can, may, might.

modality - an aspect of language which expresses necessity or possibility from the standpoint of the writer's/speaker's belief or attitude. See also mood.

modulation - in linguistics modulation refers to a change of pitch in the voice.

mondegreen - a misheard and wrongly interpreted word or phrase, from a published or quoted passage of text (obviously heard, not read), especially in song lyrics, poetry, dramatic speech, etc. The effect is very close to, or may actually be in some cases defined as, an oronym. There is some overlap also with the notion of an eggcorn (a malapropism/pun hybrid). The term mondegreen was suggested by US writer Sylvia Wright in a 1954 Harper's Magazine article, 'The Death of Lady Mondegreen', in which she referred to her own long-standing mistaken interpretation: 'And Lady Mondegreen' instead of the actual 'And laid him on the green' (being the last line of the first stanza of the 17th-century Scottish ballad 'The Bonny Earl O'Moray'). Mondegreens commonly arise in song lyrics because the art form is one which ordinarily contains lots of weird words and phrases anyway, and so the imagination requires very little stretching to accept even quite ridiculous misinterpretations. Popularly referenced mondegreens include the following (and amusingly the first two examples are said to have been encouraged by the singers themselves, who on occasions intentionally sang the mondegreen instead of the correct lyrics during live performances):

- 'There's a bathroom on the right,' instead of 'There's a bad moon on the rise,' in Creedence Clearwater Revival's 'Bad Moon Rising'.
- 'Excuse me while I kiss this guy,' instead of 'Excuse me while I kiss the sky,' in Jimi Hendrix's 'Purple Haze'.
- 'The ants are my friends,' instead of 'The answer my friend,' in Bob Dylan's 'Blowin' in the Wind'.
- 'I'm gonna f*** you,' instead of 'I'm gonna suck you,' in the play-out of T-Rex's 'Jeepster' (although Marc Bolan was arguably not attempting very hard to articulate an S instead of an F, and cynics might suggest that the preceding and somewhat incongruous line 'Girl I'm just a vampire for your love,' was merely a ploy to enable circumvention of the radio and TV censors with a hardly-disguised intentional obscene mondegreen).

monophthong - a single vowel sound - compared with a diphthong and triphthong. A monophthong is also called a pure vowel, because it is constant and involves no alteration in voicing. See also diphthongization and monophthongization, which is an extremely fundamental aspect of language development across the human race.
mora - a somewhat unscientific unit in phonology referring to and determining 'syllable weight' in words, which commonly determines stress or timing. There seems to be no absolute quantification of a mora, except that one mora is a short syllable and two or three 'morae' represent proportionally longer syllables. The term monomoraic refers to a syllable of one mora. Two morae is bimoraic. Three morae is trimoraic. The word mora is from Latin mora, linger or delay.

morpheme - a part of a word which contains a single meaning or specific linguistic purpose, including prefixes and suffixes, and which cannot be divided, for example, single words such as 'to', 'is', 'in', 'on', etc.; verbs such as 'go', 'come', 'take', 'find', etc.; nouns such as 'love', 'bread', 'deed', etc.; and elements which make up larger word constructions, for example the morpheme elements (separated by hyphens) in 'under-hand', or 'over-confident-ly', or 'un-flinch-ing-ly', etc. Morph means form in Greek. The 'eme' suffix derives from Greek phonema, meaning sound/speech, since morpheme follows the same structure as the French-English word phoneme (a differentiating sound in a word).

neo- - a word prefix meaning new or revived (notably referring to concepts, ideologies, etc), from Greek neos, new.

neologism - a new word, or (technically, in psychiatry) a made-up word used by a person or child. A neologism is often, although not necessarily, attributable to a particular originator, and generally is a word very recently introduced/adopted into conventional language and dictionaries, or with the potential to be (from Greek neos, new, and logos, speech). The word 'google', meaning to search the web using the Google search engine, is a type of neologism, based on eponymous principles. The word 'flup' (from 'full-up') is an example of a neologism resulting from contracted abbreviation, as is the word 'pram' (a contracted abbreviation of the original word 'perambulator'). There are many other sorts of neologisms, which are effectively different ways in which new words evolve or become newly established.

-ness - a common suffix which typically turns an adjective, or adverb, and sometimes a noun, into a noun which expresses a characteristic or state or measure of something. Obvious examples are words like happiness, sweetness, goodness, darkness, etc. In more modern times the 'ness' suffix is used to make new or made-up slang words, particularly for a specific situation, some of which can be quite amusing, or childish and silly, depending on your viewpoint, such as 'flatness of beer is a problem for drinkers who like froth', or 'over-eating produces a bigness of belly', or 'the workforce frequently suffered with can't-be-botheredness'. The 'ness' suffix originated in old Germanic languages. Other suffixes which achieve a similar effect are 'hood' (as in motherhood), 'th' (as in strength, from strong), and 'ity' (as in nudity).

neuter - in language neuter refers to a gender which is neither male nor female - from Latin ne, not, and uter, either.

noun - a word which names (is used for) something or someone, and which is not a pronoun. Variants are proper nouns (a name of a particular person or place, usually capitalized, e.g., John, Mary, Earth, Africa, Japan, etc), and noun phrases (see the next entry). Nouns other than these variants are also called 'common nouns'. From Latin nomen, name.

noun phrase - equating functionally to a noun, a noun phrase is two or more words which act as a noun, for example, 'leek and potato soup', or 'some green paint'.
A noun phrase may contain other noun phrases, for example, 'a two-litre pot of green paint', or 'the best days of our lives', or 'the shops which were open for business during the storm'. A noun phrase may be a subject or object or perform another nounal function in a sentence, for example: 'The touring party from Spain visiting Iceland (noun phrase subject) longed (verb) to go (infinitive verb) back (adverb) to (preposition) their homes in the warm sunny countryside (noun phrase object).'

object - in grammar an object is a noun or pronoun which is governed by a subject in a sentence, for example, 'the cat (subject) sat (verb) on (preposition) the mat (object)', or 'he (subject) kissed (verb) her (object)'.

-ology/-logy - a suffix which denotes a subject of study or interest.

onomatopoeia - a word or series of words which sounds like what it means or refers to, for example 'bang', 'cuckoo', 'sizzle', 'skating skilfully on ice'. Originally from Greek onoma, name, and poios, making.

-onym - the suffix 'onym' is very commonly featured in this glossary - it refers to a type of name, and specifically it refers to a word which has a relationship to another word. It is from the Greek word with the same meaning, onumon, from onoma, name.

oronym - a word, or more usually two or more words, which, typically by changing/moving the juncture (joint - pause or emphasis) between words/syllables, or creating a new break in the word, may produce (particularly audibly) a different expression or phrase and meaning. A commonly quoted example is the phrase 'I scream', which by moving the joint may sound instead as 'ice cream', and vice-versa. A well-known amusing example is 'four candles'/'fork handles'. Oronyms enable amusing wordplay with people's names, such as 'Teresa Green/Trees are green' and 'Ben Dover/Bend over', etc. The term oronym is said to have been devised by writer Gyles Brandreth in 1980, derived (very loosely indeed) from oral, meaning spoken rather than read/written, although the prefix 'oro' technically and somewhat misleadingly also implies association with the word mountain. Other examples: Beanstalk/Beans talk; New direction/Nude erection; the ironically juxtaposed Therapist/the rapist; and the famously rude: Whale oil beef hooked/'Well I'll be fooked', and the even ruder Antique hunt (work it out...). Some oronyms entail correct spellings of the alternative words/phrases, and/or related or ironic meanings, such as manslaughter/man's laughter. Oronyms that are wrongly interpreted from heard song lyrics and poetry, etc., may commonly also be referred to as mondegreens, which has a wider meaning. A popular and highly amusing category of oronyms is found among website domain names (URLs), which accidentally or intentionally contain a (usually rude or inappropriate and ironic) double-meaning, for example the now famous pen website 'penisland.com' (pen island/penis land); a forum for experts, 'expertsexchange.com'; and various websites dealing with therapy practitioners which use the oronym 'therapist' (therapist/the rapist). Website domain names (URLs) are especially prone to the oronymic effect because prime URL convention usually entails phrases without word-spaces. Other amusing, apparently (maybe) real examples of website name oronyms include: the Italian energy website 'powergenitalia.com'; the Dutch music festival 'hollandshitfestival.nl'; and the laugh-out-loud wonderfully named ring-tones website 'ringtoneshits4u.com'. There are many more.
The name 'slurl' (a portmanteau of slur and url) seems to have been devised for these amusing/offensive website oronyms c.2006, by writer Andy Geldman, featuring in his book and website 'Slurls'.

orthonym - the real name of someone or something, opposite to a pseudonym.

oxymoron - a contradiction in terms, typically contained in a very short phrase or expression, such as (and including some very well-established expressions): accidentally on purpose, alone in a crowd, bitter sweet, controlled chaos, deafening silence, open secret, sweet sorrow, tough love, etc. Oxymorons may also be unintentional and result from confused or rushed thinking/speaking.

palindrome - a word or phrase which reads the same backwards as forwards, for example 'madam', 'nurses run', and 'never odd or even'. Palindromes tend to become increasingly daft and nonsensical with greater length, for example, 'Was it a car or a cat I saw?', or 'Eva, can I stab bats in a cave?', and 'Mr Owl ate my metal worm', and 'Do geese see God?' Generally palindrome phrases do not require that punctuation is reversible too. Palindrome may also refer to reversible numbers, notably numerical dates, for example 31.3.13 (UK date format).

pangram/perfect pangram - a pangram is a sentence containing every letter of the alphabet - typically a short one used in testing or demonstrating text-based communications equipment, material, typefaces, etc. Alternatively called a 'holoalphabetic sentence', the most famous and early English example is: 'The quick brown fox jumps over the lazy dog', at 35 letters (which can be shortened to 33 letters by using 'A' instead of the first 'The'). A 'perfect pangram' is a sentence containing each letter of the alphabet once only, i.e., just 26 letters. Besides offering minuscule testing efficiencies, a 'perfect pangram' is mostly a curiosity and creative challenge for language enthusiasts, although no one seems yet to have devised a 'perfect pangram' which makes actual sense. Wikipedia's best example (2014) is 'Cwm fjord bank glyphs vext quiz', which definitely requires the translation: 'Carved symbols in a mountain hollow on the bank of an inlet irritated an eccentric person' ('cwm' being technically a word borrowed from Welsh, meaning a steep valley). The best example of a 'perfect pangram' which contains abbreviated recognizable dictionary 'proper name' initials and other abbreviations is probably: 'JFK got my VHS, PC and XLR web quiz'. Perfect pangrams which contain abbreviations and/or punctuation seem to attract less respect; however, perhaps the shortest easily understood pangram is the impressive 29-letter: 'Bright vixens jump; dozy fowl quack', whose meaning is easily within the grasp of most children. 'Big fjords vex quick waltz nymph' is only 27 letters and maybe the best of the very short pangrams, but actually makes no sense at all. The 36-letter pangram 'Pack my red box with five dozen quality jugs' is a pleasingly sensible modern alternative to 'The quick brown fox...' The shorter but utterly idiotic 31-letter 'Jackdaws love my big sphinx of quartz', and 'Five quacking zephyrs jolt my wax bed', have been used respectively by Microsoft and Apple operating systems in displaying fonts. Quite separately, many ordinary pangrams in non-English languages produce delightful translations into English (N.B.
The non-English language versions are the pangrams, not the English translations given here), and prove that the pangram fascination is truly international, for example: 'A hiccoughing dragon spits at a driver who has reached someone else's campsite' (Bulgarian); 'Wrong practising of xylophone music bothers every larger dwarf' (German); 'A dust bat escaped through the air conditioner, which exploded due to the heat' (Hebrew); 'Lunch of water makes lopsided faces' (Italian); and the wonderful Polish perfect pangram: 'Go to the dungeon to batter the marital goose of doorframes' ('Pójdź w loch zbić małżeńską gęś futryn!'). I am open to all sorts of suggestions on this subject, especially an English perfect pangram which makes perfect sense...
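Both the palindrome and the pangram entries above describe properties that are easy to test mechanically, which is one reason they appeal to puzzle enthusiasts. A minimal illustrative sketch in Python, checking both properties while ignoring punctuation, spaces and case, as the entries above suggest:

```python
import string

def is_palindrome(text):
    """Reads the same backwards as forwards, ignoring case and punctuation."""
    letters = [c.lower() for c in text if c.isalnum()]
    return letters == letters[::-1]

def is_pangram(text):
    """Contains every letter of the alphabet at least once."""
    return set(string.ascii_lowercase) <= set(text.lower())

print(is_palindrome("Was it a car or a cat I saw?"))              # True
print(is_pangram("The quick brown fox jumps over the lazy dog"))  # True
```

A 'perfect pangram' check would additionally require each letter to appear exactly once, e.g., exactly 26 letters in total.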
para- - a very popular and widely used prefix, meaning originally besides or next to, and especially nowadays 'analogous to' (the word it prefixes), in the sense that something is different to but similar to something else, as in paramilitary or paramedic. From Greek para, meaning beside.

paradox - a phrase, statement, or situation which contains seemingly irreconcilable or contradictory elements, and may actually be truthful or a fact, for example 'men and women can't live without each other, yet cannot live with each other', or 'people smoke tobacco in full knowledge that it is harming them', or 'a big fire burns out quicker than a little fire', or 'young men yearn to grow beards, but men grow to hate shaving'. The word paradox is Latin, originally referring in English (1500s) to a statement that opposed accepted opinion, from Greek paradoxon, contrary opinion, from para, distinct from, and doxa, opinion.

paragraph - a connected and related series of sentences, traditionally signified by an indented first line and/or an enlarged/decorated first letter, and/or a numbered or bullet point, and a line-break at the end of the last sentence. Modern styling increasingly does not feature the first-line indent. The term paragraph is often abbreviated by writers and editors, etc., to 'para'. A paragraph may contain just one sentence or very many sentences. This glossary contains entries which each may be termed a paragraph. The word paragraph is from Greek para, beside, and graphos, written/writing.

paralipsis - a rhetorical technique whereby a (usually negative) feature is raised/exploited by stating that it is not being so exploited. For example, 'I would not stoop so low as to exploit his past infidelities...' It's the same as praeteritio. A common retort to a speaker obviously using paralipsis, i.e., making a point while denying that the point is being made, is to say, 'But you just did...'

paronomasia - refers to the use or effect of a pun - where a double-meaning or 'double-entendre' of two same-spelling words or similar word sounds produces an amusing or clever or ironic effect. From 'para', Greek for 'besides', used to refer to something resembling another, or an alternative, and 'onomasia', meaning 'naming', in turn from 'onoma' meaning 'name'. See the puns and double-meanings collection.

paronym/paranym - a word which in relation to another word is from the same word root, and which has similar or related meaning and also which usually sounds similar, or a word which is derived from a foreign word and which retains similar meaning, form and sound, for example: kind and kindly; quiet and quiescent (both of which derive from Latin quies, meaning being still or quiet). Para is Greek for beside.

passage - a short extract or section of words, spoken or in text form, typically anything in length from a single sentence upwards to a number of paragraphs.

passive - in grammar, applying to a verb's diathesis/voice, passive (contrasting with its opposite 'active') generally means that the subject receives or experiences the action of the verb - for example, 'Dinner (subject) was cooked (verb) by the chef (agent)' (passive voice/diathesis), rather than the active voice/diathesis: 'The chef (subject) cooked (verb) dinner (object)'.

pathos - a sad quality of language, especially dramatic or poetic, typically intended by the writer/speaker to make the reader/audience feel pity, sympathy, emotional, weepy, upset, etc. From Greek pathos, suffering.

patronym - a name derived from a father or other male ancestor, from Greek pater, father. See also matronym.

person - in the context of grammar and language, 'person' refers to the classification/usage of pronouns, possessive determiners (indicating whom things/actions 'belong' to), and verb forms, according to whether they indicate the first person (speaker/writer, i.e., 'I', 'me', 'us'), the second person (the 'addressee' or person being spoken/written to, i.e., 'you', singular or plural), or the third person (the 'third party', i.e., 'he', 'she', 'it', 'they'). When we write/speak in the 'first person' we write/say '...I (or we) did or saw or gave or said, etc (this or that, whatever)', and we refer to 'me' and 'mine' or 'us' and 'ours'. When we write/speak in the 'second person' we write/say '...you did or saw or gave or said, etc (this, that, whatever)', and we refer to 'your' and 'yours'. When we write/speak in the 'third person' we write/say '...it was or is, etc', or 'he/she was or is, etc', or 'they were or are, etc'. The sense of 'person', and its effect on verbs, also extends to singularity and plurality, for example the differentiation between 'I' and 'we' (respectively first person singular and plural), and 'he/she/it' and 'they' (respectively third person singular and plural). In English the word 'you' acts as both second person singular and plural, although in many other languages these would be different words.

phonation - the specific aspect of linguistics which is concerned with the way that sounds are 'voiced' using potentially extremely subtle control (or entailing involuntary effects) of airflow and the shape/flexing of bodily tissue in the mouth area, notably the vocal cords (vocal folds) and also (depending on precise and alternative definitions) the related vocal body-parts, so as to alter the sounds of vowels, consonants and other vocal effects. Human beings have dramatically wide-ranging control over the way they 'voice' word-sounds, especially vowels, by controlling the vocal cords and larynx (voice-box), and generally phonation refers to the study of this and the bodily processes entailed.

phoneme - any unit of sound in a language which enables word sounds - (that's sounds, not spellings) - to be differentiated, for example, simply the different letter sounds p and b (in differentiating pull and bull), and c, g and j (in differentiating cut, gut and jut). The subtleties of phonemic theory are not difficult to understand - they are simply the individual sounds which make words sound different - although the detailed explanation of these effects via text-based information is only possible using quite complex phonetic symbols. The word phoneme is French, from Greek phonema, meaning speech/sound.
See also morpheme, which is a single indivisible unit of linguistic meaning or purpose. phonetics - the study/science of speech sounds. Phonetics particularly refers to the very detailed sounds of words and syllables, letters, vowels, consonants, etc., and other smaller vocalized effects which together form words and connections between words. From Greek phone, meaning sound or voice. phonology - an aspect of linguistics which entails the organization, use, workings, etc., of sounds in languages. From Greek phone, meaning sound or voice. phrase - a somewhat vague and widely used term which refers to a short passage of words, typically between three and five or six words in length, or technically just one word upwards to (far more rarely, in theory) ten or a dozen words, provided that the meaning is limited to a single concept or expression of some sort. A phrase is technically a single concept or notion: a brief instruction, exclamation, statement, or question, and very commonly part of a sentence. Phrases may be written or spoken, and feature fundamentally in every sort of word-based communication. If a passage of words can be split into more than one set of words which each carries an independent 'stand-alone' conceptual meaning, and especially if the passage is punctuated, then the combined passage is probably, theoretically, bigger than a phrase, and is usually called a sentence or a clause. This sentence is an example of a phrase. So is this one. Separated by this comma, this sentence contains two phrases. Less technically, however, many people would describe the previous sentence as a single phrase. The term is therefore potentially ambiguous when applied to short punctuated sentences. In common use the term phrase is frequently incorrectly applied to quite long passages or sentences, or even short paragraphs, so clarification is required where the use of the term 'phrase' has legal or other serious implications. A one-word phrase is, for example, 'Go' or 'Stop' or 'Why?'. A two-word phrase is, for example, 'No smoking' or 'Keep calm' or 'Maybe tomorrow'. Technically, very long phrases are difficult to conceive, other than long lists of single items. The word phrase derives from Greek phrazein, to declare. See 'turn of phrase'. phrase book - a common term for a particularly light and selective type of foreign-language translation dictionary, originally and specifically referring to a small or pocket volume containing only common words and phrases that are helpful for travellers/tourists, as distinct from a larger conventional translation dictionary for students of the language concerned. pilcrow - the typographical symbol (¶) for a paragraph. It is sometimes found in edited and published texts, although it usually exists purely as a typographical marking, and also in normally hidden computer code, where it usually equates to a 'carriage return' (a typewriter action to begin a new line). The origins of the pilcrow symbol and name are subject to different opinions - possibly from French 'pelagraphe', paragraph, or more poetically from 'pulled (plucked) crow'. The symbol seems to have evolved from a C with a slash through it denoting a chapter (Latin, capitulum), perhaps with other influences from old C and slash marks made in manuscripts by scribes long ago. pitch - the quality of vocal sound according to wavelength, i.e., the extent of high or low note range in the sound of the voice.
The term pitch has more recently also developed to mean directing a talk or presentation at a particular audience, as both a verb and a noun, e.g., 'he pitched an idea' and 'a sales pitch'. Pitch may also refer to the nature or quality of the style or attitude of a communication. placeholder name - a substitute word (for example 'whatjamacallit', 'thingy', 'widget', 'thingamajig', 'oojamaflip', 'gizmo', etc.), commonly a 'nonsense' or childish word, for anything or anyone which for whatever reason is not or cannot be accurately named or remembered. The most popular examples according to Google 'hits' by the end of the first decade of the 2000s were: widget, hickey, gizmo, thingy, gimmick, thingie, jigger, gismo, gubbins, whatsit, thingamajig, doodad, whachamacallit, whatchamacallit, doohickey, thingo, thingamabob, thingummy, whatsis, dohickey, thingumajig, whatsaname, thingumabob, whachacallit, whatchacallit, thingmabob, dojigger, thingmajig, thingummyjig, kajigger, dooverlacky, doovalacky, doofer. Technically the use of a placeholder name is metasyntactic, and a placeholder name is a metasyntactic variable, which is defined very well for linguistics in the term's usual computing field as: "...a conventional variable name used for an unspecified entity whose exact nature depends on context..." (a short code sketch follows the entries below). places of articulation - also called 'points of articulation', this technical linguistics term refers to the mouth-parts involved in articulation (the control of speech sounds, especially consonants, via airflow through points of articulation, i.e., mouth/vocal organs/parts by which sounds can be produced/altered). Linguistics theory generally lists about twenty places/points of articulation in and close to the human mouth, many of which involve the tongue position. Generally points 1-11 are considered passive (they don't move much and are acted upon) whereas points 12-20 are active (mostly moving and acting on other parts). These are the typically stepped points, although there is actually a continuum of infinite points between each of these main points, producing an infinite variety of sounds:
- Exo-labial - outer upper lip
- Endo-labial - inner upper lip
- Dental - upper teeth
- Alveolar - gum ridge just behind the teeth
- Post-alveolar - ridge before the roof of the mouth
- Pre-palatal - front of the roof
- Palatal - roof
- Velar - back of the roof
- Uvular - the hanging blob (uvula)
- Pharyngeal - top of the throat (pharynx)
- Glottal - the glottis, at the windpipe entry
- Epiglottal - the flap (epiglottis) at the tongue-base and larynx entry
- Radical - tongue root
- Antero-dorsal - front of the tongue body
- Postero-dorsal - back of the tongue body
- Laminal - tongue-blade
- Apical - tongue tip
- Sub-apical - under-tongue
- Endo-labial - inner lower lip
- Exo-labial - outer lower lip
plagiarism - the act of copying someone's creative (usually written) work or idea and claiming it as your own, more commonly known as 'passing off'. Plagiarism is from Latin plagium, 'a kidnapping', in turn from the Greek word plagion for the same. See also copyright. plural - in language and grammar this contrasts with singular, and refers to there being more than one (typically person/noun/pronoun) and the effect such plurality has on verb forms, and to a far lesser extent in English on adjectives, although in other languages many or all adjectives vary according to singularity or plurality. poly- - a widely occurring prefix, meaning many or much, from Greek polus, much, and polloi, many. polysemy - the existence of many possible meanings for the same word or phrase (from Greek poly, many, and sema, sign).
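As a side note to the computing sense of 'placeholder name' above, here is a minimal sketch (Python; 'foo', 'bar', 'baz' and 'widget' are deliberately arbitrary, hypothetical names) showing metasyntactic variables in their natural habitat:

```python
# 'foo', 'bar' and 'widget' carry no meaning of their own - they are
# placeholders for "some function" and "some values", exactly as
# 'thingamajig' stands in for a forgotten noun in speech.
def foo(bar, baz):
    """A deliberately meaningless example: the names are pure placeholders."""
    return bar + baz

widget = foo(2, 3)  # 'widget' names "some result", nothing more
print(widget)       # -> 5
```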
polysyllabic - refers to a word of more than two syllables (definitions vary; sometimes simply more than one), from Greek poly, many. portmanteau/portmanteau word - a word made by combining two words, whose combination refers to the sense or meaning of the new word - for example smog (from smoke and fog), muppet (from marionette and puppet), and brunch (from breakfast and lunch). There are hundreds more examples, many of them very clever and amusing. The word portmanteau is French and is a metaphorical reference to a 'portmanteau', a double-sectioned case for carrying a cloak, from the separate French words porter (to carry) and manteau (cloak) - see portmanteau in the cliches origins listing for more details of origin and examples. praeteritio - drawing attention to something by saying that you will not mention/exploit/be influenced by it, for example '...let us ignore the fact that he spent time in prison...' or '...he is unsuitable for the post for many reasons aside from considering his earlier bankruptcy...'. Praeteritio (pronounced 'praterishio') is a speech-writing/speaking technique, typically used cynically and negatively, sometimes humorously, for a critical purpose against a political or business opponent (individual/group/organization). In political situations praeteritio can be a very subtle method of inferring inferiority or incompetence in a competitor, and at the same time implying negative conduct among other competitors, for example, '...while others refer at length to his criminal past, I say his lack of experience and qualification alone render him the wrong person for the job...' The idiomatic '...not to mention...' is technically an introduction to a praeteritious comment, although the expression is not generally regarded as such in common speech. Praeteritio may also be used for positive aims, for example, '...I am not claiming to be the best candidate by virtue of my previous highly successful record - please forget this; I am the best candidate because I have proven credentials, the best team, and our plans have the most popular support...' Praeteritio has many equivalent terms: paralipsis/paralepsis, preterition, cataphasis, antiphrasis, and parasiopesis. Paralipsis is probably the most common of the alternative terms. predicate - the part of a phrase or sentence which contains a verb and some information about the subject. preposition - prepositions are connecting positioning/relationship words like: in, on, of, to, with, under, etc. A preposition expresses a relationship between two other words or concepts, typically (but not always) appearing before a noun or pronoun object so as to position a preceding subject noun or pronoun and its action (verb) in relation to the object noun concerned, for example 'the cat sat on the mat' ('on' is the preposition), or 'she climbed down the ladder' ('down' is the preposition), or 'she bought it for me' ('for' is the preposition). Prepositions do not necessarily appear between subject and object, for example in the phrases 'the world (object) we (subject) live (verb) in (preposition)', or 'in (preposition) which world (object) we (subject) live (verb)'. Historically, conventional English rules asserted that a sentence should not end with a preposition, for example, 'What did you go there for?', although nowadays this is not generally thought to be incorrect grammar. Examples of prepositions are: to, on, over, of, out, for, upon, in, with, against, up, under, between, etc. The word derives from its logical meaning, i.e., pre, before, and position, to place.
- A preposition curiosity: Can you think of a proper meaningful sentence that finishes with seven consecutive prepositions? First the scene-setter: a mother goes downstairs to find a book for her son's bedtime story. When she returns with a book about Australia, her son says, "Why did you get a book to read out of about Down Under up for?" (In this context 'Down Under' is technically a noun, but it's still a clever and amusing word puzzle.) prefix - a word-part that has been/is added to the front of a word or word stem, such as 'pre' (meaning before, as in prefix and prequalify), 'mis' (meaning wrongly, as in misbehave, mistake, etc.), 'anti' (meaning against, as in antifreeze, or antidisestablishmentarianism), and 'homo' (meaning same, as in homogeneous, homosexual; although confusingly 'Homo sapiens' is Latin, meaning literally 'wise man'). See also suffix, which is a word-ending. In recent years 'i' and 'e' have become very widely seen prefixes referring to 'internet' and 'electronic', for example the Apple brands iPhone, iTunes, etc., and the generic terms e-book and email. Understanding prefixes is helpful for interpreting the meaning of new words. For example see poly-, and hyper-/hypo-. pronoun - a word which acts instead of a noun - for example, you, me, it, this, that, etc. From Latin pro, 'for, on behalf of', and noun. proper noun - a name (i.e., noun) for a particular person or place or other entity, such as a brand name or corporation, which usually warrants a capitalized first letter, for example, Rome, Caesar, Jesus, Scrabble, Texaco, etc. proto- - a prefix meaning first, as in prototype, from Greek protos, first. pseudepigrapha/pseudepigraph - literary or written works which claim to have been created by a notable author, but which are basically fake, much like an artwork painted in the style of a famous artist including a forged signature. pseudo- - a prefix referring to a false or artificial version of something, from Greek pseudes, false. The pseudo prefix is commonly added to all sorts of terms to refer to a fake or imitation, especially of something normally quite serious and well-qualified, for example, pseudo-science, or pseudo-intellectual. pseudonym - an alternative name for a person or group, thing, etc., adopted usually to avoid using/revealing the true name and for marketing/image purposes, or given by others for various reasons, because the pseudonym is considered more appropriate, or simply because it is easier to pronounce and remember, or translates better internationally. Pseudonyms are most commonly associated with authors/writers (for whom they are called pen names), but pseudonyms can instead be stage names or screen names (of actors), aliases (also expressed as 'aka' = 'also known as' - often associated with criminals), nicknames (particularly those that are widely used and recognized), usernames, names of titled people or officials, monarchs, and popes, etc. Examples of pseudonyms are: John le Carré, George Orwell, Joseph Conrad, Lewis Carroll, Mark Twain, Pope Francis, C. S. Forester, John Wayne, Marilyn Monroe, Ellery Queen (actually two authors using a single pseudonym), Elizabeth R, Pelé, George Eliot (actually a woman using a male pseudonym), Scary Spice, Ayn Rand, etc. There are thousands of them. A true name is called an orthonym. Pseudonym is from Greek pseudes, meaning false.
pun - also called paronomasia, a pun refers to a double-meaning, where a word is used instead of another, more obviously contextual word which has very similar or the same sound, may or may not have different spelling, and has a different yet related meaning. The famous quote 'Time flies like an arrow; fruit flies like a banana' features the pun on the word 'flies'. The quote 'A broken window is a pain' features the pun of 'pain' with window 'pane'. Puns may also feature more than one word as the substitute and/or substituted words, for example 'If a leopard could cook would he ever change his pots?', where 'his pots' is punned with 'his spots'. Puns may also entail phrases, for example 'Cadaver industry regulation - bodies are weak and lack teeth', where 'bodies are weak and lack teeth' refers both to decaying corpses and also to regulatory bodies lacking power and authority. For more examples see the puns and double-meanings collection. punctuation - marks in writing, such as commas, full-stops (periods), question marks, etc., which indicate separations, pauses, emphasis, status, mood, ownership, etc., and which overall guide the reader/speaker as to the flow, meaning, context, etc., of the text concerned. Punctuation differs from diacritical marks, which indicate letter/word-sound pronunciation. Here are the main examples of punctuation, and some other marks which have a punctuating or similar effect in language:

| Mark | Symbol | Purpose |
|---|---|---|
| full-stop/period | . | Ends a sentence; a significant pause before resuming the next sentence. |
| comma | , | Ends a phrase; a slight pause; connects phrases or listed items. |
| semicolon | ; | Ends a phrase; a longer pause than a comma, shorter than a period. |
| colon | : | Prefaces a list, example, quote or other referenced item, with a pause equating to a semicolon. |
| question mark | ? | Prompts or demands an answer or consideration at the end of a phrase. |
| exclamation mark | ! | Adds emphasis at the end of a phrase; denotes loud speech, surprise or indignation. |
| hyphen/dash | - or — | Connects hyphenated words or prefixes or suffixes; an alternative to brackets surrounding a phrase; an alternative to a comma or semicolon; an alternative to the word 'to' in dates and times, etc. |
| apostrophe | ' or ’ | Denotes ownership or missing letters, or an alternative to speech marks. The slanted style is the older, traditional design. |
| speech/quotation marks | " " or “ ” | Surround and denote speech, quotes or extracted content. The slanted style is the older traditional design, sometimes called 66 99; the designs are respectively called 'open quotes' and 'close quotes'. |
| paragraph | line-break and indent | Not a punctuation symbol, but still punctuation, for breaking separate passages; a longer pause than a period. The first line of the new paragraph is usually indented. |
| brackets | ( ) [ ] | Surround and denote relevant or helpful supplementary or incidental information which is usually not crucial to the main point. |
| ditto mark | " or - " - | Appears in columns and lists signifying ditto, i.e., 'same as above'. |
| slash/virgule | / | An alternative for 'or'; an alternative for 'and' (in a combined sense); denotes abbreviation of a two-letter term (e.g., w/e for weekend or week ending); internet address file/directory separator; indicator of a line-break in typographical mark-up instructions/notes; signifies 'divided by' in mathematics; and various others. Also called solidus, stroke, forward slash and more - it's a very useful and powerful symbol. |
| backslash | \ | Far less common in typography and writing, but increasingly common in computerized communications, notably in file and directory separators. |
| underline/underscore | _ or ___ | Adds emphasis to an underlined passage. The single underscore symbol is used as an alternative to the hyphen to make continuous unbroken filenames and other electronic data. |
| asterisk(s) | * or ** | Indicates that a related note appears later in the text, which is also marked by an asterisk. Where the technique is soon repeated, two asterisks are used, and so on, to avoid confusion. Asterisks are also used as replacement letters in offensive words by some publications. |
| guillemets/angle quotes/French quotes | « » | Surround and denote speech or quotes in some non-English languages, as alternative speech marks. Named after the French printer Guillaume Le Bé (1525-98). |

reduplication - in language, reduplication refers to the repeating of a syllable or sound, or a similar sound, to produce a word or phrase. For example: mumbo-jumbo, higgledy-piggledy, helter-skelter, reet-petite, easy-peasy, maybe-baby, bananarama, tutti-frutti, see-saw, curly-wurly, scooby-doo, looby-loo, hurly-burly, pac-a-mac, touchy-feely, in it to win it, etc. Unavoidably all examples of reduplication are also examples of alliteration, although many examples of alliteration are not reduplication. Reduplication generally entails the repeating of larger word-sections than alliteration. rhetoric - writing or speech for persuasive or impactful effect. Typical users of rhetoric are salespeople, politicians, leaders, teachers, etc. The term 'rhetorical question' means a question designed to produce an effect - typically to make a statement or point - rather than seeking an answer or information. The word is from ancient Greek rhetor, an orator or teacher of persuasive effective speaking. rights-holder - the owner of legal rights (i.e., control, usually by virtue of creation and/or ownership), such as copyright or other intellectual property. rubric - a document heading or a set of instructions or rules, or a statement of purpose. Rubric generally refers to headings/rules contained in formal documents, for example in examination papers, or processes stipulated by an authority of some sort, for example the instructions on a parking penalty ticket, or on licensing applications. The origins of the word are fascinating: in Roman Latin 'rubeus' meant red, and 'rubrica terra' referred to the 'red earth' and its derivative material used to make an early form of ink. Roman practice was to use red ink for laws and rules, which established the association between red 'rubrica' ink and formal written instructions. sarcasm - cynical or sceptical understatement (including litotes), overstatement, statement of the obvious, exaggeration, or irony used for negative effect, for example to mock, criticize, ridicule, patronize, insult, or make fun of someone or something. Sarcasm may be characterized by the tone of voice more than the words themselves. Context is generally crucial to appreciating sarcasm. semantic/semantics - semantic refers to the meaning of language, or less typically the meaning of logic. The word is commonly used to clarify that a disagreement might be semantic, or a matter of semantics (interpretation of the meaning of the words used to frame the argument), rather than a true disagreement about the matter itself.
For example, it can be difficult to agree training methods with another person until semantic agreement is first established about the word 'training', i.e., whether 'training' refers to skills, knowledge, attitude, etc. semiotics/semiology - semiotics is the study of how meaning is conveyed through language and non-language signage such as symbols, stories, and anything else that conveys a meaning that can be understood by people. Semiotics relates to linguistics (language structure and meaning), and more broadly encompasses linguistics and all other signage, metaphor and symbolism. The processing aspect of semiotics is called semiosis. Semiotics features strongly in the form of Stimulus Response Compatibility in Nudge theory. Within semiotics, the arrangement of words is called syntax, and its study/science is called syntactics. Semiotics contains logic and anthropological factors [humankind], i.e., effects are based on unchanging logic (for example, big is generally more impactful than small), and also on human factors such as genetics, evolution, culture, and conditioning. sentence - a sentence is usually a string of words which contains (as a minimum) a complete and grammatically correct statement, question, command, etc., typically including a predicate and subject, for example (a very short one): "I ate." (In this extremely short example, 'I' is the subject, and 'ate' informs the reader/listener about the subject.) Technically, depending on context, a single word may be considered to be a sentence, for example: "Why?" and "Yes." These single words can be described as sentences because they stand alone as complete and grammatically correct statements. A longer example of a sentence, entailing lots of punctuation, is: "We ate a meal at a restaurant, of fish landed in the local port, and vegetables grown in the restaurant garden - all washed down by wine produced in a nearby vineyard; made especially memorable by the wonderful music, hospitality, and attention of our hosts." singular - in language and grammar this contrasts with plural, and refers to there being only one (typically person/noun/pronoun) and the effect such singularity has on verb forms, and to a far lesser extent in English on adjectives, although in other languages many or all adjectives vary according to singularity or plurality. simile - a descriptive technique in writing, speaking, communicating, etc., by which something is compared symbolically to something else of more dramatic effect or imagery, for example, 'cold as ice', 'quiet as a mouse', 'tough as old boots', etc. The word 'as' is common in similes, or often a simile is constructed using the word 'like', for example, 'the snow fell like tiny silver stars', or 'he ordered food from the menu like he had not eaten for a month'. A simile is similar to a metaphor, except that a simile uses a word such as 'as' or 'like' to make it a comparison, albeit potentially highly exaggerated, whereas a metaphor is a literal statement which cannot possibly be true. 'He fought like a lion' is a simile, whereas 'He was a lion fighting' is a metaphor. The word simile is from Latin similis, like. slang - informal language, typically understood by a group of people and not necessarily understood well or at all by others outside the group, primarily used in speech and far less commonly written. Examples are individual slang words, and entire 'coded' languages, such as backslang and cockney rhyming slang.
sheva/shva - a phonetically neutral short vowel sound, for example at the end of the word 'sofa' - rather like a very short 'eh' or 'ah' - the same as a schwa or sh'wa; all are originally from the Hebrew language. snake_case - compound words joined by underscores, which has become popular in computer text due to the benefits of avoiding gaps in filenames, domain names and URLs (website/webpage addresses), etc. See also CamelCase - no spaces, differentiation via capitals - 'camel' alludes to the humpy word-shapes (a short code sketch appears after the 'synecdoche' entry below). spoonerism - an accidental or intended inversion or exchange of word sounds between two words which produces two new words which may or may not be intelligible, and which is usually thought amusing. A long-standing example is that of "...a cat popping on its draws..." (instead of 'dropping on its paws'). The effect is named after Reverend William Archibald Spooner (1844-1930), a warden of New College, Oxford, who has long been said to have been prone to the error. A spoonerism is apparently also known (very rarely) as a marrowsky, supposedly after a Polish count reputed to be similarly afflicted. See more details of origins and examples of funny spoonerisms in the cliches and word origins listing. stem - the stem of a word - a 'word-stem' - is the main part or root of a word to which other parts such as a prefix and/or suffix are added. For an extreme example, the stem of the word 'antidisestablishmentarianism' is 'establish'. stress - in detailed linguistics, and especially phonetics, stress equates to the emphasis given to a syllable or syllables or other speech sounds within a word or words, to determine or alter pronunciation or control other audible effects of a word. Separately and more generally, stress in language has an additional meaning, referring to placing emphasis on a particular word or phrase, as would be shown by emboldening or capitalizing the stressed sections of a passage of text. subject - in grammar a subject is a noun or pronoun which governs (does something to or in relation to) an object in a sentence, for example, 'the lion (subject) chased (verb) the zebra (object)', or 'we (subject) crossed (verb) over (preposition) the road (object)'. suffix - a word-ending, which may have a word-meaning in its own right but more commonly does not, and is commonly from Latin or Greek, and acts as a combination-part in building words and their meaning. There are many thousands of examples of suffixes, and almost unavoidably virtually any word of more than one syllable contains a suffix, and very many words of a single syllable contain a suffix too. Many suffixes alter the sense or tense of a word; for example, the simple 's' suffix is used in English to denote plural. The 'x' suffix denotes a plural in many French-English words. The 'ness' suffix (origin old Germanic) refers to the state or a measure of a (typically adjective) term, enabling it to be expressed as a feature or characteristic, for example, boldness, happiness, rudeness, etc. The suffix 'tomy' refers to many surgical processes. The suffix 'ation' is very common - it turns a verb into a noun (for example examination, explanation, and the term 'perturbation', recently popular among financial-markets commentators). The 'age' suffix is another which develops a word to express a measurable degree. Not surprisingly the suffix 'onym' features perhaps more commonly in this glossary than you will ever encounter it elsewhere, because it means a type of name, and specifically a word which has a relationship to another.
Very many words, formed as combinations or contractions of two words, entail the use of the first word as a prefix and the second word as a suffix, for example obvious combination words such as breakfast, cupboard, forehead, railway, television, aeroplane, saucepan, etc., and less obvious combination words like window, and many thousands more. See also prefix, which is a morpheme or larger word-part acting as a word-beginning. syllable - a single unit of pronunciation typically comprising a vowel sound, with or without one or two consonants - perhaps best illustrated by examples of single-syllable words: and, to, in, of, we, us, but, grab, grabbed, yacht, reach, reached, strings, etc.; two-syllable words such as: baby, table, angry, frightened, tangled, enraged, etc.; and three-syllable words such as: holiday, enemy, ebony. As you can see, the number of letters and word-parts (morphemes) does not determine the number of syllables. For example the word 'antidisestablishmentarianism' has eleven syllables and only 28 letters. The following words each have ten or more letters yet only one syllable: scraunched (the sound of walking on gravel); schmaltzed (imparted sentimentality); scroonched (squeezed); schrootched (crouched); and strengthed (an old variant of strengthened). The word syllable is from Greek sullabe, from sun, together, and lambanein, take. syllogism - a proposition in which a conclusion or 'fact' is inferred from two or more related 'facts'. For example: big cats are dangerous; a lion is a big cat; (therefore) lions are dangerous. Or: diamonds are precious gems; precious gems are sometimes stolen; (therefore) diamonds are sometimes stolen. A syllogism may comprise more than two 'facts' which together support the conclusion, for example: a mouse is bigger than a fly; a cat is bigger than a mouse; a horse is bigger than a cat; an elephant is bigger than a horse; (therefore) an elephant is bigger than a fly (and so is a horse and a cat). synonym - a word or phrase which means the same as, or equates to, another, for example, high and tall, or round and circular; or a word or phrase which is used to represent, characterize, or allude to another, for example, 'the swinging 60s' synonymously refers to the optimism and liberated lifestyle of that time, and the term 'nuts and bolts' is used as a synonym for the technical details of a project or plan (from Greek sunonumon, from sun, with, and onuma, name). See also antonym, a word which means the opposite of another. syntactics - the study/science of the arrangement of words within language, and especially within sentences which seek to convey clear meaning. The arrangement of words is called syntax, which is the root word of syntactics. syntax - syntax refers technically to how words and phrases are structured to form sentences and statements, and more generally to the study of language structure. The word is very logically derived from Greek suntaksis, from sun, together, and taksis, arrangement, from tasso, I arrange. synecdoche - a word or possibly short phrase which refers to people or things in a figurative sense, based on a significant component or effect found in the thing it represents, for example referring to sailors as 'hands', or cowboys as 'guns', or group members as 'heads', or lookouts as 'eyes and ears'.
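Returning briefly to the 'snake_case' and 'CamelCase' entries above, a minimal sketch of converting between the two naming conventions (Python; the function names are my own, hypothetical choices):

```python
import re

def to_snake_case(name: str) -> str:
    # Insert an underscore before each internal capital letter, then lower-case.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def to_camel_case(name: str) -> str:
    # Capitalize each underscore-separated word and join with no separator.
    return "".join(word.capitalize() for word in name.split("_"))

print(to_snake_case("MyFileName"))    # -> my_file_name
print(to_camel_case("my_file_name"))  # -> MyFileName
```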
tautology - this has two main meanings. First, and simplest (sometimes called the semantic meaning), a tautology is a statement in which a point or description is repeated using different wording, usually considered grammatically incorrect (not factually incorrect), or at best clumsy and an inefficient use of language, for example: "They arrived together at the same time...", or "An empty void...", or the very common "At this moment in time...", or "The incredible achievement defied belief...", or "The eggs and milk were combined together...". Usually the words 'and' and 'also' next to each other in a statement produce a very simple tautology (because 'also' and 'and' mean the same and so together represent an unnecessary repeat of the same thing). Where the repeat (tautology) is for stylistic or dramatic effect, for example "The last, final breath...", the tautology is more acceptable and may not be considered poor grammar. A tautology used for dramatic effect is similar to hendiadys. Second (in a more theoretical or scientific context, sometimes called the logical or rhetorical tautology), a tautology is a lot more complex and potentially so difficult to explain that people may resort to using algebraic equations. A simple example is a statement containing a claim whose validity is dependent on repeating the same point within the statement, or, expressed another way, a statement which is valid by virtue of the claims or assumptions within it, for example, "Civilizations have always sought to gather and protect gold because it is so valuable and desirable..." (We can neither argue with this, nor prove it beyond the limits of its own assumptions.) There are more complex mathematical and scientific interpretations of a tautology that cannot be explained here, because this glossary is mainly concerned with grammar and day-to-day communications rather than scientific applications (and also because the complicated interpretations completely baffle me, as well as most other people aside from mathematicians). Whatever, tautologies at a simple level are particularly fascinating because they are used (and accepted without question by most audiences) extremely frequently in political statements and media commentaries. Tautologies are commonly used to persuade others by weight of argument rather than substance. Perhaps the biggest example of a persuasive tautology, even at the very highest level of leadership and government, is "Our decisions and actions were correct because it was the right thing to do...". Next time you hear this you will recognize it as a tautology, and if you hear it appended with the qualifying "...and God will be my judge...", then be very worried indeed; the speaker is simply saying: "I'm right because I say I am." tautonym - originally this meant, and still mainly refers to, a biological taxonomical name in which the same word is used for the genus and species, for example Vulpes vulpes (the red fox). In language/linguistics a tautonym generally and informally refers to a reduplicative word containing two identical parts, such as bye-bye or bon-bon. taxonomy - a structural organization of classifications, almost always hierarchical, like a family tree, with levels of categories/classes, each comprising sub-sets, in turn comprising sub-sets. The concept of taxonomies primarily developed in biology but now can be found in classifications of virtually anything, for example Bloom's Taxonomy of Learning Domains.
tense - in grammar the term 'tense' refers to the form of a verb which indicates when in time the action happened, or an aspect of the continuity/completion of the act, in relation to the action itself and also the time at which the action/happening is spoken or written about. The three main common tenses are: past tense ('I went'), present tense ('I go') and future tense ('I will go'). Some tenses are extremely complex, for example: 'I was to have been going'. Answers on a postcard please as to what that tense might be. the - the word 'the' is technically/grammatically 'the definite article', for example 'The bird fell out of the sky', or 'The muddy children need bathing'. It's called 'the definite article' because it specifies a definite thing/person that is known or can be identified from the context. This is different from 'the indefinite article' (a or an), which makes a non-specific or general reference to something. -tomy - 'tomy' is a common suffix, occasionally seen in language terminology (e.g., dichotomy), where it alludes to a process or situation requiring resolution, although the 'tomy' suffix is far more often seen in medical procedure terminology (vasectomy, lobotomy, etc.); it's from Greek -tomia, cutting. tone - in language, tone refers generally to the quality of the voice and vocal sounds in terms of pitch, strength, and other qualities of sound and style or mood, for example 'an angry tone of voice' or 'a harsh tone of voice' or 'he spoke in hushed tones'. The tone of language may refer to qualities of sound, feeling, attitude, volume, pace, and virtually any other quality that might be imagined for verbal, or indeed written or printed, communications. Broadly, when referring to communications, tone equates to the nature or type or description of the language and how the meaning is conveyed. trademark - a registered and protected name (or logo) of a product, brand or organization, usually signified by the TM abbreviation. The trademark word/concept is not technically a grammatical or linguistics term, but trademarks are often very significant in language and language development, notably when a trademark becomes 'genericized'. A generic trademark, also known as a genericized trademark or proprietary eponym, is a trademark or brand name that has become the generic name for, or synonymous with, a general class of product or service, against the usual intentions of the trademark's holder. Using a genericized trademark to refer to the general form of what that trademark represents is a form of metonymy. trichotomy - a three-part classification, notably found in the form of rules, laws, models, processes, etc. For example: the Parent/Adult/Child in Transactional Analysis; the Visual/Audio/Kinaesthetic in the VAK learning model; and the traditional concept of communicating Features/Advantages/Benefits in selling and sales training. There are several thousand other trichotomous rules, laws, principles, etc., and they are found in any discipline or subject that you can imagine. triphthong - a monosyllabic vowel sound (not a single vowel) which effectively contains or moves through three different discernible vowel sound qualities. It's from Greek 'triphthongos', meaning 'with three sounds/tones'. See also diphthong, which generally refers to there being two different sounds in one vowel-sound syllable. Monophthong refers to a single pure vowel syllable sound. trisyllable - a word or (technically in poetry) a line of poetry containing three syllables.
trope - a trope is a word or phrase that is substituted metaphorically or symbolically to create an expression of some sort. For example, the expression 'earn a crust' uses the word 'crust' as a trope. The expression 'it's raining cats and dogs' uses the phrase 'cats and dogs' as a trope. To say that someone has a 'razor wit' uses the word 'razor' as a trope. From Greek tropos, meaning turn or way. turn of phrase - an old expression referring to a particular way of using (usually spoken) language which is quirky, coarse, amusing, clever, or otherwise unusual. The term is generally applied to a known/named person, far less commonly to a group. Often the term is used euphemistically and ironically, for instance in referring to a person's use of rude, 'non-pc', or offensive words, for example, "He has an interesting turn of phrase". The term may also be used literally, for example, "She has a sharp/clever/amusing turn of phrase", when referring to someone whose speech/writing includes such a quality. typo - a slang abbreviation derived from the full meaning 'typographical error/mistake', used by writers, publishers and printers, originally referring to a mistake (typically spelling or punctuation) at the typesetting stage of publishing, as distinct from a writer's error of fact/spelling. The slang term is nowadays used more widely in referring to a 'keyboard' mistake by writers of all sorts, and by agencies involved in printing and media, as distinct from an error due to a writer's poor spelling or inaccurate facts. Originally the process of publishing involved clearly separated stages of writing/origination, then typesetting (at which printing plates were made), then printing. Sometimes errors of interpretation or inaccuracy occurred at the typesetting stage, which might or might not be noticed before printing. Such errors were called typos, and the term has survived and thrived into modern times. The technological development of publishing now enables writers and editors to control final output far more reliably and directly, so the 'typo' expression now mostly refers simply to a writer's keyboard error. typographics/typography - the study or art of designing and producing letters and other symbols (glyphs) used in printing and other textual reproduction, excluding handwriting. The word 'type' refers to the traditional lead letter-blocks used in traditional typesetting and printing. The word typographics derives from Greek typos, meaning form or impression, and graphos, writing. typeface - an old traditional word for what is nowadays called a font, or more technically and traditionally a font family. Historically a typeface referred more to a font family, comprising slightly varying styles of lettering and other glyphs all based around a main design. verb - traditionally children are taught that a verb is 'a doing word', which is a good definition. We might extend it to 'a doing or happening word'. More technically a verb is the 'predicate' (this describes what is happening to the subject) in a phrase or sentence. Most statements comprise, as a minimum: a subject (which is doing something, often acting on or affecting or experiencing the effect of an object), an object (something which is being acted upon or affected by, or affecting, a subject), and a verb (which describes the action or effect). For example: The cat (subject) sat (verb) on the mat (object). It is very difficult to compose a meaningful sentence without a verb.
Some of the shortest sentences contain just a subject and a verb, for example: 'He wept.' 'He' is the subject, 'wept' is the verb, and there is no object. The sentence 'It rained' contains the subject 'it' and the verb 'rained' ('it' is a pronoun, and technically a substitute for something implied such as 'the weather' or 'at that time' or 'at that location'). The sentence 'I was happy' contains 'I' (subject), 'was' (verb) and 'happy' (adjective describing the subject). The sentence 'I ran quickly' contains 'I' (subject), 'ran' (verb), and 'quickly' (adverb describing the verb). The word 'verb' is Latin, from 'verbum', meaning 'verb', and originally 'word'. A significant aspect of a verb in use is its 'voice' or diathesis, which refers to whether the verb is acting actively (the subject is doing something to the object) or passively (the object is having something done to it by the subject). verbal - the word verbal mainly means 'consisting of words', but commonly particularly refers to spoken words, such as a 'verbal warning' (as distinct from a written one). Technically, verbal may also refer to something related to a verb, such as a verbal meaning or verbal application of a word which could also be regarded as a noun or other part of speech; for example, the word 'plant' may be used in a verbal sense ('to plant a flower'), as well as in its noun sense. verbatim - an English term from Latin, meaning 'word for word', used when referring to quoting or recounting previous communications of some sort. It's from Latin verbum, meaning word. verb phrase - there are several slightly different complex technical explanations for this, so it's easier to consider the definition as all the parts of a (subject-verb-object) statement without the subject; for example, in the statement 'Peter went to the office', the verb phrase is 'went to the office'. In the statement 'The children played noisily in the garden', the verb phrase is 'played noisily in the garden'. The Oxford English Dictionary defines a verb phrase as: '...a verb with another word or words indicating the verb's tense, mood or person' (tense being past, present, future, etc.; mood relating to modality, being the speaker's/writer's sense of certainty, possibility, necessity, etc.; and person referring to first, second or third, as in I, you, he, etc.). vernacular - the language and/or dialect of the ordinary people of a particular region or area, or the language of a group of people formed around a purpose or discipline or other interest. Vernacular may refer to sounds (accents) and/or to words and/or the construction of language, spoken or written. Vernacular may also refer to one's native or mother tongue. Vernacular is a noun, although it seems like an adjective. The word derives from Latin vernaculus, 'native' or 'domestic', interestingly ultimately from verna, a 'home-born slave'. voice - also called diathesis - in English grammar this refers to whether a verb, including its related construction, is active or passive; for example 'the teacher taught the class' is the active voice/diathesis, whereas 'the class was taught by the teacher' is the passive voice/diathesis. Some other languages offer a 'middle voice' which is neither active nor passive. In communicating sensitively it is often helpful to consider whether active or passive voice is best for the situation, considering also the verb and context.
Commonly, passive verb constructions are less likely to offend or unsettle people, although for certain verbs/situations the opposite may be true. See diathesis and active and passive for more detailed explanation and examples. vowel - a letter or speech sound in language produced by an open vocal tract, involving little or no friction or restriction of the sound through the mouth or airway. Speech basically comprises vowels and consonants, consonants being letters/sounds involving restriction or friction of sound. Vowels generally form the basis or core of a syllable. Vowels in English are commonly regarded as the letters a e i o u, although many more sounds are also vowels, such as those made by the letters ee, oo, oy, and y (as an 'ee' or 'i' sound), etc. The definition of 'vowel' therefore varies. The letters a e i o u are generally considered to be the pure vowels, in terms of differentiating vowels from consonants in the English alphabet, although beyond this narrow context 'y' can certainly be regarded as a vowel sound represented by a single letter. vowel shift - a change in the sound of vowel pronunciation, typically when describing the language of a group and its change over time, for example the 'Great Vowel Shift' which introduced longer vowel sounds to the modern age, shifting the style from the shorter vowel sounds of the middle ages. We might also refer to vowel shift in the context of a change in dialect when someone lives for a while in a different region with different vowel sounds in the local language. vox - Latin for voice, appearing in English notably in the expression 'vox pop'. vox pop/vox populi - 'vox pop' means popular opinion, from the 1500s Latin 'vox populi' (voice of the people), typically gleaned from, and referring specifically to, quick street interviews by radio/TV broadcasters of members of the public, termed in the media a 'man on the street interview', and often pluralized to 'vox pops'. Cynics might reasonably suggest that substantial and increasingly large proportions of 'news' and 'current affairs' broadcasting comprise completely meaningless and thoughtless vox pops, presented as if they were all objective and wise comment on the subject concerned. word - a single unit of speech or writing. Beyond this simple definition, the word 'word' is a fascinating concept to define, and is open to considerable debate. The modern Oxford English Dictionary gives these two basic definitions for the essential grammatical meaning of 'word': "...a single distinct meaningful element of speech or writing, used with others (or sometimes alone) to form a sentence and typically shown with a space on either side when written or printed" [or separately] "...a single distinct conceptual unit of language, comprising inflected and variant forms." There are other official dictionary definitions of the word 'word' when used in different contexts, for example in usage such as: 'word on the street' (in which 'word' refers to gossip and discussion, etc.); 'don't believe a word of it' (in which 'a word' refers to all discussion, including the smallest element such as a single letter or number); 'give me your word' (in which word equates to a promise or agreement); 'just say the word' (in which word means go-ahead or permission or command); and verb forms such as in 'the best way to word a letter' (in which word means write or style).
Traditionally, printed book dictionaries were considered the arbiters of words, so that only 'words' which were listed and defined in printed book dictionaries were 'proper words'. In more enlightened times, however, dictionaries have increasingly become regarded as records and collections of the words which are in popular use in day-to-day conversation and writing, whether or not dictionaries already contain them. This is to say that words change and evolve and appear in actual real language far sooner than they do in dictionaries. Dictionaries of course record and organize words that are in use, but they do not dictate or design new words. Ordinary people do this. zeugma - where a word applies to two different things in the same sentence, typically with confusing, incongruous or amusing effect. Lord Byron is noted for his amusing use of zeugma, for example the wonderful line in his epic poem Don Juan: "Seville is a pleasant city, famous for oranges and women..."
Presentation on theme: "A Mathematical Model of Motion"— Presentation transcript: 1 A Mathematical Model of Motion Chapter 5A Mathematical Model of Motion 2 5.1 Graphing Motion in One Dimension Position-Time graphsThe position x, is plotted against time t on a coordinate system.How long is an object at a given location?If the time was any finite amount, the object would be at the same position during that time and would therefore have no movement!An instant of time must therefore be zero. 3 Using a Graph to Find Out Where and When At what time was the object at ……….What was the position of the object at ……..Graphing two or more objectsUse pictorial and graphical representations to determine when the two objects are at the same position.From Graphs to Words and Back AgainUse your knowledge of motion graphs to determine what the object is doing. 4 Using an equation to find out where and when Uniform motionEqual displacements occur during successive time intervals.Recall slope = rise/runUsing an equation to find out where and whenv = ∆d/ ∆t = df-di / tf-tid = do + vt 6 5.2 Graphing Velocity in One Dimension Determining Instantaneous VelocityRecall v = ∆d / ∆tThe slope of the curve at a given time will determine the velocity at that point.Use the tangent to the curve for changing velocities. (DEMO) 7 Displacement From a Velocity-Time Graph Velocity-Time graphsHaving the velocity at any given time allows us to graph the velocity of any object versus time.Displacement From a Velocity-Time GraphDisplacement is found from a velocity time graph by taking the area under the curve.Vt 9 5.3 Acceleration Determining Average Acceleration a = ∆v / ∆t Recall acceleration is a change in velocity 10 Constant and Instantaneous Acceleration Determined by taking the slope of a velocity – time graph.If the graph is not linear, you can take the slope of the tangent line to determine the instantaneous acceleration. 11 Positive and Negative Acceleration The sign of the acceleration is determined by taking the sign of the slope of the velocity – time graph. (vf – vi)Acceleration when instantaneous velocity is zero? 12 Calculating Velocity from Acceleration a = ∆v / ∆tv = vo + at 13 Displacement Under Constant Acceleration Use the area under the curve of the velocity – time graph to find the displacement when the velocity is constant.vt 14 The total displacement is just the sum of the rectangle and the triangle. d = vot + ½ (v-vo)tif the initial position is not zerod = do + ½ (v+vo)tThis equation gives us the final position with a constant acceleration. 15 If the velocity and time are known the equation can be rearranged as d = do + vot +1/2at2If the time is unknown but the velocity, distance and acceleration need to be related, use this equation:v2 = vo2 + 2a(d-do) 17 5.4 Free Fall Acceleration Due to Gravity No matter what an object is made of or its mass, all objects fall at the same rate when friction is ignored. This is called the acceleration due to gravity (g) and its value is m/s2.This acceleration is always downward and may be negative depending on the coordinate system.In formal labs use 9.78 m/s2. 18 PSS Sketch the problem. Draw a vector diagram showing the motions. List all the variables and known and unknown values.Use the chart to select an equation to relate the variables.Solve the problem.Check your answer and units.
Isotopes of uranium

Uranium (92U) is a naturally occurring radioactive element that has no stable isotope. It has two primordial isotopes, uranium-238 and uranium-235, which have long half-lives and are found in appreciable quantity in the Earth's crust. The decay product uranium-234 is also found. Other isotopes, such as uranium-232, have been produced in breeder reactors. In addition to the isotopes found in nature or in nuclear reactors, many isotopes with far shorter half-lives have been produced, ranging from 215U to 242U (with the exception of 220U and 241U). The standard atomic weight of natural uranium is 238.02891(3).

Naturally occurring uranium is composed of three major isotopes: uranium-238 (99.2739-99.2752% natural abundance), uranium-235 (0.7198-0.7202%), and uranium-234 (0.0050-0.0059%). All three isotopes are radioactive, creating radioisotopes, with the most abundant and longest-lived being uranium-238, with a half-life of 4.4683×10^9 years (close to the age of the Earth). Uranium-238 is an α emitter, decaying through the 18-member uranium series into lead-206. The decay series of uranium-235 (historically called actino-uranium) has 15 members and ends in lead-207. The constant rates of decay in these series make comparison of the ratios of parent-to-daughter elements useful in radiometric dating. Uranium-233 is made from thorium-232 by neutron bombardment.

The isotope uranium-235 is important for both nuclear reactors and nuclear weapons because it is the only isotope existing in nature to any appreciable extent that is fissile in response to thermal neutrons. The isotope uranium-238 is also important because it absorbs neutrons to produce a radioactive isotope that subsequently decays to the isotope plutonium-239, which also is fissile.

List of isotopes

The source table is reproduced below in simplified form; the 235U and 239U rows, damaged in the source, are completed from values quoted later in this article and from standard references.

| Nuclide | Z | N | Isotopic mass (u) | Half-life | Decay mode | Daughter | Spin | Natural abundance (mole fraction) |
|---|---|---|---|---|---|---|---|---|
| 228U | 92 | 136 | 228.031374(16) | 9.1(2) min | α (95%) | 224Th | 0+ | - |
| 229U | 92 | 137 | 229.033506(6) | 58(3) min | β+ (80%) | 229Pa | (3/2+) | - |
| 233U | 92 | 141 | 233.0396352(29) | 1.592(2)×10^5 y | α | 229Th | 5/2+ | trace |
| 234U (Uranium II) | 92 | 142 | 234.0409521(20) | 2.455(6)×10^5 y | α | 230Th | 0+ | 0.000054(5); range 0.000050-0.000059 |
| 235U (Actinouranium) | 92 | 143 | 235.0439299(20) | 7.04×10^8 y | α | 231Th | 7/2− | range 0.007198-0.007202 |
| 236U (Thoruranium) | 92 | 144 | 236.045568(2) | 2.342(3)×10^7 y | α | 232Th | 0+ | trace |
| 237U | 92 | 145 | 237.0487302(20) | 6.75(1) d | β− | 237Np | 1/2+ | trace |
| 238U (Uranium I) | 92 | 146 | 238.0507882(20) | 4.468(3)×10^9 y | α | 234Th | 0+ | 0.992742(10); range 0.992739-0.992752 |
| 239U | 92 | 147 | - | 23.45 min | β− | 239Np | 5/2+ | - |
| 240U | 92 | 148 | 240.056592(6) | 14.1(1) h | β− | 240Np | 0+ | trace |

Known nuclear isomers:

| Isomer | Excitation energy | Half-life | Decay | Spin |
|---|---|---|---|---|
| 234mU | 1421.32(10) keV | 33.5(20) ms | - | 6− |
| 235mU | 0.0765(4) keV | ~26 min | IT to 235U | 1/2+ |
| 236m1U | 1052.89(19) keV | 100(4) ns | - | (4)− |
| 236m2U | 2750(10) keV | 120(2) ns | - | (0+) |
| 238mU | 2557.9(5) keV | 280(6) ns | - | 0+ |
| 239m1U | 20(20)# keV | >250 ns | - | (5/2+) |
| 239m2U | 133.7990(10) keV | 780(40) ns | - | 1/2+ |

Notes:
- Uncertainties (1σ) are given in concise form in parentheses after the corresponding last digits.
- Values marked # are not purely derived from experimental data, but at least partly from trends of the mass surface or of neighboring nuclides.
- Spin values in parentheses indicate weak assignment arguments.
- mU denotes an excited nuclear isomer; IT denotes isomeric transition.
- Trace occurrences: 233U as an intermediate decay product of 237Np; 236U as an intermediate decay product of 244Pu and as a neutron-capture product of 235U; 237U as a neutron-capture product and parent of trace quantities of 237Np; 240U as an intermediate decay product of 244Pu.
- 234U is an intermediate decay product of 238U and is used in uranium-thorium and uranium-uranium dating; 235U and 238U are primordial radionuclides, used in uranium-lead dating and important in nuclear reactors.

Actinides vs fission products

[Table: "Actinides and fission products by half-life" - actinides grouped by decay chain set against fission products of 235U by yield.]

Uranium-232 has a half-life of 68.9 years and is a side product in the thorium cycle. It has been cited as an obstacle to nuclear proliferation using 233U as the fissile material, because the intense gamma radiation emitted by 208Tl (a daughter of 232U, produced relatively quickly) makes the 233U contaminated with it more difficult to handle. Uranium-232 is a rare example of an even-even isotope that is fissile with both thermal and fast neutrons.

Uranium-233 is a fissile isotope of uranium that is bred from thorium-232 as part of the thorium fuel cycle. Uranium-233 was investigated for use in nuclear weapons and as a reactor fuel; however, it was never deployed in nuclear weapons or used commercially as a nuclear fuel. It has been used successfully in experimental nuclear reactors and has been proposed for much wider use as a nuclear fuel. It has a half-life of 159,200 years. Uranium-233 is produced by the neutron irradiation of thorium-232. When thorium-232 absorbs a neutron, it becomes thorium-233, which has a half-life of only 22 minutes. Thorium-233 decays into protactinium-233 through beta decay. Protactinium-233 has a half-life of 27 days and beta decays into uranium-233; some proposed molten salt reactor designs attempt to physically isolate the protactinium from further neutron capture before beta decay can occur. Uranium-233 usually fissions on neutron absorption but sometimes retains the neutron, becoming uranium-234. The capture-to-fission ratio is smaller than those of the other two major fissile fuels, uranium-235 and plutonium-239; it is also lower than that of short-lived plutonium-241, but bested by the very difficult-to-produce neptunium-236.

Uranium-234 is an isotope of uranium. In natural uranium and in uranium ore, U-234 occurs as an indirect decay product of uranium-238, but it makes up only 0.0055% (55 parts per million) of the raw uranium because its half-life of just 245,500 years is only about 1/18,000 as long as that of U-238. The path of production of U-234 via nuclear decay is as follows: U-238 nuclei emit an alpha particle to become thorium-234 (Th-234). Next, with a short half-life, a Th-234 nucleus emits a beta particle to become protactinium-234 (Pa-234). Finally, Pa-234 nuclei each emit another beta particle to become U-234 nuclei.
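The 55-parts-per-million figure above follows directly from secular equilibrium: in an undisturbed decay chain, the atom ratio of a short-lived member to its long-lived parent equals the ratio of their half-lives. A minimal sketch (Python, using only the half-lives quoted above):

```python
# Secular equilibrium: N_234 / N_238 = T_half(U-234) / T_half(U-238).
t_half_u238 = 4.468e9   # years (from the text)
t_half_u234 = 245_500   # years (from the text)

ratio = t_half_u234 / t_half_u238
print(f"U-234 / U-238 atom ratio: {ratio:.2e}")   # ~5.5e-05, i.e. ~55 ppm
print(f"as a percentage: {ratio * 100:.4f}%")     # ~0.0055%
```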
Extraction of rather small amounts of U-234 from natural uranium would be feasible using isotope separation, similar to that used for regular uranium enrichment. However, there is no real demand in chemistry, physics, or engineering for isolating U-234. Very small pure samples of U-234 can be extracted via the chemical ion-exchange process from samples of plutonium-238 that have been aged somewhat to allow some decay to U-234 via alpha emission.

Enriched uranium contains more U-234 than natural uranium as a byproduct of the uranium enrichment process, which, in aiming to obtain U-235, concentrates the lighter U-234 even more strongly. The increased percentage of U-234 in enriched natural uranium is acceptable in current nuclear reactors, but (re-enriched) reprocessed uranium might contain even higher fractions of U-234, which is undesirable. This is because U-234 is not fissile, and tends to absorb slow neutrons in a nuclear reactor—becoming U-235.

U-234 has a neutron capture cross-section of about 100 barns for thermal neutrons, and about 700 barns for its resonance integral—the average over neutrons having various intermediate energies. In a nuclear reactor, non-fissile isotopes that capture a neutron breed fissile isotopes. U-234 is converted to U-235 more easily, and therefore at a greater rate, than U-238 is to Pu-239 (via neptunium-239), because U-238 has a much smaller neutron-capture cross-section of just 2.7 barns.

Uranium-235 is an isotope of uranium making up about 0.72% of natural uranium. Unlike the predominant isotope uranium-238, it is fissile, i.e., it can sustain a fission chain reaction. It is the only fissile isotope that is a primordial nuclide or found in significant quantity in nature. Uranium-235 has a half-life of 703.8 million years. It was discovered in 1935 by Arthur Jeffrey Dempster. Its fission cross section for slow thermal neutrons is about 504.81 barns. For fast neutrons it is on the order of 1 barn. At thermal energy levels, about 5 of 6 neutron absorptions result in fission and 1 of 6 results in neutron capture, forming uranium-236. The fission-to-capture ratio improves for faster neutrons.

Uranium-236 is an isotope of uranium that is neither fissile with thermal neutrons nor very good fertile material; it is generally considered a nuisance and long-lived radioactive waste. It is found in spent nuclear fuel and in the reprocessed uranium made from spent nuclear fuel.

Uranium-238 (238U or U-238) is the most common isotope of uranium found in nature. It is not fissile, but is a fertile material: it can capture a slow neutron and, after two beta decays, become fissile plutonium-239. Uranium-238 is fissionable by fast neutrons, but cannot support a chain reaction because inelastic scattering reduces neutron energy below the range where fast fission of one or more next-generation nuclei is probable. Doppler broadening of U-238's neutron absorption resonances, increasing absorption as fuel temperature increases, is also an essential negative feedback mechanism for reactor control.

Around 99.284% of natural uranium is uranium-238, which has a half-life of 1.41×10¹⁷ seconds (4.468×10⁹ years, or 4.468 billion years). Depleted uranium has an even higher concentration of the U-238 isotope, and even low-enriched uranium (LEU), while having a higher proportion of the uranium-235 isotope (in comparison to depleted uranium), is still mostly U-238.
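A half-life this long translates into a very low but easily computed activity. A minimal sketch (Python; the half-life is taken from the text above, while Avogadro's number and the molar mass of U-238 are standard values, not quoted in it):

import math

# Decay constant and specific activity of U-238 from its half-life.
T_HALF_S = 1.41e17            # U-238 half-life in seconds (from the text)
AVOGADRO = 6.022e23           # atoms per mole (standard value)
MOLAR_MASS = 238.05           # grams per mole of U-238 (standard value)

decay_constant = math.log(2) / T_HALF_S        # probability of decay per second
atoms_per_gram = AVOGADRO / MOLAR_MASS
activity = decay_constant * atoms_per_gram     # decays per second per gram (Bq/g)

print(f"decay constant: {decay_constant:.2e} per second")  # ~4.9e-18 /s
print(f"specific activity: {activity:,.0f} Bq/g")          # ~12,400 Bq/g

Roughly 12,000 decays per second per gram sounds like a lot, but it is tiny on a per-atom basis: an individual U-238 nucleus has only about a 50% chance of decaying over the entire age of the Earth.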
Reprocessed uranium is also mainly U-238, with about as much uranium-235 as natural uranium, a comparable proportion of uranium-236, and much smaller amounts of other isotopes of uranium such as uranium-234, uranium-233, and uranium-232.

Uranium-239 is an isotope of uranium. It is usually produced by exposing 238U to neutron radiation in a nuclear reactor. 239U has a half-life of about 23.45 minutes and decays into neptunium-239 through beta decay, with a total decay energy of about 1.29 MeV.

| Decay mode | Decay energy (MeV) |
|---|---|
| Beta decay (20%) | 1.28 |
| Beta decay (80%) | 1.21 |

The most common gamma decay, at 74.660 keV, accounts for the difference between the two major channels of beta emission energy, at 1.28 and 1.21 MeV. 239Np further decays to plutonium-239, also through beta decay (239Np has a half-life of about 2.356 days), in a second important step that ultimately produces fissile 239Pu from 238U in reactors.

References

- Meija, Juris; et al. (2016). "Atomic weights of the elements 2013 (IUPAC Technical Report)". Pure and Applied Chemistry. 88 (3): 265–91. doi:10.1515/pac-2015-0305.
- "Uranium Isotopes". GlobalSecurity.org. Retrieved 14 March 2012.
- Half-life, decay mode, nuclear spin, and isotopic composition are sourced in: Audi, Georges; Kondev, Filip G.; Wang, Meng; Huang, Wen Jia; Naimi, Sarah (2017). "The NUBASE2016 evaluation of nuclear properties" (PDF). Chinese Physics C. 41 (3): 030001. Bibcode:2017ChPhC..41c0001A. doi:10.1088/1674-1137/41/3/030001.
- Wang, Meng; Audi, Georges; Kondev, Filip G.; Huang, Wen Jian; Naimi, Sarah; Xu, Xing (2017). "The AME2016 atomic mass evaluation (II). Tables, graphs, and references" (PDF). Chinese Physics C. 41 (3): 030003. doi:10.1088/1674-1137/41/3/030003.
- Wakabayashi, Y.; Morimoto, K.; Kaji, D.; Haba, H.; Takeyama, M.; Yamaki, S.; Tanaka, K.; Nishio, K.; Asai, M.; Huang, M.; Kanaya, J.; Murakami, M.; Yoneda, A.; Fujita, K.; Narikiyo, Y.; Tanaka, T.; Yamamoto, S.; Morita, K. (2014). "New Isotope Candidates, 215U and 216U" (PDF). RIKEN Accel. Prog. Rep. 47: xxii.
- Devaraja, H. M.; Heinz, S.; Beliuskina, O.; Comas, V.; Hofmann, S.; Hornung, C.; Münzenberg, G.; Nishio, K.; Ackermann, D.; Gambhir, Y. K.; Gupta, M.; Henderson, R. A.; Heßberger, F. P.; Khuyagbaatar, J.; Kindler, B.; Lommel, B.; Moody, K. J.; Maurer, J.; Mann, R.; Popeko, A. G.; Shaughnessy, D. A.; Stoyer, M. A.; Yeremin, A. V. (2015). "Observation of new neutron-deficient isotopes with Z ≥ 92 in multinucleon transfer reactions" (PDF). Physics Letters B. 748: 199–203. Bibcode:2015PhLB..748..199D. doi:10.1016/j.physletb.2015.07.006.
- Khuyagbaatar, J.; et al. (11 December 2015). "New Short-Lived Isotope 221U and the Mass Surface Near N = 126". Physical Review Letters. 115 (24): 242502. doi:10.1103/PhysRevLett.115.242502.
- Trenn, Thaddeus J. (1978). "Thoruranium (U-236) as the extinct natural parent of thorium: The premature falsification of an essentially correct theory". Annals of Science. 35 (6): 581–97. doi:10.1080/00033797800200441.
- Plus radium (element 88). While actually a sub-actinide, it immediately precedes actinium (89) and follows a three-element gap of instability after polonium (84) where no nuclides have half-lives of at least four years (the longest-lived nuclide in the gap is radon-222, with a half-life of less than four days). Radium's longest-lived isotope, at 1,600 years, thus merits the element's inclusion here.
- Specifically from thermal neutron fission of U-235, e.g. in a typical nuclear reactor.
- Milsted, J.; Friedman, A. M.; Stevens, C. M. (1965). "The alpha half-life of berkelium-247; a new long-lived isomer of berkelium-248". Nuclear Physics. 71 (2): 299. Bibcode:1965NucPh..71..299M. doi:10.1016/0029-5582(65)90719-4. "The isotopic analyses disclosed a species of mass 248 in constant abundance in three samples analysed over a period of about 10 months. This was ascribed to an isomer of Bk248 with a half-life greater than 9 y. No growth of Cf248 was detected, and a lower limit for the β− half-life can be set at about 10⁴ y. No alpha activity attributable to the new isomer has been detected; the alpha half-life is probably greater than 300 y."
- This is the heaviest nuclide with a half-life of at least four years before the "Sea of Instability".
- Excluding those "classically stable" nuclides with half-lives significantly in excess of 232Th; e.g., while 113mCd has a half-life of only fourteen years, that of 113Cd is nearly eight quadrillion years.
- "Uranium 232". Nuclear Power. Archived from the original on 26 February 2019. Retrieved 3 June 2019.
- Forsberg, C. W.; Lewis, L. C. (1999-09-24). "Uses For Uranium-233: What Should Be Kept for Future Needs?" (PDF). ORNL-6952. Oak Ridge National Laboratory.
- Diven, B. C.; Terrell, J.; Hemmendinger, A. (1958). "Capture-to-Fission Ratios for Fast Neutrons in U235". Physical Review. 109: 144–150. Bibcode:1958PhRv..109..144D. doi:10.1103/PhysRev.109.144.
- CRC Handbook of Chemistry and Physics, 57th Ed., p. B-345.
- CRC Handbook of Chemistry and Physics, 57th Ed., p. B-423.
- Isotope masses from:
- Isotopic compositions and standard atomic masses from:
- de Laeter, John Robert; Böhlke, John Karl; De Bièvre, Paul; Hidaka, Hiroshi; Peiser, H. Steffen; Rosman, Kevin J. R.; Taylor, Philip D. P. (2003). "Atomic weights of the elements. Review 2000 (IUPAC Technical Report)". Pure and Applied Chemistry. 75 (6): 683–800. doi:10.1351/pac200375060683.
- Wieser, Michael E. (2006). "Atomic weights of the elements 2005 (IUPAC Technical Report)". Pure and Applied Chemistry. 78 (11): 2051–2066. doi:10.1351/pac200678112051.
- Half-life, spin, and isomer data selected from the following sources:
- Audi, Georges; Bersillon, Olivier; Blachot, Jean; Wapstra, Aaldert Hendrik (2003). "The NUBASE evaluation of nuclear and decay properties". Nuclear Physics A. 729: 3–128. Bibcode:2003NuPhA.729....3A. doi:10.1016/j.nuclphysa.2003.11.001.
- National Nuclear Data Center. "NuDat 2.x database". Brookhaven National Laboratory.
- Holden, Norman E. (2004). "11. Table of the Isotopes". In Lide, David R. (ed.). CRC Handbook of Chemistry and Physics (85th ed.). Boca Raton, Florida: CRC Press. ISBN 978-0-8493-0485-9.
Q7: How does the law of sines work? If I'm given two sides and two angles of a triangle, how can I find the remaining side if the Pythagorean Theorem doesn't apply?

This is a perfect case for using the Law of Sines. The Pythagorean Theorem won't work because it's not a right triangle. The Law of Sines relates all of the side lengths to the angle measurements of any triangle:

a / sin(A) = b / sin(B) = c / sin(C)

where a, b, and c are the side lengths and A, B, and C are the angles opposite those sides. We can very easily find the missing side by plugging in what we do know. In fact, we can find the other angle as well if we just use the formula again.

In general, we only need three pieces of information to find the rest: either two angles and a side, or two sides and an angle. You might have to use the Law of Sines several times, but keep solving for missing variables until you can solve for the one you need.

It should be noted that the Law of Sines can sometimes give misleading results. In the ambiguous (side-side-angle) case, two different triangles can fit the given measurements, and the inverse sine only hands you one candidate angle. A wrong choice is usually obvious, because the angles of the triangle will not sum to 180 degrees. For more information on this subject, look at the Wikipedia article on the Law of Sines; they have a nice graphic that shows how the ambiguity can arise. Rest assured that if your angles sum to 180 degrees and the sides look right, your solution is consistent.
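As a worked example (with numbers invented purely for illustration): suppose angle A = 30°, angle B = 45°, and the side opposite A is a = 10. Then the side opposite B is

$$ b = \frac{a \sin B}{\sin A} = \frac{10 \sin 45^\circ}{\sin 30^\circ} = \frac{10 \cdot \tfrac{\sqrt{2}}{2}}{\tfrac{1}{2}} = 10\sqrt{2} \approx 14.14. $$

The third angle is C = 180° − 30° − 45° = 105°, and the remaining side c follows from the same ratio.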
An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.

Though al-Khwārizmī's algorism referred to the rules of performing arithmetic using Hindu–Arabic numerals and the systematic solution of linear and quadratic equations, a partial formalization of what would become the modern algorithm began with attempts to solve the Entscheidungsproblem (the "decision problem") posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's "Formulation 1" of 1936, and Alan Turing's Turing machines of 1936–7 and 1939. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem.

- 1 Word origin
- 2 Informal definition
- 3 Formalization
- 4 Implementation
- 5 Computer algorithms
- 6 Examples
- 7 Algorithmic analysis
- 8 Classification
- 9 Continuous algorithms
- 10 Legal issues
- 11 Etymology
- 12 History: Development of the notion of "algorithm"
- 12.1 Origin
- 12.2 Discrete and distinguishable symbols
- 12.3 Manipulation of symbols as "place holders" for numbers: algebra
- 12.4 Mechanical contrivances with discrete states
- 12.5 Mathematics during the 19th century up to the mid-20th century
- 12.6 Emil Post (1936) and Alan Turing (1936–37, 1939)
- 12.7 J. B. Rosser (1939) and S. C. Kleene (1943)
- 12.8 History after 1950
- 13 See also
- 14 Notes
- 15 References
- 16 Further reading
- 17 External links

Informal definition

An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. A prototypical example of an algorithm is Euclid's algorithm to determine the greatest common divisor of two integers; an example (there are others) is described by the flow chart above and as an example in a later section.

Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation:

No human being can write fast enough, or long enough, or small enough† (†"smaller and smaller without limit ... you'd be trying to write on molecules, on atoms, on electrons") to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary operations on symbols.

The term "enumerably infinite" means "countable using integers perhaps extending to infinity."
Thus, Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be chosen from 0 to infinity. Thus an algorithm can be an algebraic equation such as y = m + n—two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of (for the addition example):

- Precise instructions (in language understood by "the computer") for a fast, efficient, "good" process that specifies the "moves" of "the computer" (machine or human, equipped with the necessary internally contained information and capabilities) to find, decode, and then process arbitrary input integers/symbols m and n, symbols + and = ... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format.

The concept of algorithm is also used to define the notion of decidability. That notion is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimensions. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term.

Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail the specific instructions a computer should perform (in a specific order) to carry out a specified task, such as calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include Minsky (1967), Savage (1987) and Gurevich (2000):

Minsky: "But we will also maintain, with Turing ... that any procedure which could 'naturally' be called effective, can in fact be realized by a (simple) machine. Although this may seem extreme, the arguments ... in its favor are hard to refute".

Gurevich: "...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine ... according to Savage, an algorithm is a computational process defined by a Turing machine".

Typically, when an algorithm is associated with processing information, data is read from an input source, written to an output device, and/or stored for further processing. Stored data is regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures.

For such a computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case by case; the criteria for each case must be clear (and computable).

Because an algorithm is a precise list of precise steps, the order of computation is always critical to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting "from the top" and going "down to the bottom", an idea that is described more formally by flow of control.
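To make the addition example concrete, here it is written out as an explicit, ordered procedure (a minimal sketch in Python; the function name and step comments are ours, invented for illustration):

def add(m, n):
    # Step 1: receive the two arbitrary input integers m and n.
    # Step 2: compute the sum and assign it to y (a single "move").
    y = m + n
    # Step 3: produce the output integer y at a specified place.
    return y

print(add(3, 4))  # -> 7

Each step is unambiguous and executed strictly in order, which is the flow of control just described.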
So far, this discussion of the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception, and it attempts to describe a task in discrete, "mechanical" means. Unique to this conception of formalized algorithms is the assignment operation, setting the value of a variable. It derives from the intuition of "memory" as a scratchpad. There is an example below of such an assignment.

Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts and control tables are structured ways to express algorithms that avoid many of the ambiguities common in natural language statements. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are often used as a way to define or document algorithms.

There is a wide variety of representations possible and one can express a given Turing machine program as a sequence of machine tables (see more at finite state machine, state transition table and control table), as flowcharts and drakon-charts (see more at state diagram), or as a form of rudimentary machine code or assembly code called "sets of quadruples" (see more at Turing machine).

Representations of algorithms can be classed into three accepted levels of Turing machine description:

- 1 High-level description: "...prose to describe an algorithm, ignoring the implementation details. At this level we do not need to mention how the machine manages its tape or head."
- 2 Implementation description: "...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level we do not give details of states or transition function."
- 3 Formal description: Most detailed, "lowest level", gives the Turing machine's "state table".

For an example of the simple algorithm "Add m+n" described in all three levels, see Algorithm examples.

Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.

In computer systems, an algorithm is basically an instance of logic written in software by software developers, intended to be effective for the target computer(s) so that those machines will produce output from given input (perhaps null).

"Elegant" (compact) programs, "good" (fast) programs: The notion of "simplicity and elegance" appears informally in Knuth and precisely in Chaitin:

- Knuth: "...we want good algorithms in some loosely defined aesthetic sense. One criterion ... is the length of time taken to perform the algorithm .... Other criteria are adaptability of the algorithm to computers, its simplicity and elegance, etc."
- Chaitin: "... a program is 'elegant,' by which I mean that it's the smallest possible program for producing the output that it does"

Chaitin prefaces his definition with: "I'll show you can't prove that a program is 'elegant'"—such a proof would solve the Halting problem (ibid).

Algorithm versus function computable by an algorithm: For a given function multiple algorithms may exist.
This is true even without expanding the instruction set available to the programmer. Rogers observes that "It is ... important to distinguish between the notion of algorithm, i.e. procedure and the notion of function computable by algorithm, i.e. mapping yielded by procedure. The same function may have several different algorithms".

Unfortunately there may be a tradeoff between goodness (speed) and elegance (compactness)—an elegant program may take more steps to complete a computation than one less elegant. An example that uses Euclid's algorithm appears below.

Computers (and computors), models of computation: A computer (or human "computor") is a restricted type of machine, a "discrete deterministic mechanical device" that blindly follows its instructions. Melzak's and Lambek's primitive models reduced this notion to four elements: (i) discrete, distinguishable locations, (ii) discrete, indistinguishable counters, (iii) an agent, and (iv) a list of instructions that are effective relative to the capability of the agent.

Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for Computability". Minsky's machine proceeds sequentially through its five (or six, depending on how one counts) instructions unless either a conditional IF–THEN GOTO or an unconditional GOTO changes program flow out of sequence. Besides HALT, Minsky's machine includes three assignment (replacement, substitution) operations: ZERO (e.g. the contents of a location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L + 1), and DECREMENT (e.g. L ← L − 1). Rarely must a programmer write "code" with such a limited instruction set. But Minsky shows (as do Melzak and Lambek) that his machine is Turing complete with only four general types of instructions: conditional GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT.

Simulation of an algorithm: computer (computor) language: Knuth advises the reader that "the best way to learn an algorithm is to try it ... immediately take pen and paper and work through an example". But what about a simulation or execution of the real thing? The programmer must translate the algorithm into a language that the simulator/computer/computor can effectively execute. Stone gives an example of this: when computing the roots of a quadratic equation the computor must know how to take a square root. If they don't, then for the algorithm to be effective it must provide a set of rules for extracting a square root. This means that the programmer must know a "language" that is effective relative to the target computing agent (computer/computor).

But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, arbitrariness of the choice of a model remains. It is at this point that the notion of simulation enters". When speed is being measured, the instruction set matters. For example, the subprogram in Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus" (division) instruction available rather than just subtraction (or worse: just Minsky's "decrement").

Structured programming, canonical structures: Per the Church–Turing thesis any algorithm can be computed by a model known to be Turing complete, and per Minsky's demonstrations Turing completeness requires only four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT.
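As an illustration of how little is needed, the sketch below (Python; the instruction encoding and register names are ours, invented for illustration) simulates a two-register machine that computes A + B using only those four instruction types, with a program counter standing in for GOTO:

# A two-register counter machine computing A + B with only four
# instruction types: conditional GOTO, unconditional GOTO,
# assignment (here INC/DEC), and HALT.
def run(a, b):
    reg = {"A": a, "B": b}
    program = [
        ("JZ", "B", 4),   # 0: IF B = 0 THEN GOTO 4   (conditional GOTO)
        ("DEC", "B"),     # 1: B <- B - 1             (assignment)
        ("INC", "A"),     # 2: A <- A + 1             (assignment)
        ("GOTO", 0),      # 3: GOTO 0                 (unconditional GOTO)
        ("HALT",),        # 4: stop; A now holds the sum
    ]
    pc = 0  # the program counter realizes "program flow"
    while True:
        op = program[pc]
        if op[0] == "JZ":
            pc = op[2] if reg[op[1]] == 0 else pc + 1
        elif op[0] == "DEC":
            reg[op[1]] -= 1
            pc += 1
        elif op[0] == "INC":
            reg[op[1]] += 1
            pc += 1
        elif op[0] == "GOTO":
            pc = op[1]
        else:  # HALT
            return reg["A"]

print(run(3, 4))  # -> 7

Everything else a richer language offers is convenience; it adds no computational power.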
Kemeny and Kurtz observe that while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.

Canonical flowchart symbols: The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program of one). Like the program flow of a Minsky machine, a flowchart always starts at the top of a page and proceeds down. Its primary symbols are only four: the directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm-Jacopini canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. The symbols and their use to build the canonical structures are shown in the diagram.

One of the simplest algorithms is to find the largest number in an (unsorted) list of numbers. The solution necessarily requires looking at every number in the list, but only once at each. From this follows a simple algorithm, which can be stated in a high-level description in English prose, as:

- If there are no numbers in the set then there is no highest number.
- Assume the first number in the set is the largest number in the set.
- For each remaining number in the set: if this number is larger than the current largest number, consider this number to be the largest number in the set.
- When there are no numbers left in the set to iterate over, consider the current largest number to be the largest number of the set.

(A direct translation of these steps into code appears at the end of this section.)

Euclid's algorithm appears as Proposition II in Book VII ("Elementary Number Theory") of his Elements. Euclid poses the problem: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including 0. And to "measure" is to place a shorter measuring length s successively (q times) along longer length l until the remaining portion r is less than the shorter length s. In modern words, remainder r = l − q×s, q being the quotient; or remainder r is the "modulus", the integer-fractional part left over after the division.

For Euclid's method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be 0, AND (ii) the subtraction must be "proper", i.e. a test must guarantee that the smaller of the two numbers is subtracted from the larger (alternately, the two can be equal so their subtraction yields 0).

Euclid's original proof adds a third requirement: the two lengths must not be prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest. While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another it yields the number "1" for their common measure. So, to be precise, the following is really Nicomachus' algorithm.
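Before turning to that program, here is the promised translation of the largest-number steps into code (a minimal sketch in Python; the function name is ours):

def largest_number(numbers):
    # Step 1: if there are no numbers, there is no highest number.
    if not numbers:
        return None
    # Step 2: assume the first number is the largest.
    largest = numbers[0]
    # Step 3: compare each remaining number against the current largest.
    for candidate in numbers[1:]:
        if candidate > largest:
            largest = candidate
    # Step 4: nothing left to examine; the current largest wins.
    return largest

print(largest_number([3, 9, 4, 7]))  # -> 9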
Computer language for Euclid's algorithm

Only a few instruction types are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction.

- A location is symbolized by upper case letter(s), e.g. S, A, etc.
- The varying quantity (number) in a location is written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009.

An inelegant program for Euclid's algorithm

The following algorithm is framed as Knuth's 4-step version of Euclid's and Nicomachus', but rather than using division to find the remainder it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in boldface, is adapted from Knuth 1973:2–4:

1 [Into two locations L and S put the numbers l and s that represent the two lengths]: INPUT L, S
2 [Initialize R: make the remaining length r equal to the starting/initial/input length l]: R ← L

E0: [Ensure r ≥ s.]
3 [Ensure the smaller of the two numbers is in S and the larger in R]: IF R > S THEN the contents of L is the larger number, so skip over the exchange-steps 4, 5 and 6: GOTO step 7 ELSE swap the contents of R and S.
4 L ← R (this first step is redundant, but is useful for later discussion).
5 R ← S
6 S ← L

E1: [Find remainder]: Until the remaining length r in R is less than the shorter length s in S, repeatedly subtract the measuring number s in S from the remaining length r in R.
7 IF S > R THEN done measuring, so GOTO 10 ELSE measure again,
8 R ← R − S
9 [Remainder-loop]: GOTO 7.

E2: [Is the remainder 0?]: EITHER (i) the last measure was exact, the remainder in R is 0, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than the measuring number in S.
10 IF R = 0 THEN done, so GOTO step 15 ELSE CONTINUE TO step 11.

E3: [Interchange s and r]: The nut of Euclid's algorithm. Use remainder r to measure what was previously the smaller number s; L serves as a temporary location.
11 L ← R
12 R ← S
13 S ← L
14 [Repeat the measuring process]: GOTO 7
15 [Done. S contains the greatest common divisor]: PRINT S
16 HALT, END, STOP.

An elegant program for Euclid's algorithm

The following version of Euclid's algorithm requires only 6 core instructions to do what 13 are required to do by "Inelegant"; worse, "Inelegant" requires more types of instructions. The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) Basic language, the steps are numbered, and the instruction LET [variable] = [expression] is the assignment instruction symbolized by ←.

5 REM Euclid's algorithm for greatest common divisor
6 PRINT "Type two integers greater than 0"
10 INPUT A,B
20 IF B=0 THEN GOTO 80
30 IF A > B THEN GOTO 60
40 LET B=B-A
50 GOTO 20
60 LET A=A-B
70 GOTO 20
80 PRINT A
90 END

How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A − B, and a B ≤ A loop that computes B ← B − A. This works because, when at last the minuend M is less than or equal to the subtrahend S (Difference = Minuend − Subtrahend), the minuend can become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other words the "sense" of the subtraction reverses.

Testing the Euclid algorithms

Does an algorithm do what its author wants it to do? A few test cases usually suffice to confirm core functionality. One source uses 3009 and 884. Knuth suggested 40902, 24140.
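A direct transcription of "Elegant" into a modern language makes such testing convenient. The sketch below (Python; the function name and assertions are ours) preserves the two subtraction co-loops and checks the test values just mentioned, whose greatest common divisors work out to 17 and 34 respectively:

def gcd(a, b):
    # Guard the inputs: like "Elegant", the co-loops below never
    # terminate if A is 0 (see the discussion of zero inputs below).
    assert a > 0 and b > 0, "inputs must be positive integers"
    while b != 0:        # line 20: IF B=0 THEN done
        if a > b:        # line 30: IF A > B THEN ...
            a = a - b    # line 60: LET A=A-B
        else:
            b = b - a    # line 40: LET B=B-A
    return a             # line 80: PRINT A

assert gcd(3009, 884) == 17
assert gcd(40902, 24140) == 34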
Another interesting case is the two relatively prime numbers 14157 and 5950. But exceptional cases must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all.) What happens when one number is zero, or both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 rocket failure.

Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm". Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof.

Measuring and improving the Euclid algorithms

Elegance (compactness) versus goodness (speed): With only 6 core instructions, "Elegant" is the clear winner compared to "Inelegant" at 13 instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps). Algorithm analysis indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed.

Can the algorithms be improved? Once the programmer judges a program "fit" and "effective"—that is, it computes the function intended by its author—then the question becomes, can it be improved? The compactness of "Inelegant" can be improved by the elimination of 5 steps. But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm; rather, it can only be done heuristically, i.e. by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides a hint that these steps, together with steps 2 and 3, can be eliminated. This reduces the number of core instructions from 13 to 8, which makes it "more elegant" than "Elegant" at 9 steps.

The speed of "Elegant" can be improved by moving the "B=0?" test outside of the two subtraction loops. This change calls for the addition of 3 instructions (B=0?, A=0?, GOTO). Now "Elegant" computes the example-numbers faster; whether for any given A, B and R, S this is always the case would require a detailed analysis.

It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, the largest-number algorithm above has a time requirement of O(n), using the big O notation with n as the length of the list. At all times the algorithm only needs to remember two values: the largest number found so far, and its current position in the input list.
Therefore it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or "effort" than others. For example, a binary search algorithm usually outperforms a brute-force sequential search when used for table lookups on sorted lists.

Formal versus empirical

The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general representation. However, ultimately, most algorithms are usually implemented on particular hardware/software platforms and their algorithmic efficiency is eventually put to the test using real code. For the solution of a "one-off" problem, the efficiency of a particular algorithm may not have significant consequences (unless n is extremely large), but for algorithms designed for fast interactive, commercial or long-life scientific usage it may be critical. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.

Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization. To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation relating to FFT algorithms (used heavily in the field of image processing) can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.

There are various ways to classify algorithms, each with its own merits. One way to classify algorithms is by implementation means.

- Recursion or iteration: A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition matches, which is a method common to functional programming. Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, the Towers of Hanoi is well understood using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
- Logical: An algorithm may be viewed as controlled logical deduction. This notion may be expressed as: Algorithm = logic + control. The logic component expresses the axioms that may be used in the computation and the control component determines the way in which deduction is applied to the axioms. This is the basis for the logic programming paradigm. In pure logic programming languages the control component is fixed and algorithms are specified by supplying only the logic component.
The appeal of this approach is the elegant semantics: a change in the axioms has a well-defined change in the algorithm.

- Serial, parallel or distributed: Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a problem at the same time, whereas distributed algorithms utilize multiple machines connected with a network. Parallel or distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together. The resource consumption in such algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable. Some problems have no parallel algorithms, and are called inherently serial problems.
- Deterministic or non-deterministic: Deterministic algorithms solve the problem with exact decision at every step of the algorithm, whereas non-deterministic algorithms solve problems via guessing, although typical guesses are made more accurate through the use of heuristics.
- Exact or approximate: While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Approximation may use either a deterministic or a random strategy. Such algorithms have practical value for many hard problems.
- Quantum algorithm: Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.

By design paradigm

Another way of classifying algorithms is by their design methodology or paradigm. There are a number of paradigms, each different from the others. Furthermore, each of these categories includes many different types of algorithms. Some common paradigms are:

- Brute-force or exhaustive search: This is the naive method of trying every possible solution to see which is best.
- Divide and conquer: A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. One such example of divide and conquer is merge sorting: the data is divided into segments, each segment is sorted, and the sorted segments are then merged in the conquer phase. A simpler variant of divide and conquer is called a decrease and conquer algorithm, which solves an identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems, so the conquer stage is more complex than in decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm.
- Search and enumeration: Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems.
This category also includes search algorithms, branch and bound enumeration, and backtracking.

- Randomized algorithm: Such algorithms make some choices randomly (or pseudo-randomly). They can be very useful in finding approximate solutions for problems where finding exact solutions can be impractical (see the heuristic method below). For some of these problems, it is known that the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithms for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms:
- Monte Carlo algorithms return a correct answer with high probability (e.g. RP is the subclass of these that run in polynomial time).
- Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bound, e.g. ZPP.
- Reduction of complexity: This technique involves solving a difficult problem by transforming it into a better-known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithm's. For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.

For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:

- Linear programming: When searching for optimal solutions to a linear function bound to linear equality and inequality constraints, the constraints of the problem can be used directly in producing the optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem additionally requires that one or more of the unknowns must be an integer, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e. the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
- Dynamic programming: When a problem shows optimal substructure — meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems — and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions that have already been computed. For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is in the caching or memoization of recursive calls.
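A minimal sketch of that difference (Python; Fibonacci numbers are our stand-in example, chosen only because their subproblems overlap heavily):

from functools import lru_cache

def fib_plain(n):
    # Straightforward recursion: recomputes the same subproblems over and over.
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # The same recursion with memoization: each subproblem is solved once.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# fib_plain(30) makes roughly 2.7 million recursive calls;
# fib_memo(30) makes 31.
print(fib_plain(30), fib_memo(30))  # -> 832040 832040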
When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity.

- The greedy method: A greedy algorithm is similar to a dynamic programming algorithm in that it works by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution, which may be given or have been constructed in some way, and improve it by making small modifications. For some problems they can find the optimal solution, while for others they stop at local optima, that is, at solutions that cannot be improved by the algorithm but are not optimum. The most popular use of greedy algorithms is finding the minimal spanning tree, for which the optimal solution is attainable with this method. Huffman Tree, Kruskal, Prim, and Sollin are greedy algorithms that can solve this optimization problem.
- The heuristic method: In optimization problems, heuristic algorithms can be used to find a solution close to the optimal solution in cases where finding the optimal solution is impractical. These algorithms work by getting closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. Their merit is that they can find a solution very close to the optimal solution in a relatively short time. Such algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some of them, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.

By field of study

Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques. Fields tend to overlap with each other, and algorithm advances in one field may improve those of other, sometimes completely unrelated, fields. For example, dynamic programming was invented for optimization of resource consumption in industry, but is now used in solving a broad range of problems in many fields.

By complexity

Algorithms can be classified by the amount of time they need to complete compared to their input size. There is a wide variety: some algorithms complete in linear time relative to input size, some do so in an exponential amount of time or even worse, and some never halt. Additionally, some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. Owing to this, it was found to be more suitable to classify the problems themselves instead of the algorithms into equivalence classes based on the complexity of the best possible algorithms for them. Burgin (2005, p.
24) uses a generalized definition of algorithms that relaxes the common requirement that the output of the algorithm that computes a function must be determined after a finite number of steps. He defines a super-recursive class of algorithms as "a class of algorithms in which it is possible to compute functions not computable by any Turing machine" (Burgin 2005, p. 107). This is closely related to the study of methods of hypercomputation.

By evaluative type

To maintain balance while integrating machines into society, one may classify algorithms by the types of evaluation they perform. A number of philosophers have hypothesized that societies benefit from evaluative diversity much as they benefit from diversity of gender and blood type (e.g. Dean 2012, Sober & Wilson 1998, Hertzke & McConkey 1998, and Bellah 1985). Technology could threaten those moral ecosystems like an invasive species if it skews the diversity mix. Wallach & Allen (2008) classified decision-making algorithms into three evaluative types: Bottom-up algorithms make judgments unpredictable to their programmers (e.g. evolving software). All others (top-down) were divided into deontological (which can be relied upon to implement programmed rules) vs. consequentialist (which can be relied upon to maximize a programmed measure). As examples, a standard calculator would be deontological, while machine learning for trading stocks would be consequentialist. Santos-Lang renamed the deontological and consequentialist classes "institutional" and "negotiator" respectively, to avoid the implication that all deontological and consequentialist theories of ethics can be implemented as algorithms, and split the bottom-up class into "gadfly" (algorithms which are unpredictable because of their use of randomness generators) vs. "relational" (algorithms which are unpredictable because of network effects). A mutator in evolutionary computation would be an example of a gadfly, while a class 3 or 4 cellular automaton would be an example of a relational machine. Santos-Lang noted that algorithms often have subcomponents of other types. For example, a stock trading negotiator may implement a genetic algorithm, and thus contain gadfly mutators, and mutators may in turn have institutional and relational subcomponents, all computation being relational at the level of underlying chemistry (Santos-Lang 2014).

Continuous algorithms

The adjective "continuous" when applied to the word "algorithm" can mean:

- An algorithm operating on data that represents continuous quantities, even though this data is represented by discrete approximations—such algorithms are studied in numerical analysis; or
- An algorithm in the form of a differential equation that operates continuously on the data, running on an analog computer.

Legal issues

See also: Software patents for a general overview of the patentability of software, including computer-implemented algorithms.

Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), and hence algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable.
The patenting of software is highly controversial, and there are highly criticized patents involving algorithms, especially data compression algorithms, such as Unisys' LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).

Etymology

The word "algorithm", or "algorism" in some older spellings, comes from the name al-Khwārizmī, pronounced in classical Arabic as Al-Khwarithmi. Al-Khwārizmī (Persian: خوارزمي, c. 780–850) was a Persian mathematician, astronomer, geographer and a scholar in the House of Wisdom in Baghdad, whose name means "the native of Khwarezm", a city that was part of the Greater Iran during his era and now is in modern-day Uzbekistan. About 825, he wrote a treatise in the Arabic language, which was translated into Latin in the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name. Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through his other book, the Algebra. In late medieval Latin, algorismus, the corruption of his name, simply meant the "decimal number system"; that is still the meaning of modern English algorism. In 17th-century French the word's form, but not its meaning, changed to algorithme. English adopted the French term soon afterwards, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English.

An alternative etymology claims origin from the terms algebra, in its late medieval sense of "Arabic arithmetic", and arithmos, the Greek term for number (thus literally meaning "Arabic numbers" or "Arabic calculation"). The algorithms of Al-Khwarizmi's works are not meant in their modern sense but as a type of repetitive calculus (it is worth mentioning that his fundamental work, known as the Algebra, was originally titled "The Compendious Book on Calculation by Completion and Balancing", describing types of repetitive calculation and quadratic equations). In that sense, algorithms were known in Europe long before Al-Khwarizmi. The oldest algorithm known today is the Euclidean algorithm (see also Extended Euclidean algorithm). Before the coining of the term algorithm, the Greeks called such procedures anthyphairesis, literally meaning anti-subtraction or reciprocal subtraction. Algorithms were known to the Greeks centuries before Euclid. Instead of the word algebra the Greeks used the term arithmetica (ἀριθμητική), as in the works of Diophantus, the so-called "father of Algebra" (see also Diophantine equation and Eudoxos).

History: Development of the notion of "algorithm"

Origin

The word algorithm comes from the name of the 9th-century Persian mathematician Abu Abdullah Muhammad ibn Musa Al-Khwarizmi, whose work built upon that of the 7th-century Indian mathematician Brahmagupta. The word algorism originally referred only to the rules of performing arithmetic using Hindu–Arabic numerals but evolved via European Latin translation of Al-Khwarizmi's name into algorithm by the 18th century. The use of the word evolved to include all definite procedures for solving problems or performing tasks.

Discrete and distinguishable symbols

Tally-marks: To keep track of their flocks, their sacks of grain and their money the ancients used tallying: accumulating stones or marks scratched on sticks, or making discrete symbols in clay.
Through the Babylonian and Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine computations.

Manipulation of symbols as "place holders" for numbers: algebra

The work of the ancient Greek geometers (Euclidean algorithm), the Indian mathematician Brahmagupta, the Persian mathematician Al-Khwarizmi (from whose name the terms "algorism" and "algorithm" are derived), and Western European mathematicians culminated in Leibniz's notion of the calculus ratiocinator (ca 1680): A good century and a half ahead of his time, Leibniz proposed an algebra of logic, an algebra that would specify the rules for manipulating logical concepts in the manner that ordinary algebra specifies the rules for manipulating numbers.

Mechanical contrivances with discrete states

The clock: Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle Ages]", in particular the verge escapement that provides us with the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" beginning in the 13th century and finally to "computational machines"—the difference engine and analytical engines of Charles Babbage and Countess Ada Lovelace, mid-19th century. Lovelace is credited with the first creation of an algorithm intended for processing on a computer - Babbage's analytical engine, the first device considered a real Turing-complete computer instead of just a calculator - and is sometimes called "history's first programmer" as a result, though a full implementation of Babbage's second device would not be realized until decades after her lifetime.

Logical machines

1870—Stanley Jevons' "logical abacus" and "logical machine": The technical problem was to reduce Boolean equations when presented in a form similar to what are now known as Karnaugh maps. Jevons (1880) describes first a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of the [logical] combinations can be picked out mechanically ... More recently, however, I have reduced the system to a completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be called a Logical Machine". His machine came equipped with "certain moveable wooden rods" and "at the foot are 21 keys like those of a piano [etc] ...". With this machine he could analyze a "syllogism or any other simple logical argument". This machine he displayed in 1870 before the Fellows of the Royal Society.

Another logician, John Venn, however, in his 1881 Symbolic Logic, turned a jaundiced eye to this effort: "I have no high estimate myself of the interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm characterizations. But not to be outdone he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's abacus ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all that can be rationally expected of any logical machine".
Jacquard loom, Hollerith punch cards, telegraphy and telephony—the electromechanical relay: Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to the development of the first computers. By the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and distinguishable encoding of letters as "dots and dashes" a common sound. By the late 19th century the ticker tape (ca 1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the teleprinter (ca. 1910) with its punched-paper use of Baudot code on tape. Telephone-switching networks of electromechanical relays (invented 1835) were behind the work of George Stibitz (1937), the inventor of the digital adding device. While working at Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device". Davis (2000) observes the particular importance of the electromechanical relay (with its two "binary states" open and closed): - "It was only with the development, beginning in the 1930s, of electromechanical calculators using electrical relays, that machines were built having the scope Babbage had envisioned." Mathematics during the 19th century up to the mid-20th century Symbols and rules: In rapid succession, the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's The principles of arithmetic, presented by a new method (1889) was "the first attempt at an axiomatization of mathematics in a symbolic language". But van Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in logic", in which we see a "formula language", that is, a lingua characterica, a language written with special symbols "for pure thought", that is, free from rhetorical embellishments, constructed from specific symbols that are manipulated according to definite rules. The work of Frege was further simplified and amplified by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913). The paradoxes: At the same time a number of disturbing paradoxes appeared in the literature, in particular the Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard paradox. The resultant considerations led to Kurt Gödel's paper (1931)—he specifically cites the paradox of the liar—that completely reduces rules of recursion to numbers. Effective calculability: In an effort to solve the Entscheidungsproblem, defined precisely by Hilbert in 1928, mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or "effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared: the λ-calculus of Alonzo Church, Stephen Kleene, and J. B. Rosser; a finely honed definition of "general recursion" from the work of Gödel acting on suggestions of Jacques Herbrand (cf. Gödel's Princeton lectures of 1934) and subsequent simplifications by Kleene;
Church's proof that the Entscheidungsproblem was unsolvable; Emil Post's definition of effective calculability as a worker mindlessly following a list of instructions to move left or right through a sequence of rooms and, while there, either mark or erase a paper or observe the paper and make a yes-no decision about the next instruction; Alan Turing's proof that the Entscheidungsproblem was unsolvable by use of his "a- [automatic-] machine", in effect almost identical to Post's "formulation"; J. Barkley Rosser's definition of "effective method" in terms of "a machine"; and S. C. Kleene's proposal of a precursor to the "Church thesis" that he called "Thesis I", followed a few years later by Kleene's renaming his thesis "Church's Thesis" and proposing "Turing's Thesis". Emil Post (1936) and Alan Turing (1936–37, 1939) Here is a remarkable coincidence: two men who did not know each other described a process of men-as-computers working on computations, and they yielded virtually identical definitions. Emil Post (1936) described the actions of a "computer" (human being) as follows: - "...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to be carried out, and a fixed unalterable set of directions." His symbol space would be - "a two way infinite sequence of spaces or boxes... The problem solver or worker is to move and work in this symbol space, being capable of being in, and operating in but one box at a time.... a box is to admit of but two possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke. - "One box is to be singled out and called the starting point. ...a specific problem is to be given in symbolic form by a finite number of boxes [i.e., INPUT] being marked with a stroke. Likewise the answer [i.e., OUTPUT] is to be given in symbolic form by such a configuration of marked boxes.... - "A set of directions applicable to a general problem sets up a deterministic process when applied to each specific problem. This process terminates only when it comes to the direction of type (C) [i.e., STOP]". See more at Post–Turing machine. Alan Turing's work preceded that of Stibitz (1937); it is unknown whether Stibitz knew of the work of Turing. Turing's biographer believed that Turing's use of a typewriter-like model derived from a youthful interest: "Alan had dreamt of inventing typewriters as a boy; Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'". Given the prevalence of Morse code and telegraphy, ticker tape machines, and teletypewriters, we might conjecture that all were influences. Turing—his model of computation is now called a Turing machine—begins, as did Post, with an analysis of a human computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further and creates a machine as a model of computation of numbers. - "Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book....I assume then that the computation is carried out on one-dimensional paper, i.e., on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite.... - "The behaviour of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment.
We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite... - "Let us imagine that the operations performed by the computer to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided." Turing's reduction yields the following: - "The simple operations must therefore include: - "(a) Changes of the symbol on one of the observed squares - "(b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares. "It may be that some of these changes necessarily involve a change of state of mind. The most general single operation must therefore be taken to be one of the following: - "(A) A possible change (a) of symbol together with a possible change of state of mind. - "(B) A possible change (b) of observed squares, together with a possible change of state of mind" - "We may now construct a machine to do the work of this computer." A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it: - "A function is said to be "effectively calculable" if its values can be found by some purely mechanical process. Though it is fairly easy to get an intuitive grasp of this idea, it is nevertheless desirable to have some more definite, mathematically expressible definition . . . [he discusses the history of the definition pretty much as presented above with respect to Gödel, Herbrand, Kleene, Church, Turing and Post] . . . We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability † with effective calculability . . . . - "† We shall use the expression "computable function" to mean a function calculable by a machine, and we let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions". J. B. Rosser (1939) and S. C. Kleene (1943) J. Barkley Rosser defined an 'effective [mathematical] method' in the following manner (italicization added): - "'Effective method' is used here in the rather special sense of a method each step of which is precisely determined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date. [his footnote #5; see discussion immediately below]. The simplest of these to state (due to Post and Turing) says essentially that an effective method of solving certain sets of problems exists if one can build a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer. All three definitions are equivalent, so it doesn't matter which one is used. Moreover, the fact that all three are equivalent is a very strong argument for the correctness of any one."
(Rosser 1939:225–6) Rosser's footnote #5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular Church's use of it in his An Unsolvable Problem of Elementary Number Theory (1936); (2) Herbrand and Gödel and their use of recursion, in particular Gödel's use in his famous paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I (1931); and (3) Post (1936) and Turing (1936–7) in their mechanism-models of computation. - "12. Algorithmic theories... In setting up a complete algorithmic theory, what we do is to describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "yes" or "no," to the question, "is the predicate value true?"" (Kleene 1943:273) History after 1950 A number of efforts have been directed toward further refinement of the definition of "algorithm", and activity is ongoing because of issues surrounding, in particular, foundations of mathematics (especially the Church–Turing thesis) and philosophy of mind (especially arguments around artificial intelligence). For more, see Algorithm characterizations. - Abstract machine - Algorithm engineering - Algorithmic composition - Algorithmic synthesis - Algorithmic trading - Garbage in, garbage out - Introduction to Algorithms - List of algorithm general topics - List of important publications in theoretical computer science - Algorithms - Numerical Mathematics Consortium - Theory of computation - "Any classical mathematical algorithm, for example, can be described in a finite number of English words" (Rogers 1987:2). - Well defined with respect to the agent that executes the algorithm: "There is a computing agent, usually human, which can react to the instructions and carry out the computations" (Rogers 1987:2). - "an algorithm is a procedure for computing a function (with respect to some chosen notation for integers) ... this limitation (to numerical functions) results in no loss of generality" (Rogers 1987:1). - "An algorithm has zero or more inputs, i.e., quantities which are given to it initially before the algorithm begins" (Knuth 1973:5). - "A procedure which has all the characteristics of an algorithm except that it possibly lacks finiteness may be called a 'computational method'" (Knuth 1973:5). - "An algorithm has one or more outputs, i.e. quantities which have a specified relation to the inputs" (Knuth 1973:5). - Whether or not a process with random interior processes (not including the input) is an algorithm is debatable. Rogers opines that "a computation is carried out in a discrete stepwise fashion, without use of continuous methods or analogue devices . . . carried forward deterministically, without resort to random methods or devices, e.g., dice" (Rogers 1987:2). - Kleene 1943 in Davis 1965:274 - Rosser 1939 in Davis 1965:225 - Moschovakis, Yiannis N. (2001). "What is an algorithm?". In Engquist, B.; Schmid, W. Mathematics Unlimited — 2001 and Beyond. Springer. pp. 919–936 (Part II). ISBN 9783540669135. - Hogendijk, Jan P. (1998). "al-Khwarizmi". Pythagoras 38 (2): 4–5. ISSN 0033-4766. - Oaks, Jeffrey A. "Was al-Khwarizmi an applied algebraist?". University of Indianapolis. Retrieved 2008-05-30. - Stone 1973:4 - Stone simply requires that "it must terminate in a finite number of steps" (Stone 1973:7–8).
- Boolos and Jeffrey 1974, 1999:19 - cf Stone 1972:5 - Knuth 1973:7 states: "In practice we not only want algorithms, we want good algorithms ... one criterion of goodness is the length of time taken to perform the algorithm ... other criteria are the adaptability of the algorithm to computers, its simplicity and elegance, etc." - cf Stone 1973:6 - Stone 1973:7–8 states that there must be "...a procedure that a robot [i.e., computer] can follow in order to determine precisely how to obey the instruction." Stone adds finiteness of the process, and definiteness (having no ambiguity in the instructions), to this definition. - Knuth, loc. cit. - Minsky 1967:105 - Gurevich 2000:1, 3 - Sipser 2006:157 - Knuth 1973:7 - Chaitin 2005:32 - Rogers 1987:1–2 - In his essay "Calculations by Man and Machine: Conceptual Analysis", Sieg 2002:390 credits this distinction to Robin Gandy; cf Wilfried Sieg, et al., 2002 Reflections on the Foundations of Mathematics: Essays in Honor of Solomon Feferman, Association for Symbolic Logic, A K Peters Ltd, Natick, MA. - cf Gandy 1980:126, Robin Gandy Church's Thesis and Principles for Mechanisms, appearing on pp. 123–148 in J. Barwise et al. 1980 The Kleene Symposium, North-Holland Publishing Company. - A "robot": "A computer is a robot that performs any task that can be described as a sequence of instructions." cf Stone 1972:3 - Lambek's "abacus" is a "countably infinite number of locations (holes, wires etc.) together with an unlimited supply of counters (pebbles, beads, etc). The locations are distinguishable, the counters are not". The holes have unlimited capacity, and standing by is an agent who understands and is able to carry out the list of instructions (Lambek 1961:295). Lambek references Melzak, who defines his Q-machine as "an indefinitely large number of locations . . . an indefinitely large supply of counters distributed among these locations, a program, and an operator whose sole purpose is to carry out the program" (Melzak 1961:283). B-B-J (loc. cit.) add the stipulation that the holes are "capable of holding any number of stones" (p. 46). Both Melzak and Lambek appear in The Canadian Mathematical Bulletin, vol. 4, no. 3, September 1961. - If no confusion results, the word "counters" can be dropped, and a location can be said to contain a single "number". - "We say that an instruction is effective if there is a procedure that the robot can follow in order to determine precisely how to obey the instruction." (Stone 1972:6) - cf Minsky 1967: Chapter 11 "Computer models" and Chapter 14 "Very Simple Bases for Computability", pp. 255–281 in particular. - cf Knuth 1973:3. - But always preceded by IF–THEN to avoid improper subtraction. - However, a few different assignment instructions (e.g. DECREMENT, INCREMENT and ZERO/CLEAR/EMPTY for a Minsky machine) are also required for Turing-completeness; their exact specification is somewhat up to the designer. The unconditional GOTO is a convenience; it can be constructed by initializing a dedicated location to zero, e.g. the instruction "Z ← 0"; thereafter the instruction IF Z=0 THEN GOTO xxx is unconditional. - Knuth 1973:4 - Stone 1972:5. Methods for extracting roots are not trivial: see Methods of computing square roots. - Leeuwen, Jan van (1990). Handbook of Theoretical Computer Science: Algorithms and Complexity. Volume A. Elsevier. p. 85. ISBN 978-0-444-88071-0. - John G. Kemeny and Thomas E. Kurtz 1985 Back to Basic: The History, Corruption, and Future of the Language, Addison-Wesley Publishing Company, Inc.
Reading, MA, ISBN 0-201-13433-0. - Tausworthe 1977:101 - Tausworthe 1977:142 - Knuth 1973 section 1.2.1, expanded by Tausworthe 1977 at pages 100ff and Chapter 9.1 - cf Tausworthe 1977 - Heath 1908:300; Hawking's Dover 2005 edition derives from Heath. - "'Let CD, measuring BF, leave FA less than itself.' This is a neat abbreviation for saying, measure along BA successive lengths equal to CD until a point F is reached such that the length FA remaining is less than CD; in other words, let BF be the largest exact multiple of CD contained in BA" (Heath 1908:297). - For modern treatments using division in the algorithm, see Hardy and Wright 1979:180, Knuth 1973:2 (Volume 1), plus more discussion of Euclid's algorithm in Knuth 1969:293–297 (Volume 2). - Euclid covers this question in his Proposition 1. - "Euclid's Elements, Book VII, Proposition 2". Aleph0.clarku.edu. Retrieved May 20, 2012. - Knuth 1973:13–18. He credits "the formulation of algorithm-proving in terms of assertions and induction" to R. W. Floyd, Peter Naur, C. A. R. Hoare, H. H. Goldstine and J. von Neumann. Tausworthe 1977 borrows Knuth's Euclid example and extends Knuth's method in section 9.1 Formal Proofs (pages 288–298). - Tausworthe 1977:294 - cf Knuth 1973:7 (Vol. I), and his more-detailed analyses in Knuth 1969:294–313 (Vol. II). - Breakdown occurs when an algorithm tries to compact itself. Success would solve the Halting problem. - Gillian Conahan (January 2013). "Better Math Makes Faster Data Networks". discovermagazine.com. - Haitham Hassanieh, Piotr Indyk, Dina Katabi, and Eric Price, ACM-SIAM Symposium On Discrete Algorithms (SODA), Kyoto, January 2012. See also the sFFT Web Page. - Kowalski 1979 - Carroll, Sue; Daughtrey, Taz (July 4, 2007). Fundamental Concepts for the Software Quality Engineer. American Society for Quality. pp. 282 et seq. ISBN 978-0-87389-720-4. - For instance, the volume of a convex polytope (described using a membership oracle) can be approximated to high accuracy by a randomized polynomial time algorithm, but not by a deterministic one: see Dyer, Martin; Frieze, Alan; Kannan, Ravi (January 1991), "A Random Polynomial-time Algorithm for Approximating the Volume of Convex Bodies", J. ACM (New York, NY, USA: ACM) 38 (1): 1–17, doi:10.1145/102782.102783. - George B. Dantzig and Mukund N. Thapa. 2003. Linear Programming 2: Theory and Extensions. Springer-Verlag. - Tsypkin (1971). Adaptation and Learning in Automatic Systems. Academic Press. p. 54. ISBN 978-0-08-095582-7. - Brezina, Corona (2006). Al-Khwarizmi: The Inventor of Algebra. The Rosen Publishing Group. ISBN 978-1-4042-0513-0. - Foremost mathematical texts in history, according to Carl B. Boyer. - Etymology of algorithm at Dictionary.Reference.com - Becker, O. (1933). "Eudoxus-Studien I. Eine voreuklidische Proportionslehre und ihre Spuren bei Aristoteles und Euklid". Quellen und Studien zur Geschichte der Mathematik B 2: 311–333. - "History of Algorithms and Algorithmics". Scriptol.com. Retrieved November 7, 2012. - Davis 2000:18 - Bolter 1984:24 - Bolter 1984:26 - Bolter 1984:33–34, 204–206. - All quotes from W. Stanley Jevons 1880 Elementary Lessons in Logic: Deductive and Inductive, Macmillan and Co., London and New York. Republished as a googlebook; cf Jevons 1880:199–201. Louis Couturat 1914 The Algebra of Logic, The Open Court Publishing Company, Chicago and London. Republished as a googlebook; cf Couturat 1914:75–76, which gives a few more details; interestingly, he compares this to a typewriter as well as a piano.
Jevons states that the account is to be found in the January 20, 1870 Proceedings of the Royal Society. - Jevons 1880:199–200 - All quotes from John Venn 1881 Symbolic Logic, Macmillan and Co., London. Republished as a googlebook. cf Venn 1881:120–125. The interested reader can find a deeper explanation in those pages. - Bell and Newell diagram 1971:39, cf. Davis 2000 - Melina Hill, Valley News Correspondent, A Tinkerer Gets a Place in History, Valley News West Lebanon NH, Thursday, March 31, 1983, page 13. - Davis 2000:14 - van Heijenoort 1967:81ff - van Heijenoort's commentary on Frege's Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought in van Heijenoort 1967:1 - Dixon 1906, cf. Kleene 1952:36–40 - cf. footnote in Alonzo Church 1936a in Davis 1965:90 and 1936b in Davis 1965:110 - Kleene 1935–6 in Davis 1965:237ff, Kleene 1943 in Davis 1965:255ff - Church 1936 in Davis 1965:88ff - cf. "Formulation I", Post 1936 in Davis 1965:289–290 - Turing 1936–7 in Davis 1965:116ff - Rosser 1939 in Davis 1965:226 - Kleene 1943 in Davis 1965:273–274 - Kleene 1952:300, 317 - Kleene 1952:376 - Turing 1936–7 in Davis 1965:289–290 - Turing 1936 in Davis 1965, Turing 1939 in Davis 1965:160 - Hodges, p. 96 - Turing 1936–7:116 - Turing 1936–7 in Davis 1965:136 - Turing 1939 in Davis 1965:160 - Axt, P. (1959). On a Subrecursive Hierarchy and Primitive Recursive Degrees, Transactions of the American Mathematical Society 92, pp. 85–105. - Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York. ISBN 0-07-004357-4. - Bellah, Robert Neelly (1985). Habits of the Heart: Individualism and Commitment in American Life. Berkeley: University of California Press. ISBN 978-0-520-25419-0. - Blass, Andreas; Gurevich, Yuri (2003). "Algorithms: A Quest for Absolute Definitions". Bulletin of the European Association for Theoretical Computer Science 81. Includes an excellent bibliography of 56 references. - Boolos, George; Jeffrey, Richard (1974, 1999). Computability and Logic (4th ed.). Cambridge University Press, London. ISBN 0-521-20402-X. Cf. Chapter 3, Turing machines, where they discuss "certain enumerable sets not effectively (mechanically) enumerable". - Burgin, Mark (2004). Super-Recursive Algorithms. Springer. ISBN 978-0-387-95569-8. - Campagnolo, M.L., Moore, C., and Costa, J.F. (2000). An analog characterization of the subrecursive functions. In Proc. of the 4th Conference on Real Numbers and Computers, Odense University, pp. 91–109. - Church, Alonzo (1936a). "An Unsolvable Problem of Elementary Number Theory". The American Journal of Mathematics 58 (2): 345–363. doi:10.2307/2371045. JSTOR 2371045. Reprinted in The Undecidable, p. 89ff. The first expression of "Church's Thesis". See in particular page 100 (The Undecidable) where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc. - Church, Alonzo (1936b). "A Note on the Entscheidungsproblem". The Journal of Symbolic Logic 1 (1): 40–41. doi:10.2307/2269326. JSTOR 2269326. Church, Alonzo (1936). "Correction to a Note on the Entscheidungsproblem". The Journal of Symbolic Logic 1 (3): 101–102. doi:10.2307/2269030. JSTOR 2269030. Reprinted in The Undecidable, p. 110ff. Church shows that the Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes. - Daffa', Ali Abdullah al- (1977). The Muslim Contribution to Mathematics. London: Croom Helm. ISBN 0-85664-464-1. - Davis, Martin (1965).
The Undecidable: Basic Papers On Undecidable Propositions, Unsolvable Problems and Computable Functions. New York: Raven Press. ISBN 0-486-43228-9. Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included; those cited in the article are listed here by author's name. - Davis, Martin (2000). Engines of Logic: Mathematicians and the Origin of the Computer. New York: W. W. Norton. ISBN 0-393-32229-7. Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel and Turing, with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc. - Paul E. Black, algorithm at the NIST Dictionary of Algorithms and Data Structures. - Dean, Tim (2012). "Evolution and moral diversity". Baltic International Yearbook of Cognition, Logic and Communication 7. - Dennett, Daniel (1995). Darwin's Dangerous Idea. New York: Touchstone/Simon & Schuster. ISBN 0-684-80290-2. - Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol. 1, no. 1 (July 2000), pages 77–111. Includes a bibliography of 33 sources. - Hertzke, Allen D.; McRorie, Chris (1998). "The Concept of Moral Ecology". In Lawler, Peter Augustine; McConkey, Dale. Community and Political Thought Today. Westport, CT: Praeger. - Kleene, Stephen C. (1936). "General Recursive Functions of Natural Numbers". Mathematische Annalen 112 (5): 727–742. doi:10.1007/BF01565439. Presented to the American Mathematical Society, September 1935. Reprinted in The Undecidable, p. 237ff. Kleene's definition of "general recursion" (known now as mu-recursion) was used by Church in his 1935 paper An Unsolvable Problem of Elementary Number Theory that proved the "decision problem" to be "undecidable" (i.e., a negative result). - Kleene, Stephen C. (1943). "Recursive Predicates and Quantifiers". American Mathematical Society Transactions 54 (1): 41–73. doi:10.2307/1990131. JSTOR 1990131. Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis" (Kleene 1952:317) (i.e., the Church thesis). - Kleene, Stephen C. (1952). Introduction to Metamathematics (First ed.). North-Holland Publishing Company. ISBN 0-7204-2103-9. Excellent—accessible, readable—reference source for mathematical "foundations". - Knuth, Donald (1997). Fundamental Algorithms, Third Edition. Reading, Massachusetts: Addison–Wesley. ISBN 0-201-89683-4. - Knuth, Donald (1969). Volume 2/Seminumerical Algorithms, The Art of Computer Programming, First Edition. Reading, Massachusetts: Addison–Wesley. - Kosovsky, N. K. Elements of Mathematical Logic and its Application to the Theory of Subrecursive Algorithms, LSU Publ., Leningrad, 1981. - Kowalski, Robert (1979). "Algorithm = Logic + Control". Communications of the ACM 22 (7): 424–436. doi:10.1145/359131.359136. - A. A. Markov (1954). Theory of Algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington]. Description: 444 p., 28 cm. Added t.p. in Russian. Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42.
Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS 60-51085.] - Minsky, Marvin (1967). Computation: Finite and Infinite Machines (First ed.). Prentice-Hall, Englewood Cliffs, NJ. ISBN 0-13-165449-7. Minsky expands his "...idea of an algorithm—an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines. - Post, Emil (1936). "Finite Combinatory Processes, Formulation I". The Journal of Symbolic Logic 1 (3): 103–105. doi:10.2307/2269031. JSTOR 2269031. Reprinted in The Undecidable, p. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church–Turing thesis. - Rogers, Jr., Hartley (1987). Theory of Recursive Functions and Effective Computability. The MIT Press. ISBN 0-262-68052-1. - Rosser, J. B. (1939). "An Informal Exposition of Proofs of Gödel's Theorem and Church's Theorem". Journal of Symbolic Logic 4. Reprinted in The Undecidable, p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (pp. 225–226, The Undecidable). - Santos-Lang, Christopher (2014). "Chapter 6: Moral Ecology Approaches" (PDF). In van Rysewyk, Simon; Pontier, Matthijs. Machine Medical Ethics. New York: Springer. pp. 74–96. - Scott, Michael L. (2009). Programming Language Pragmatics (3rd ed.). Morgan Kaufmann Publishers/Elsevier. ISBN 978-0-12-374514-9. - Sipser, Michael (2006). Introduction to the Theory of Computation. PWS Publishing Company. ISBN 0-534-94728-X. - Sober, Elliott; Wilson, David Sloan (1998). Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge: Harvard University Press. - Stone, Harold S. (1972). Introduction to Computer Organization and Data Structures (1972 ed.). McGraw-Hill, New York. ISBN 0-07-061726-0. Cf. in particular the first chapter, titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4). - Tausworthe, Robert C. (1977). Standardized Development of Computer Software Part 1: Methods. Englewood Cliffs NJ: Prentice-Hall, Inc. ISBN 0-13-842195-1. - Turing, Alan M. (1936–7). "On Computable Numbers, With An Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society, Series 2, 42: 230–265. doi:10.1112/plms/s2-42.1.230. Corrections, ibid, vol. 43 (1937), pp. 544–546. Reprinted in The Undecidable, p. 116ff. Turing's famous paper, completed while he was at King's College, Cambridge UK. - Turing, Alan M. (1939). "Systems of Logic Based on Ordinals". Proceedings of the London Mathematical Society 45: 161–228. doi:10.1112/plms/s2-45.1.161. Reprinted in The Undecidable, p. 155ff. Turing's paper that defined "the oracle" was his PhD thesis, written while at Princeton.
- United States Patent and Trademark Office (2006), 2106.02 Mathematical Algorithms, in 2100 Patentability, Manual of Patent Examining Procedure (MPEP). Latest revision August 2006. - Bolter, David J. (1984). Turing's Man: Western Culture in the Computer Age (1984 ed.). The University of North Carolina Press, Chapel Hill NC. ISBN 0-8078-1564-0; ISBN 0-8078-4108-0 (pbk.). - Dilson, Jesse (2007). The Abacus ((1968, 1994) ed.). St. Martin's Press, NY. ISBN 0-312-10409-X (pbk.). - van Heijenoort, Jean (2001). From Frege to Gödel, A Source Book in Mathematical Logic, 1879–1931 ((1967) ed.). Harvard University Press, Cambridge, MA. ISBN 0-674-32449-8 (pbk.), 3rd edition 1976[?]. - Hodges, Andrew (1983). Alan Turing: The Enigma ((1983) ed.). Simon and Schuster, New York. ISBN 0-671-49207-1. Cf. the chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof. - Jean-Luc Chabert (1999). A History of Algorithms: From the Pebble to the Microchip. Springer Verlag. ISBN 978-3-540-63369-3. - Harel, David; Feldman, Yishai (2004). Algorithmics: The Spirit of Computing. Addison-Wesley. ISBN 978-0-321-11784-7. - Knuth, Donald E. (2000). Selected Papers on Analysis of Algorithms. Stanford, California: Center for the Study of Language and Information. - Knuth, Donald E. (2010). Selected Papers on Design of Algorithms. Stanford, California: Center for the Study of Language and Information. - Berlinski, David (2001). The Advent of the Algorithm: The 300-Year Journey from an Idea to the Computer. Harvest Books. ISBN 978-0-15-601391-8. - Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein (2009). Introduction to Algorithms, Third Edition. MIT Press. ISBN 978-0-262-03384-8. - Hazewinkel, Michiel, ed. (2001), "Algorithm", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 - Algorithms at DMOZ - Weisstein, Eric W., "Algorithm", MathWorld. - Dictionary of Algorithms and Data Structures—National Institute of Standards and Technology - Algorithms and Data Structures by Dr. Nikolai Bezroukov - Algorithm repositories: - The Stony Brook Algorithm Repository—State University of New York at Stony Brook - Netlib Repository—University of Tennessee and Oak Ridge National Laboratory - Collected Algorithms of the ACM—Association for Computing Machinery - The Stanford GraphBase—Stanford University - Combinatorica—University of Iowa and State University of New York at Stony Brook - Library of Efficient Datastructures and Algorithms (LEDA)—previously from Max-Planck-Institut für Informatik - Archive of Interesting Code - A semantic wiki to collect, categorize and relate all algorithms and data structures - Lecture notes
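The notes above repeatedly use Euclid's algorithm as the canonical example, in both the repeated-subtraction form Heath describes ("measure along BA successive lengths equal to CD") and the remainder form analyzed by Knuth. A minimal Python sketch of both, assuming nothing beyond those descriptions:

def gcd_by_subtraction(a, b):
    """Euclid's Elements, Book VII: repeatedly 'measure off' the smaller
    length from the larger until the two lengths are equal."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

def gcd_by_division(a, b):
    """The modern remainder form discussed in Knuth (Vol. 1 and Vol. 2)."""
    while b != 0:
        a, b = b, a % b
    return a

# Both forms compute the same greatest common divisor.
assert gcd_by_subtraction(1071, 462) == gcd_by_division(1071, 462) == 21

The division form terminates in far fewer steps than the subtraction form when the two inputs differ greatly in size, which is why the modern treatments cited above prefer it.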
In geometry, a circular segment (symbol: ⌓), also known as a disk segment, is a region of a disk which is "cut off" from the rest of the disk by a secant or a chord. More formally, a circular segment is a region of two-dimensional space that is bounded by a circular arc (of less than π radians by convention) and by the circular chord connecting the endpoints of the arc. Let R be the radius of the arc which forms part of the perimeter of the segment, θ the central angle subtending the arc in radians, c the chord length, s the arc length, h the sagitta (height) of the segment, and a the area of the segment. Usually, chord length and height are given or measured, and sometimes the arc length as part of the perimeter, and the unknowns are area and sometimes arc length. These cannot be calculated simply from chord length and height, so two intermediate quantities, the radius and the central angle, are usually calculated first. Radius and central angle The radius is $R = \frac{h}{2} + \frac{c^2}{8h}$. The central angle is $\theta = 2\arcsin\frac{c}{2R}$. Chord length and height The chord length and height can be back-computed from radius and central angle. The chord length is $c = 2R\sin\frac{\theta}{2}$. The sagitta is $h = R\left(1 - \cos\frac{\theta}{2}\right)$. Arc length and area The arc length, from the familiar geometry of a circle, is $s = R\theta$. The area a of the circular segment is equal to the area of the circular sector minus the area of the triangular portion (using the double angle formula to get an equation in terms of θ): $a = \frac{R^2}{2}\left(\theta - \sin\theta\right)$. In terms of R and h, $a = R^2 \arccos\left(1 - \frac{h}{R}\right) - (R - h)\sqrt{2Rh - h^2}$. Unfortunately, a is a transcendental function of c and h, so no algebraic formula in terms of these can be stated. But what can be stated is that as the central angle gets smaller (or, alternately, the radius gets larger), the area a rapidly and asymptotically approaches $\frac{2}{3}ch$. For small central angles (h ≪ R), $a \approx \frac{2}{3}ch$ is a substantially good approximation. As the central angle approaches π, the area of the segment converges to the area of a semicircle, $\frac{\pi R^2}{2}$, so a good approximation is a delta offset from the latter area, treating the sliver between the chord and the diameter as roughly a rectangle of width c and thickness R − h: $a \approx \frac{\pi R^2}{2} - c(R - h)$ for h > 0.75R. As an example, the area is one quarter of the circle when θ ≈ 2.31 radians (132.3°), corresponding to a height of about 59.6% and a chord length of about 183% of the radius. The perimeter p is the arc length plus the chord length: $p = s + c = R\theta + 2R\sin\frac{\theta}{2}$. As a proportion of the whole area of the disc, $\pi R^2$, the segment occupies $\frac{a}{\pi R^2} = \frac{\theta - \sin\theta}{2\pi}$. The area formula can be used in calculating the volume of a partially filled cylindrical tank lying horizontally. In the design of windows or doors with rounded tops, c and h may be the only known values and can be used to calculate R for the draftsman's compass setting. One can reconstruct the full dimensions of a complete circular object from fragments by measuring the arc length and the chord length of the fragment. The formulas are also used to check hole positions on a circular pattern (especially useful for quality checking on machined products) and to calculate the area or centroid of a planar shape that contains circular segments. - The fundamental relationship between R, c, and h, derivable directly from the Pythagorean theorem applied to the right triangle with hypotenuse R and legs c/2 and R − h, is $R^2 = \left(\frac{c}{2}\right)^2 + (R - h)^2$, which may be solved for R, c, or h as required: $R = \frac{c^2}{8h} + \frac{h}{2}$, $c = 2\sqrt{h(2R - h)}$, $h = R - \sqrt{R^2 - \frac{c^2}{4}}$.
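A minimal numerical sketch of the workflow described above, in Python, assuming the common case where the chord c and height h are the measured quantities (the function and variable names are ours, chosen for the example):

import math

def segment_from_chord_and_height(c, h):
    """Given chord length c and height (sagitta) h, recover the radius and
    central angle, then the derived quantities, using the formulas above."""
    R = c**2 / (8 * h) + h / 2            # from R^2 = (c/2)^2 + (R - h)^2
    theta = 2 * math.asin(c / (2 * R))    # central angle in radians
    if h > R:                             # major segment: angle exceeds pi
        theta = 2 * math.pi - theta
    s = R * theta                         # arc length
    a = 0.5 * R**2 * (theta - math.sin(theta))  # area: sector minus triangle
    p = s + c                             # perimeter
    return {"R": R, "theta": theta, "arc": s, "area": a, "perimeter": p}

# Quarter-of-the-disk example from the text: theta ~ 2.31 rad,
# h ~ 0.596 R, c ~ 1.83 R (here with R = 1).
result = segment_from_chord_and_height(c=1.83, h=0.596)
print(result)  # area comes out close to pi/4 ~ 0.785

Note the branch for h > R: the arcsine alone only returns angles up to π, so the major-segment case has to be folded back in explicitly, mirroring the "less than π radians by convention" caveat above.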
Build a Relay Inspired by Space Communications In this intermediate-level programming activity, students will learn about light, mirrors, and optics while modeling a new technique NASA is using to communicate with spacecraft. Students will use microdevices along with light and mirrors to build a relay that can send information to a distant detector and program their detector to indicate when data is being received. - This activity requires intermediate-level knowledge of programming languages. Students should be familiar with block-coding external sensors or importing libraries for external sensors in Python. - Reiterate safe practices when using laser pointers with students. Implementing a rule such as, "Laser pointers should never leave the table," can help prevent behavior issues. Low-powered lasers are sufficient. - Consider keeping groups smaller than usual for this activity – no more than three students per group – if materials allow for it. This will keep all students engaged. Clearly defined student roles, such as programmer, optical engineer, and relay construction/positioning, may also help with participation. Communicating with spacecraft across the solar system means sending data over enormous distances, with travel times limited by the speed of light. Presently, we communicate with spacecraft via light in the form of radio waves. To send and collect spacecraft data – such as commands, images, measurements, and status reports – we use a system called the Deep Space Network – an array of antennas located in the United States, Spain, and Australia. These arrays, situated about 120 degrees apart, allow us to keep in constant communication with distant spacecraft as Earth rotates. The Deep Space Network is one of two networks in NASA's SCaN, or Space Communications and Navigation, program. However, as space exploration advances, the Deep Space Network needs to communicate with an ever-increasing number of active missions collecting more and more data. The increasing volume and complexity of data are outpacing the number of radio antennas in the network. So engineers at NASA's Jet Propulsion Laboratory are exploring the use of a different frequency of light for spacecraft communications. While the speed of light is constant, it is possible to change the frequency of the light carrying the data so that more information can be transmitted per second. Over the years, NASA has increased the frequency at which data is sent from spacecraft. This is because at higher frequencies, the beam that comes from the spacecraft's antenna is narrower and more of the transmitted energy is focused on the Earth antenna. In fact, switching from the longer wavelengths of radio to the shorter wavelengths of infrared light (IR) increases this effect. Using IR light, the frequency increases as much as 10,000-fold, which could result in 100 times more data being transmitted. This new form of spacecraft communication, called Deep Space Optical Communications, will use a focused beam of light to transmit information. While the optical beam is more effective at maintaining its strength over greater distances than radio waves, it will require very fine directional positioning to accurately reach its target. A relay system made up of small satellites, or CubeSats, could help by serving as a pathway of intermediary nodes, directing the beam to its target.
NASA has already piloted deep space optical communications between Earth and the Moon, and there are plans to test it again during the Psyche mission, launching in 2023. Setup and Programming - Using the desired microdevice (LEGO, Raspberry Pi, Cubit, etc.), have students begin by assembling a simple light sensor. Depending on your students' background knowledge, this may entail using block code or the Python programming language. - Attach and program an indicator that signals when the original light beam is being received by the sensor. Sample code will vary by device; a sketch of the detection logic in Python is provided after this activity description. - Test the detection code and the threshold for triggering the receiver. Revise the code to be certain that the signal registers only when the beam rises above the baseline light level of the classroom. Constructing the Relay - Introduce only one mirror to act as a reflective relay between the light beam and the receiver. Have students familiarize themselves with how even slight movements of the laser pointer will result in difficulty aiming the beam at the sensor. - Ensure that the threshold of the light sensor is still satisfactory. Revise as necessary. - Introduce a second and third relay mirror to the system. Have students document how the intensity of light changes, if at all, as more and more relays are incorporated. Increase the distance between relays and repeat these observations. Does a device with more relays closer together or fewer relays farther apart capture more light? - What are the advantages and disadvantages of radio waves and focused light beams? How did those challenges play out in designing the relay? - What are the advantages and disadvantages of having multiple relays between the source of the beam and the sensor? Consider more than just the experiment at hand. How would this be implemented in communications between Earth and Mars? Jupiter? Beyond? - Refer to the engineering rubric. - Incorporate peer reviews of the design and code of the microdevices as part of the assessment to promote student cross-communication. - Consider differentiation strategies for tailoring the difficulty of the assignment – perhaps by varying the distance between the relay and the sensor, changing the number of relays, and even adding physical obstacles that must be circumvented. - The difficulty can be further increased by having students consider the movement of planets as they orbit the Sun. How can relays be placed to ensure information can be transmitted regardless of where two planets are relative to each other and the Sun? - If your students are familiar with lenses and optics, try including lenses in addition to mirrors. Have students explain how their beam is affected when passing through various materials. Does it help or hinder in delivering the light beam to its target? How could actual relays draw upon these effects to communicate with us back on Earth? Catching a Whisper from Space Students kinesthetically model the mathematics of how NASA communicates with spacecraft. Time 1-2 hrs Collecting Light: Inverse Square Law Demo In this activity, students learn how light and energy are spread throughout space. The rate of change can be expressed mathematically, demonstrating why spacecraft like NASA's Juno need so many solar panels. Time < 30 mins
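Because device APIs differ across boards, the following Python sketch keeps the hardware behind two stand-in functions, read_light_level() and set_indicator(), which are hypothetical placeholders to be replaced with your board's own sensor and LED calls. The calibration and threshold logic is the transferable part of the activity.

import time
import random

def read_light_level():
    """Hypothetical stand-in for a device-specific sensor read; here it
    just simulates ambient classroom light. A laser hit on the sensor
    would push real readings well above this range."""
    return random.uniform(0.10, 0.20)

def set_indicator(on):
    """Hypothetical stand-in indicator; a real device would light an LED."""
    print("SIGNAL RECEIVED" if on else "waiting...")

def calibrate_baseline(samples=50, delay=0.02):
    """Sample the classroom's ambient light to set a detection threshold."""
    readings = []
    for _ in range(samples):
        readings.append(read_light_level())
        time.sleep(delay)
    baseline = sum(readings) / len(readings)
    return baseline * 1.5  # trigger only well above ambient light

def run_detector(threshold, duration=10, delay=0.1):
    """Poll the sensor and flag readings that exceed the threshold."""
    end = time.time() + duration
    while time.time() < end:
        set_indicator(read_light_level() > threshold)
        time.sleep(delay)

run_detector(calibrate_baseline())

The 1.5x margin over the measured baseline is an illustrative choice; students should tune it in step 3 so classroom lighting alone never trips the indicator.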
Beginning & intermediate algebra Here, we show how to work with beginning and intermediate algebra topics, with worked approaches to common homework problems. A composition-of-functions solver is a mathematical tool that allows two or more functions to be composed into a single function. The process of composition is relatively simple: the output of one function is used as the input for the next function in the sequence. Composition of functions can be used to solve problems in a variety of fields, including physics, engineering, and economics. In each case, the goal is to simplify a complex problem by breaking it down into smaller, more manageable pieces, which makes a composition solver useful for anyone working with complex systems (a minimal sketch follows below). A parabola solver first finds the roots of the quadratic equation by applying the quadratic formula. It then plots the points on a graph and connects them to form a parabola. Finally, the focus and directrix of the parabola are found using the vertex form of the equation, y = a(x − h)² + k. Basic mathematics is the study of mathematics that is necessary for everyday life. It includes topics such as addition, subtraction, multiplication, and division, and also covers fractions, decimals, and percents. Basic mathematics is an important subject because it helps us to understand the world around us. It is used in everyday life, such as when we cook or do laundry, and in more complicated situations, such as budgeting or investing. By understanding basic mathematics, we can make better decisions in all areas of our lives. Solving for x in logarithmic equations can be difficult, but a few methods can help. One method is to use the change of base formula, log_b(x) = log_k(x) / log_k(b), which rewrites a logarithm in any base b in terms of a more convenient base k; writing both sides of an equation in a common base then lets you solve for x. For example, the equation log4(x) = log2(x) becomes log2(x)/2 = log2(x) in base 2, whose only solution is x = 1. Another method is to use a graphing calculator: many graphing calculators have a built-in function that solves for x in logarithmic equations; simply enter the equation and press the "solve" button, and the calculator will give you the value of x. Finally, you can also use a table of logarithms: find the values that make the two sides equal and read off x. Solving for x in logarithmic equations can be difficult, but with a little practice, it becomes routine.
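A minimal sketch of the composition idea described above, in Python; compose() and the example functions f and g are illustrative names for this sketch, not any particular product's API:

from functools import reduce

def compose(*functions):
    """Combine functions so the output of each feeds the previous one:
    compose(f, g)(x) computes f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), functions)

f = lambda x: 2 * x + 1   # example outer function
g = lambda x: x ** 2      # example inner function

h = compose(f, g)
print(h(3))  # f(g(3)) = f(9) = 19

# The change of base formula in action: log_4(16) = ln(16) / ln(4)
import math
print(math.log(16, 4), math.log(16) / math.log(4))  # both print 2.0

The same compose() helper chains any number of single-argument functions, which is the "breaking a complex problem into smaller pieces" idea in executable form.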
The Kessler syndrome (also called the Kessler effect, collisional cascading, or ablation cascade), proposed by the NASA scientist Donald J. Kessler in 1978, is a scenario in which the density of objects in low Earth orbit (LEO) is high enough that collisions between objects could cause a cascade in which each collision generates space debris that increases the likelihood of further collisions. One implication is that the distribution of debris in orbit could render space activities and the use of satellites in specific orbital ranges difficult for many generations. Debris generation and destruction Every satellite, space probe, and crewed mission has the potential to produce space debris. A cascading Kessler syndrome becomes more likely as the number of satellites in orbit increases. As of 2014, there were about 2,000 commercial and government satellites orbiting the Earth. It is estimated that there are 600,000 pieces of space junk ranging from 1 cm to 10 cm, and that on average one satellite is destroyed each year. The most commonly used orbits for both crewed and uncrewed space vehicles are low Earth orbits, which cover an altitude range low enough for residual atmospheric drag to be sufficient to help keep the zone clear. Collisions that occur in this altitude range are also less of an issue because the directions in which the fragments fly and/or their lower specific energy often result in orbits intersecting with Earth or having perigee below this altitude. Orbital decay is much slower at altitudes where atmospheric drag is insignificant. Slight atmospheric drag, lunar perturbation, and solar wind drag can gradually bring debris down to lower altitudes where fragments finally re-enter, but this process can take millennia at very high altitudes. The Kessler syndrome is troublesome because of the domino effect and feedback runaway wherein impacts between objects of sizable mass spall off debris from the force of the collision. The fragments can then hit other objects, producing even more space debris: if a large enough collision or explosion were to occur, such as between a space station and a defunct satellite, or as the result of hostile actions in space, then the resulting debris cascade could make prospects for the long-term viability of satellites in low Earth orbit extremely low. However, even a catastrophic Kessler scenario at LEO would pose minimal risk for launches continuing past LEO, or for satellites travelling at medium Earth orbit (MEO) or geosynchronous orbit (GEO). The catastrophic scenarios predict an increase in the number of collisions per year, rather than a physically impassable barrier to space exploration in the higher orbits. Avoidance and reduction Designers of a new vehicle or satellite are frequently required to demonstrate that it can be safely disposed of at the end of its life, for example by use of a controlled atmospheric reentry system or a boost into a graveyard orbit. In order to obtain a license to provide telecommunications services in the United States, the Federal Communications Commission (FCC) requires all geostationary satellites launched after 18 March 2002 to commit to moving to a graveyard orbit at the end of their operational life. U.S. government regulations similarly require a plan to dispose of satellites after the end of their mission, whether through atmospheric re-entry, movement to a storage orbit, or direct retrieval.
One technology proposed to help deal with fragments from 1 cm to 10 cm in size is the laser broom, a proposed multimegawatt land-based laser that could deorbit debris: the side of the debris hit by the laser would ablate and create a thrust that would change the eccentricity of the remains of the fragment until it re-entered harmlessly. The Envisat satellite is a large, inactive satellite with a mass of 8,211 kg (18,102 lb) that drifts at 785 km (488 mi), an altitude where the debris environment is the greatest—two catalogued objects can be expected to pass within about 200 meters of Envisat every year—and that number is likely to increase. It could easily become a major debris contributor from a collision during the next 150 years that it is expected to remain in orbit. - In Peter F. Hamilton's 2001 novel Fallen Dragon, the residents of the planet Santa Chico deliberately set off a Kessler syndrome in order to "close the sky" from marauding spaceships, making further trips to or from the planet impossible. - The initial segment of the 2003 manga Planetes is centered around an orbital debris cleaning crew, whose entire purpose is reducing the Kessler risk. - The 2013 film Gravity features a Kessler syndrome catastrophe as the inciting incident of the story, when the Russians shoot down an old satellite. - Neal Stephenson's 2015 novel Seveneves begins with the unexplained explosion of the Moon into seven large pieces, the subsequent creation of a cloud of debris by Kessler syndrome collisions, and the eventual bombardment of Earth's surface by lunar meteoroids. - In Marc Cameron's 2018 novel Tom Clancy: Oath of Office, Iranian dissidents plot to shoot two Russian nuclear missiles into space to destroy an orbiting satellite, which would then set off the Kessler syndrome. - The 2019 video game Ace Combat 7: Skies Unknown features Erusea firing anti-satellite missiles at Osean satellites, only for Osea to follow suit. This inevitably leads to a Kessler syndrome that plunges the continent of Usea into widespread conflict. - "Scientist: Space weapons pose debris threat – CNN". Articles.CNN.com. 2002-05-03. Archived from the original on 2012-09-30. Retrieved 2011-03-17. - "The Danger of Space Junk – 98.07". TheAtlantic.com. Retrieved 2011-03-17. - Donald J. Kessler and Burton G. Cour-Palais (1978). "Collision Frequency of Artificial Satellites: The Creation of a Debris Belt". Journal of Geophysical Research. 83: 2637–2646. Bibcode:1978JGR....83.2637K. doi:10.1029/JA083iA06p02637. - "Lockheed Martin in space junk deal with Australian firm". BBC News. 28 August 2014. Retrieved 2014-08-28. - Carpineti, Alfredo (2016-05-15). "Space Debris Has Chipped One Of The ISS's Windows". IFLScience. Archived from the original on 2016-05-16. Retrieved 2016-05-16. - Primack, Joel R. (2002). "Debris and Future Space Activities" (PDF). Physics Department, University of California. "With enough orbiting debris, pieces will begin to hit other pieces, setting off a chain reaction of destruction that will leave a lethal halo around the Earth." - Joel R. Primack; Nancy Ellen Abrams. "Star Wars Forever? – A Cosmic Perspective" (PDF): "the deliberate injection into LEO of large numbers of particles as a cheap but effective anti-satellite measure." - "FCC Enters Orbital Debris Debate". Archived from the original on 2008-05-06. - "FCC Enters Orbital Debris Debate". Archived from the original on 2009-07-24. - "US Government Orbital Debris Standard Practices" (PDF). - Witze, A. (2018-09-05).
"The quest to conquer Earth's space junk problem". Nature. 561 (7721): 24–26. doi:10.1038/d41586-018-06170-1. - Daquin, J.; Rosengren, A. J.; Alessi, E. M.; Deleflie, F.; Valsecchi, G. B.; Rossi, A. (2016). "The dynamical structure of the MEO region: long-term stability, chaos, and transport". Celestial Mechanics and Dynamical Astronomy. 124 (4): 335–366. arXiv:1507.06170. doi:10.1007/s10569-015-9665-9. - "NASA Hopes Laser Broom Will Help Clean Up Space Debris". SpaceDaily. Retrieved 2011-03-17. - Gini, Andrea (25 April 2012). "Don Kessler on Envisat and the Kessler Syndrome". Space Safety Magazine. Retrieved 2012-05-09. - Sinha-Roy, Piya (July 20, 2013). "Gravity gets lift at Comic-Con as director Cuaron leaps into space". Reuters. Retrieved 2013-09-05. - Freeman, Daniel (18 May 2015). "Neal Stephenson's Seveneves – A Low-Spoiler "Science" Review". Berkeley Science Review. Retrieved 4 August 2015. - Kessler, D (2009). "The Kessler Syndrome (As Discussed by Donald J. Kessler)". Archived from the original on 2010-05-27. Retrieved 2010-05-26. - An article in the July 2009 issue of Popular Mechanics by Glenn Harlan Reynolds discusses the Kessler syndrome in regards to the February 2009 satellite collision and how international law may need to address the problem to help prevent future incidents: Reynolds, G. H. (2009, July). "Collision course". Popular Mechanics, pp. 50–52. - Kessler was featured in an article named, "The Looming Space Junk Crisis: It’s Time to Take Out the Trash," which appeared in the June 2010 issue of Wired magazine. - Documentary: Collision point: The race to clean up space (length: 22 minutes 28 seconds), included in the extra material on the Blu-ray Disc for Gravity (film). - Orbiting Satellites in real time - Don Kessler's Web Page - NASA Astronomy Picture of the Day: Satellites Collide in Low Earth Orbit (18 February 2009) - Mathematical Modeling of debris flux - The New York Times: "Orbiting Junk, Once a Nuisance, Is Now a Threat" - Wired: "Houston we have a trash problem" - Schwartz, Evan I. (May 24, 2010). "The Looming Space Junk Crisis: It's Time to Take Out the Trash". Wired. Retrieved 14 June 2010. - "Debris Spews Into Space After Satellites Collide" - Aggregated public information research on space debris, graveyard orbits, etc. - "Space junk littering orbit; might need cleaning up", Associated Press (1 September 2011)
Imagine Earth without an atmosphere - without clouds, wind or air. Earth's atmosphere protects, transports, and reacts to life on Earth. Without our ozone layer, the surface of Earth would be subject to harsh radiation coming from the sun. Without good quality air, public health and ecosystems suffer. And changes in the makeup of the atmosphere - such as to carbon dioxide, methane, nitrous oxide, cloud cover, water vapor and aerosols - all contribute to climate change. Five years ago, NASA launched a satellite into space to study changes in our life-sustaining atmosphere. Named after the Latin word for breeze, Aura orbits our planet round the clock, using four instruments to monitor the composition and dynamics of our atmosphere. Take a look at some of its greatest findings so far. On ozone watch Where the ozone goes, everyone knows. This is thanks, in part, to two instruments onboard Aura that track ozone in our atmosphere: the Ozone Monitoring Instrument (OMI) and the Microwave Limb Sounder (MLS). The ozone layer protects life on Earth by absorbing ultraviolet radiation from the sun. In the 1980s, scientists noticed that at the start of spring in the Southern Hemisphere, a region of heavily depleted ozone was appearing over the South Pole and Antarctica - this is now known as the "ozone hole," even though it's not strictly a hole. We have since discovered that chlorofluorocarbons (CFCs) used in refrigerators, air conditioners and aerosol cans in years gone by, as well as other chlorine- and bromine-containing compounds, are to blame for hacking away at the ozone layer - most notably over Antarctica, but also over the rest of the world. Armed with OMI and MLS data, NASA is helping to keep up a 24/7 ozone hole watch. The information collected provides clues as to exactly how chlorine-based compounds break down ozone and how long it will take for the ozone layer to recover. While the ozone layer - which is found in the stratosphere (10 to 50 km, or 6 to 31 miles, up) - is good for us, ozone in the troposphere - the lowest layer of the atmosphere (extending 7 to 20 km, or 4 to 12 miles, up) - is generally bad. This type of ozone is mostly man-made and is an end product of air pollution from internal combustion engines and power plants. It is a major component of smog and tends to peak during summertime when temperatures are highest. In 2006, OMI and MLS produced the first global tropospheric (lower-atmosphere) ozone maps, which showed summertime pollution streaming from the U.S., Europe and China and being produced by the burning of wood and other biomass in Earth's equatorial zone. Jumping new hurdles NASA scientists have a tendency to come up with weird and wacky acronyms, and the four instruments onboard Aura are no exception. Aura's HIRDLS (High-Resolution Dynamics Limb Sounder) mission measures the temperature of the stratosphere and detects ozone, water vapor, other trace gases and aerosols at these altitudes. One of its key characteristics is its ability to resolve small changes in the atmosphere (on the scale of 1 km or 0.6 miles) with high precision. By analyzing vertical variations in temperature, HIRDLS has been able to identify and study atmospheric gravity waves in unprecedented detail. Gravity waves, which are like waves on the surface of the ocean, are produced in fluids - in this case in Earth's atmosphere - and are important for transferring momentum from the lower atmosphere to the upper atmosphere.
They are generally small-scale and had not been systematically observed until HIRDLS came along. HIRDLS has picked up atmospheric gravity waves with short vertical (about 4-km or 2.5-mile) and horizontal (about 500-km or 300-mile) wavelengths that cannot be picked up by other techniques. The data are being used to improve our understanding of what drives winds in the atmosphere and how the atmosphere circulates, which is crucial for predicting not only the weather, but also future climate change. The acid test Atmospheric scientists are interested in tracking sulfur dioxide for a couple of reasons: it poses a risk to public health and it can also affect Earth's climate. When sulfur dioxide reacts with water vapor, it creates sulfate ions - the precursors to sulfuric acid - which are highly reflective. Powerful volcanic eruptions can inject sulfate aerosols into the stratosphere, beyond the reach of cleansing rainfall. There, the sulfates can linger for months or years, cooling the climate by reflecting incoming sunlight. In June 2009, Sarychev Peak Volcano on Matua Island in the northwest Pacific erupted. OMI was there to track the intense sulfur dioxide emissions, which stretched westward from the volcano as far as Sakhalin Island and mainland Russia and eastward as far as Alaska. Data suggest that the volcanic plume reached altitudes of 10 to 15 km (6 to 9 miles), and perhaps as high as 21 km (13 miles). Over the past five years, OMI has also tracked sulfur dioxide clouds produced by the eruptions of numerous other volcanoes; this information is used to redirect aircraft flight paths to safety. While volcanoes are important, most atmospheric sulfur dioxide comes from man-made activities - the burning of coal and other fossil fuels. Copper smelters in Peru, for example, which separate copper from copper ore (copper sulfide), are some of the biggest sources of sulfur dioxide. Not only a powerful irritant to the respiratory tract, eyes and skin, sulfur dioxide also leads to acid rain. OMI enables scientists to compare the different lifetimes and dispersals of sulfur dioxide plumes from volcanoes and industrial sources and is helping us to better understand their impact on our climate. While carbon dioxide is commonly touted as a strong greenhouse gas because of its ability to trap heat near the surface of Earth, moisture in the atmosphere (water vapor) is, in fact, a more powerful greenhouse gas than carbon dioxide. As the planet heats up, more water is expected to evaporate from oceans, lakes and rivers, increasing the amount of water vapor in the atmosphere. This water vapor can, in turn, trap more heat near Earth's surface - a process known as positive feedback. Scientists think that as temperatures rise, water vapor could have a strong feedback effect on climate change. In 2006, the MLS instrument discovered that the amount of water vapor in the troposphere increased as the amount of ice contained in nearby clouds increased. What's more, cloud ice occurred more often over warm ocean waters than over cold ones. The results suggested that warmer oceans cause stronger heat convection in thunderstorms, lofting more water and ice up into the upper troposphere. Researchers found a break point at about 80 F (27 C); when the sea surface temperature rises above this, levels of both water vapor and cloud ice sharply increase. The extra water vapor then acts as a greenhouse gas, blocking some of the heat radiation from escaping into space.
In addition, the extra ice clouds also affect Earth's heat balance by trapping heat and reflecting sunlight. What this underscores is that warmer oceans can lead to more trapping of radiation, and ultimately even warmer oceans - a positive feedback. Earth's water is stored in ice and snow, lakes and rivers, the atmosphere and the ocean. The process by which it circulates from the ocean to the atmosphere to the land and back again to the ocean is known as the water cycle. But how can we track exactly where and how water is carried in our atmosphere? One trick is to use isotopes of water to trace the history of different packets of air, which is just what the Tropospheric Emission Spectrometer (TES) onboard Aura has done. Isotopes are atoms of a particular chemical element (say, hydrogen) that have different masses; their atomic nuclei contain the same number of protons but different numbers of neutrons. Regular water molecules are made up of two atoms of hydrogen and one of oxygen, and have the chemical formula H2O. In "semi-heavy" water, one of the hydrogen atoms is replaced with a heavy hydrogen isotope called deuterium, and the molecule is instead written HDO. It turns out that when water changes phase (such as from a liquid to a gas), the relative amount of water isotopes also changes. Lighter isotopes evaporate more readily than heavier isotopes, whereas heavier isotopes condense more readily than lighter ones. By monitoring the ratio of "normal" water to semi-heavy water, it becomes possible to distinguish between water vapor that comes directly from evaporating ocean water versus water vapor that has gone through a more circuitous route in the atmosphere. Evidence from TES suggests that over the tropics (near the equator), 20 to 50 percent of rainfall re-evaporates before it reaches the ground. What's more, the water lofted up into the air by thunderstorms over land comes from both evaporation from plants in large forests and evaporation over nearby oceans. The balance between these two different sources is important, because it tells us how vegetation interacts with the climate and helps maintain regional rainfall levels. Provided by NASA, written by Dr. Amber Jenkins
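Water-isotope ratios like HDO/H2O are conventionally reported in "delta" notation relative to a standard ocean-water reference ratio. The sketch below is purely illustrative of that geochemical convention, not of the actual TES retrieval algorithm; the sample ratio is hypothetical, while the VSMOW reference value is the standard D/H ratio of mean ocean water:

```python
# Sketch: expressing a D/H measurement in standard "delta" notation.
VSMOW_D_H = 155.76e-6  # D/H ratio of Vienna Standard Mean Ocean Water

def delta_d_permil(sample_d_h: float) -> float:
    """delta-D in per mil: 1000 * (R_sample / R_standard - 1)."""
    return 1000.0 * (sample_d_h / VSMOW_D_H - 1.0)

# Vapor that has rained out repeatedly is depleted in deuterium,
# so its D/H ratio falls below the ocean-water standard.
print(delta_d_permil(130.0e-6))  # about -165 per mil (depleted vapor)
```

Negative delta-D values flag vapor that has been through the more "circuitous route" described above, while values near zero point to fresh evaporation from the ocean.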
Students will learn about supply, demand, price, competition, and entrepreneurial skills in this lesson. They will put what they learned into action by creating an ice cream stand, to compete with other stands in the classroom. - Explain the concepts of supply and demand. - Explain how shortages and surpluses affect prices. - Explain how prices and quantity of products produced are determined. - Create an ice cream stand which will compete with other ice cream stands in the classroom, vying for business and trying to make money. Tell the students that they will learn about the economic principles of supply, demand, price, and competition. They will read two articles describing examples of supply and demand, and the effect of supply and demand on prices. To conclude the lesson, the students will work in groups to create an ice cream stand which will sell sundaes and milkshakes. Since each group will not have enough ingredients to run its stand successfully, they will have to develop a plan to get the supplies they need from other groups. The goal for each group is to make the best sundaes and milkshakes they can at a competitive price which will turn a profit. Thus, keeping costs down is quite important. A group of adults will buy one sundae and one milkshake from each of the competing groups. Groups will be judged on the quality of the ice cream stand and the quality of the sundae and the milkshake. Appendix Graph: This will be used to help students determine how market price is established. Review of the Laws of Supply and Demand: This site reviews the laws of supply and demand. Economics Basics: Demand and Supply: This site uses graphs and charts to explain supply and demand in greater detail. Market Price Definition: This site explains the concept of market price. Empty Seats Make Yankees Cut Some Premium Prices: This site explains why the New York Yankees had trouble selling their premium seats. Morrows Dump Milk To Protest Low Price: This site explains why farmers were dumping milk instead of selling it. EconEdLink Lessons: "The Prices are Changing" and "Demand Shifters" are two great EconEdLink lessons that teachers can refer to or use for the concepts of supply and demand. You will need the following materials for this lesson: - Play money in small, easily dividable denominations - 2 sets of markers - Construction paper - 6 quarts or half gallons of ice cream - 2 bottles of caramel toppings - 4 pints of milk - 3 blenders for the milkshakes - 1 package of cups which can be divided up as needed - 1 package of bowls which can be divided up as needed - 1 package of spoons which can be divided up as needed - 1 package of napkins which can be divided up as needed - 1 box of straws which can be divided up as needed - 3 ice cream scoopers - 3 bottles of chocolate syrup Begin the lesson by asking the students the following questions: 1. List five items you would like to buy if you had enough money to buy them. 2. List five items you would like to sell if you could find a buyer for these items. 3. How do you think the price for an item is established? 4. How do the producers of various items decide how many of the items to produce? Begin the demand portion of the lesson by asking students how many World Series tickets they would be willing to buy at $20.00 each. How many would they be willing to buy at $50.00 each? At $100.00 each? At $500.00 each? 1. Ask the students what happens to the amount of something they want to buy when its price goes up.
[Usually, they will buy less of that product. Feel free to use examples.] 2. Ask the students what happens to quantity demanded as the price rises. [Quantity demanded drops as price increases.] 3. Ask the students to describe the relationship between quantity demanded and price. [Demand for most products is sensitive to price. As price rises, quantity demanded falls. For most items, in other words, there is an inverse relationship between price and quantity demanded. You may want to plot the data on a demand graph.] 4. Introduce the New York Yankees example. Discuss the difficulty the Yankees have had selling their premium seats in the new stadium (some seats cost several thousand dollars). Have the students read the article Empty Seats Make Yankees Cut Some Premium Prices. Now it is time to move to the supply portion of the lesson. 1. Ask the students how many of them would be willing to babysit for $1.00 an hour? For $5.00 an hour? For $10.00 an hour? For $20.00 an hour? [You may want to plot the data on a supply graph.] 2. Ask the students what the relationship is between quantity supplied (of babysitting labor, for example) and price. [There is a direct relationship between quantity supplied and price. As price rises, quantity supplied goes up, and vice versa.] 3. Ask the students if it makes sense to sell a product for less than the supplier paid for it. [Usually, this wouldn't make sense; the supplier would be losing money.] 4. Have the students read the article Economics Basics: Demand and Supply to review the principles of supply and demand. Now tell the students that they will determine how market price is established. 1. Let's assume that at 40 cents a bottle students demand 100 bottles of soda (pop). At $1.00 a bottle, students demand 75 bottles. At $2.00 a bottle, students demand 30 bottles. At $5.00 a bottle, students demand 2 bottles. Place this data on the graph in the Appendix. 2. Let's assume that at 40 cents a bottle, suppliers would provide no soda. At $1.00 a bottle, suppliers would provide 75 bottles. At $2.00 a bottle, suppliers would provide 250 bottles. At $5.00 a bottle, suppliers would provide 500 bottles. Place this data on the graph in the Appendix. 3. After you connect the dots for demand and supply, the graph will show the market clearing price, or the market equilibrium price. This is where quantity demanded equals quantity supplied. The market is cleared at this price. If price is higher than the market price, there will be a surplus in the market. If price is lower than the market price, there will be a shortage. In both instances, the market price will have to move up or down to clear the market. You should show this to the students using the graph in the Appendix. You may also want to revisit the website Economics Basics: Demand and Supply at this time. (A short computational sketch of this soda schedule appears after the lesson, below.) 4. Have the students visit Morrows Dump Milk To Protest Low Price to learn why farmers are deliberately dumping milk. Discuss why farmers are doing this. [There is an oversupply of milk resulting in low milk prices. The low prices mean farmers are supplying milk and losing money. If the market price is too low to cover production costs, some producers will leave the market, shifting supply on the graph to the left and raising the market equilibrium price.] Conduct the following activity to help the students understand the main concepts of the lesson. 1. Place the students in groups of four or five. Tell them they will create an ice cream stand.
In their ice cream stands they will make sundaes and milkshakes. They will be judged on three points. One point is the attractiveness of their stand. Another is the quality (or taste) of their sundaes. The final point is the quality (the taste) of their milkshakes. A group of four or five adults will independently judge the ice cream stands according to each point. The teacher will pay $15.00 to the team with the best stand, $15.00 to the team with the best sundae (plus the price of the sundae), and $15.00 to the team with the best milkshake (plus the price of the milkshake). Students must set a price for their sundaes and their milkshakes, a price that will allow them to make a profit. Explain that the price of the sundae and the milkshake will be a factor in deciding which one is best. Explain that the cost of making one sundae is 25 cents, plus the cost of any supplies the students had to buy; the cost of making one milkshake is 35 cents, plus costs. 2. Tell the students they will need to develop a plan for obtaining all of the necessary ingredients for making their sundae and milkshake, since no group will have all of the necessary ingredients. The students will need to work well as a team, divide jobs, and formulate a plan for obtaining the ingredients they need. 3. The students will have about 30 minutes to design their stand (including menus) and make their sundae and milkshake. Please distribute the following materials to each group: - Group 1 $15.00, milk, bowls, ice cream, chocolate syrup, straws - Group 2 $15.00, milk, caramel, chocolate syrup, spoons, ice cream - Group 3 $15.00, blender, cups, napkins, construction paper, markers, ice cream - Group 4 $15.00, two scoopers, blender, markers, ice cream, milk - Group 5 $15.00, ice cream, bowls, construction paper, blender, one scooper - Group 6 $15.00, milk, caramel, chocolate syrup, ice cream, cups Once the students have established their stands and produced their sundaes and milkshakes, judge each ice cream stand, taste each sundae and milkshake, and award the prize money based on the criteria established. 4. Discuss with students their thoughts and experiences as they went through the activity. Discuss whether the students used money to purchase supplies or whether they traded for the supplies. Ask how much profit they were hoping to make from each sale. Tell the students why you awarded the money as you did. Ask the students to write a brief paragraph explaining how the forces of supply and demand determine price. Also ask the students to write about what did or didn't make their groups successful in this activity. Finally, have the students write a paragraph explaining three or four specific ideas they learned from this lesson. Tell the students they now have an idea of how price gets determined and how supply and demand are involved. In running a business such as an ice cream shop, business skills are essential. Developing a business plan is crucial to being successful in the real world. Understanding the forces of supply and demand, as well as competition in the marketplace, is essential to being successful in the real world. Ask the students why the Cash for Clunkers program has been so successful. Ask them to explain how forces of supply and demand have worked in this program. [Demand for cars was quite low. By providing additional refunds, the program made new cars more affordable. The quantity of cars demanded went up; the clunkers program is a subsidy, acting to shift the demand curve to the right.]
Ask what might happen to car sales once the money for this program ends. [Sales may drop, but if car dealers are creative or if car manufacturers develop popular, fuel-efficient cars, sales may not drop as much.] Ask the students if there are some items which people would buy no matter what the items cost. [Possible examples: water, electricity, and petroleum for our cars. Tell students that, for these items, demand is inelastic. This means that price doesn't factor into purchasing decisions very much, since the items in question are very important to us. Point out that demand may be more inelastic in the short run than in the long run. For example, in the short run, high prices may not cause people to buy less gasoline, but if high prices persist for a long time, people may move closer to work, buy a more fuel-efficient car, or use mass transit.] “I like the lesson. A good idea would be to first teach the sections on supply and demand. Then, I would have students visit the Econ. Basics site. I would conclude with the ice cream simulation.” “Where is the Appendix C? EconEdLink: For each Appendix click the "Appendix" link under resources or within the process. Thank you for your comment.” “Thank you for this article! As a business student myself I understand how important comprehending supply and demand is.” “I did this lesson with a group of students and they loved it. Since it was a group of students during summer camp, I modified the lesson and just focused on them doing the actual ice cream stand portion. We did touch on the vocabulary and some other aspects of the lesson beforehand. The students loved the activity and their comments afterward were insightful. I am at the camp again; the students enjoyed it so much last year that they have requested we do it again.” Review from EconEdReviews.org “I appreciate all the time it took to compile this lesson. The links for research are great and the lesson is well organized. The activity is very engaging and students will definitely get a "taste" of the real-world competition in producing a product. I do not know what the timeline for this lesson is, but I do know it would take my class several sessions to complete the research that is required to understand the concepts as they are used in the lesson. This may be a limiting factor to the lesson. I believe I would begin the lesson with the article on the farmers dumping their milk to illustrate what can happen in a market where supply does not meet demand and prices fall. I would then proceed with the flow of the lesson. I do not see any place in the lesson where the students create a business plan, research their costs, and set a price for their product. Perhaps some organizer to complete that could be helpful. I also think that a rubric to use to judge the stands could make the competition equitable. This not only would provide the judges a standard but the kids as well. I personally would do more journaling along the way of the lesson. I would connect the concepts more to their world and let them reflect on how supply and demand and prices affect them as consumers. Perhaps CD sales or concert tickets instead of looking at World Series tickets? (I realize the link was used for World Series tickets because it was well connected.) Overall I liked the lesson. I would use the lesson in parts rather than as a whole. There is so much material to cover in one shot.”
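Following up on step 3 of the market-price section above: teachers who want a numerical companion to the graph can use this minimal Python sketch. The data are the lesson's own soda schedule, and the shortage/surplus labels follow the step 3 discussion:

```python
# Sketch: locating the market-clearing price from the lesson's soda data.
# Prices in dollars; quantities in bottles (demand and supply schedules).
schedule = {
    0.40: (100, 0),    # price: (quantity demanded, quantity supplied)
    1.00: (75, 75),
    2.00: (30, 250),
    5.00: (2, 500),
}

for price, (qd, qs) in sorted(schedule.items()):
    if qd > qs:
        status = "shortage -> price tends to rise"
    elif qd < qs:
        status = "surplus -> price tends to fall"
    else:
        status = "market clears (equilibrium)"
    print(f"${price:.2f}: demand={qd}, supply={qs} -> {status}")
```

Running it shows the market clearing at $1.00, where 75 bottles are demanded and 75 supplied, with a shortage below that price and a surplus above it.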
As we discussed in Lab 5, flowing water is a very important mechanism for erosion, transportation and deposition of sediments. Water flow in a stream is primarily related to the stream's gradient, but it is also controlled by the geometry of the channel. As shown in Figure 8.1.1, water flow velocity is decreased by friction along the stream bed, so it is slowest at the bottom and edges and fastest near the surface and in the middle. In fact, the velocity just below the surface is typically a little higher than right at the surface because of friction between the water and the air. On a curved section of a stream, flow is fastest on the outside and slowest on the inside. Other factors that affect stream-water velocity are the size of sediments on the stream bed (because large particles tend to slow the flow more than small ones) and the discharge, or volume of water passing a point in a unit of time (e.g., cubic metres (m3) per second). During a flood, the water level always rises, so there is more cross-sectional area for the water to flow in; however, as long as a river remains confined to its channel, the velocity of the water flow also increases. Figure 8.1.2 shows the nature of sediment transportation in a stream. Large particles rest on the bottom (the bed load) and may only be moved during rapid flows under flood conditions. They can be moved by saltation (bouncing) and by traction (being pushed along by the force of the flow). Smaller particles may rest on the bottom some of the time, where they can be moved by saltation and traction, but they can also be held in suspension in the flowing water, especially at higher velocities. As you know from intuition and from experience, streams that flow fast tend to be turbulent (flow paths are chaotic and the water surface appears rough) and the water may be muddy, while those that flow more slowly tend to have laminar flow (straight-line flow and a smooth water surface) and clear water. Turbulent flow is more effective than laminar flow at keeping sediments in suspension. Stream water also has a dissolved load, which represents (on average) about 15% of the mass of material transported, and includes ions such as calcium (Ca+2) and chloride (Cl−) in solution. The solubility of these ions is not affected by flow velocity. The faster the water is flowing, the larger the particles that can be kept in suspension and transported within the flowing water. However, as Swedish geographer Filip Hjulström discovered in the 1940s, the relationship between grain size and the likelihood of a grain being eroded, transported, or deposited is not as simple as one might imagine (Figure 8.1.3). Consider, for example, a 1 millimetre grain of sand. If it is resting on the bottom, it will remain there until the velocity is high enough to erode it, around 20 centimetres per second (cm/s). But once it is in suspension, that same 1 mm particle will remain in suspension as long as the velocity doesn't drop below 10 cm/s. For a 10 mm gravel grain, the velocity needed is 105 cm/s to be eroded from the bed but only 80 cm/s to remain in suspension. On the other hand, a 0.01 mm silt particle only needs a velocity of 0.1 centimetres per second (cm/s) to remain in suspension, but requires 60 cm/s to be eroded. In other words, a tiny silt grain requires a greater velocity to be eroded than a grain of sand that is 100 times larger! For clay-sized particles, the discrepancy is even greater. In a stream, the most easily eroded particles are small sand grains between 0.2 mm and 0.5 mm. Anything smaller or larger requires a higher water velocity to be eroded and entrained in the flow. The main reason for this is that small particles, and especially the tiny grains of clay, have a strong tendency to stick together, and so are difficult to erode from the stream bed.
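This two-threshold behaviour (a higher velocity to erode a resting grain than to keep an already-suspended grain aloft) can be captured in a few lines of code. The following Python sketch is purely illustrative; the velocities are the approximate values quoted above from the Hjulström-Sundborg diagram, not a general formula:

```python
# Sketch of the Hjulström logic using the approximate threshold
# velocities quoted in the text (velocities in cm/s; grain sizes in mm).
THRESHOLDS = {
    0.01: {"erode": 60.0, "settle": 0.1},    # silt: sticky, hard to erode
    1.0:  {"erode": 20.0, "settle": 10.0},   # sand
    10.0: {"erode": 105.0, "settle": 80.0},  # gravel
}

def state(grain_mm: float, velocity: float, suspended: bool) -> str:
    t = THRESHOLDS[grain_mm]
    if not suspended:
        return "eroded into suspension" if velocity >= t["erode"] else "stays on the bed"
    return "remains in suspension" if velocity >= t["settle"] else "is deposited"

# A resting 0.01 mm silt grain needs ~60 cm/s to be eroded...
print(state(0.01, 30.0, suspended=False))  # stays on the bed
# ...but once suspended, it stays aloft down to ~0.1 cm/s.
print(state(0.01, 30.0, suspended=True))   # remains in suspension
```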
It is important to be aware that a stream can both erode and deposit sediments at the same time. At 100 cm/s, for example, silt, sand, and medium gravel will be eroded from the stream bed and transported in suspension, coarse gravel will be held in suspension, pebbles will be both transported and deposited, and cobbles and boulders will remain stationary on the stream bed. Refer to the Hjulström-Sundborg diagram (Figure 8.1.3) to answer these questions. - A fine sand grain (0.1 millimetres) is resting on the bottom of a stream bed. - What stream velocity will it take to get that sand grain into suspension? - Once the particle is in suspension, the velocity starts to drop. At what velocity will it finally come back to rest on the stream bed? - A stream is flowing at 10 centimetres per second (which means it takes 10 seconds to go 1 metre, and that's pretty slow). - What size of particles can be eroded at 10 centimetres per second? - What is the largest particle that, once already in suspension, will remain in suspension at 10 centimetres per second? See Appendix 2 for Practice Exercise 8.1 answers. A stream typically reaches its greatest velocity when it is close to flooding over its banks. This is known as the bank-full stage, as shown in Figure 8.1.4. As soon as the flooding stream overtops its banks and occupies the wide area of its flood plain, the water has a much larger area to flow through and the velocity drops significantly. At this point, sediment that was being carried by the high-velocity water is deposited near the edge of the channel, forming a natural bank or levée. Figure 8.1.1 image description: When a stream curves, the flow of water is fastest on the outside of the curve and slowest on the inside of the curve. When the stream is straight and of uniform depth, the stream flows fastest in the middle near the top and slowest along the edges. When the depth is not uniform, the stream flows fastest in the deeper section. [Return to Figure 8.1.1] - Erosion velocity curve: A 0.001 millimetre particle would erode at a flow velocity of 500 centimetres per second or greater. As the particle size gets larger, the minimum flow velocity needed to erode the particle decreases, with the lowest flow velocity being 30 centimetres per second to erode a 0.5 millimetre particle. To erode particles larger than 0.5 millimetres, the minimum flow velocity rises again. - Settling velocity curve: A 0.01 millimetre particle would be deposited with a flow velocity of 0.1 centimetre per second or less. As the flow velocity increases, only larger and larger particles will be deposited. - Particles that plot between these two curves (the flow is too fast for them to settle, but not fast enough to erode them from the bed) will be transported in the stream. - Figure 8.1.1, 8.1.2, 8.1.3, 8.1.4: © Steven Earle. CC BY.
gradient: the slope of a stream bed over a specific distance, typically expressed in m per km
channel: the physical boundaries of a stream (or a river), consisting of a bed and stream (or river) banks.
discharge: the volume of water flow in a stream expressed in terms of volume per unit time (e.g., m3/s)
bed load: the fraction of a stream's sediment load that typically rests on the bottom and is moved by saltation and traction
The vast majority of the minerals that make up the rocks of Earth's crust are silicate minerals. These include minerals such as quartz, feldspar, mica, amphibole, pyroxene, olivine, and a variety of clay minerals. The building block of all of these minerals is the silica tetrahedron, a combination of four oxygen atoms and one silicon atom. As we've seen, it's called a tetrahedron because planes drawn through the oxygen atoms form a shape with 4 surfaces (Figure 2.2.4). Since the silicon ion has a charge of +4 and each of the four oxygen ions has a charge of −2, the silica tetrahedron has a net charge of −4. In silicate minerals, these tetrahedra are arranged and linked together in a variety of ways, from single units to complex frameworks (Table 2.6). The simplest silicate structure, that of the mineral olivine, is composed of isolated tetrahedra bonded to iron and/or magnesium ions. In olivine, the −4 charge of each silica tetrahedron is balanced by two divalent (i.e., +2) iron or magnesium cations. Olivine can be either Mg2SiO4 or Fe2SiO4, or some combination of the two (Mg,Fe)2SiO4. The divalent cations of magnesium and iron are quite close in radius (0.72 versus 0.63 angstroms). Because of this size similarity, and because they are both divalent cations (both can have a charge of +2), iron and magnesium can readily substitute for each other in olivine and in many other minerals.
|Tetrahedron Configuration|Example Minerals|
|---|---|
|Isolated (nesosilicates)|Olivine, garnet, zircon, kyanite|
|Pairs (sorosilicates)|Epidote, zoisite|
|Single chains (inosilicates)|Pyroxenes, wollastonite|
|Double chains (inosilicates)|Amphiboles|
|Sheets (phyllosilicates)|Micas, clay minerals, serpentine, chlorite|
|Framework, 3-dimensional structure (tectosilicates)|Feldspars, quartz, zeolite|
Cut around the outside of the shape (solid lines and dotted lines), and then fold along the solid lines to form a tetrahedron. If you have glue or tape, secure the tabs to the tetrahedron to hold it together. If you don't have glue or tape, make a slice along the thin grey line and insert the pointed tab into the slit. If you are doing this in a classroom, try joining your tetrahedron with others into pairs, rings, single and double chains, sheets, and even three-dimensional frameworks. See Appendix 3 for Exercise 2.3 answers. In olivine, unlike most other silicate minerals, the silica tetrahedra are not bonded to each other. Instead they are bonded to the iron and/or magnesium ions, in the configuration shown on Figure 2.4.1. As already noted, the +2 ions of iron and magnesium are similar in size (although not quite the same). This allows them to substitute for each other in some silicate minerals. In fact, the ions that are common in silicate minerals have a wide range of sizes, as depicted in Figure 2.4.2. All of the ions shown are cations, except for oxygen. Note that iron can exist as both a +2 ion (if it loses two electrons during ionization) or a +3 ion (if it loses three). Fe2+ is known as ferrous iron. Fe3+ is known as ferric iron. Ionic radii are critical to the composition of silicate minerals, so we'll be referring to this diagram again. The structure of the single-chain silicate pyroxene is shown on Figures 2.4.3 and 2.4.4.
In pyroxene, silica tetrahedra are linked together in a single chain, where one oxygen ion from each tetrahedron is shared with the adjacent tetrahedron, hence there are fewer oxygens in the structure. The result is that the oxygen-to-silicon ratio is lower than in olivine (3:1 instead of 4:1), and the net charge per silicon atom is less (−2 instead of −4). Therefore, fewer cations are necessary to balance that charge. Pyroxene compositions are of the type MgSiO3, FeSiO3, and CaSiO3, or some combination of these. Pyroxene can also be written as (Mg,Fe,Ca)SiO3, where the elements in the brackets can be present in any proportion. In other words, pyroxene has one cation for each silica tetrahedron (e.g., MgSiO3) while olivine has two (e.g., Mg2SiO4). Because each silicon ion is +4 and each oxygen ion is −2, the three oxygens (−6) and the one silicon (+4) give a net charge of −2 for the single chain of silica tetrahedra. In pyroxene, the one divalent cation (+2) per tetrahedron balances that −2 charge. In olivine, it takes two divalent cations to balance the −4 charge of an isolated tetrahedron. The structure of pyroxene is more "permissive" than that of olivine, meaning that cations with a wider range of ionic radii can fit into it. That's why pyroxenes can have iron (radius 0.63 Å) or magnesium (radius 0.72 Å) or calcium (radius 1.00 Å) cations (see Figure 2.4.2 above). The diagram below represents a single chain in a silicate mineral. Count the number of tetrahedra versus the number of oxygen ions (yellow spheres). Each tetrahedron has one silicon ion, so this should give you the ratio of Si to O in single-chain silicates (e.g., pyroxene). The diagram below represents a double chain in a silicate mineral. Again, count the number of tetrahedra versus the number of oxygen ions. This should give you the ratio of Si to O in double-chain silicates (e.g., amphibole). See Appendix 3 for Exercise 2.4 answers. In amphibole structures, the silica tetrahedra are linked in a double chain that has an oxygen-to-silicon ratio lower than that of pyroxene, and hence still fewer cations are necessary to balance the charge. Amphibole is even more permissive than pyroxene and its compositions can be very complex. Hornblende, for example, can include sodium, potassium, calcium, magnesium, iron, aluminum, silicon, oxygen, fluorine, and the hydroxyl ion (OH−). In sheet silicate structures, the silica tetrahedra are arranged in continuous sheets, where each tetrahedron shares three oxygen anions with adjacent tetrahedra. There is even more sharing of oxygens between adjacent tetrahedra, and hence fewer cations are needed to balance the charge of the silica-tetrahedra structure in sheet silicate minerals. Bonding between sheets is relatively weak, and this accounts for the well-developed one-directional cleavage in micas (Figure 2.4.5). Biotite mica can have iron and/or magnesium in it, and that makes it a ferromagnesian silicate mineral (like olivine, pyroxene, and amphibole). Chlorite is another similar mineral that commonly includes magnesium. In muscovite mica, the only cations present are aluminum and potassium; hence it is a non-ferromagnesian silicate mineral. Apart from muscovite, biotite, and chlorite, there are many other sheet silicates (a.k.a. phyllosilicates), many of which exist as clay-sized fragments (i.e., less than 0.004 millimetres). These include the clay minerals kaolinite, illite, and smectite, and although they are difficult to study because of their very small size, they are extremely important components of rocks and especially of soils.
All of the sheet silicate minerals also have water molecules within their structure. Silica tetrahedra are bonded in three-dimensional frameworks in both the feldspars and quartz. These are non-ferromagnesian minerals - they don't contain any iron or magnesium. In addition to silica tetrahedra, feldspars include the cations aluminum, potassium, sodium, and calcium in various combinations. Quartz contains only silica tetrahedra. The three main feldspar minerals are potassium feldspar (a.k.a. K-feldspar or K-spar) and two types of plagioclase feldspar: albite (sodium only) and anorthite (calcium only). As is the case for iron and magnesium in olivine, there is a continuous range of compositions (solid solution series) between albite and anorthite in plagioclase. Because the calcium and sodium ions are almost identical in size (1.00 Å versus 0.99 Å), any intermediate compositions between CaAl2Si2O8 and NaAlSi3O8 can exist (Figure 2.4.6). This is a little bit surprising because, although they are very similar in size, calcium and sodium ions don't have the same charge (Ca2+ versus Na+). This problem is accounted for by the corresponding substitution of Al3+ for Si4+. Therefore, albite is NaAlSi3O8 (1 Al and 3 Si) while anorthite is CaAl2Si2O8 (2 Al and 2 Si), and plagioclase feldspars of intermediate composition have intermediate proportions of Al and Si. This is called a "coupled substitution." The intermediate-composition plagioclase feldspars are oligoclase (10% to 30% Ca), andesine (30% to 50% Ca), labradorite (50% to 70% Ca), and bytownite (70% to 90% Ca). K-feldspar (KAlSi3O8) has a slightly different structure than that of plagioclase, owing to the larger size of the potassium ion (1.37 Å). Because of this large size, potassium and sodium do not readily substitute for each other, except at high temperatures. These high-temperature feldspars are likely to be found only in volcanic rocks, because intrusive igneous rocks cool slowly enough for the feldspars to change into one of the lower-temperature forms. In quartz (SiO2), the silica tetrahedra are bonded in a "perfect" three-dimensional framework. Each tetrahedron is bonded to four other tetrahedra (with an oxygen shared at every corner of each tetrahedron), and as a result, the ratio of silicon to oxygen is 1:2. Since the one silicon cation has a +4 charge and the two oxygen anions each have a −2 charge, the charge is balanced. There is no need for aluminum or any of the other cations such as sodium or potassium. The hardness and lack of cleavage in quartz result from the strong covalent/ionic bonds characteristic of the silica tetrahedron. Silicate minerals are classified as being either ferromagnesian or non-ferromagnesian depending on whether or not they have iron (Fe) and/or magnesium (Mg) in their formula. A number of minerals and their formulas are listed below. For each one, indicate whether or not it is a ferromagnesian silicate. See Appendix 3 for Exercise 2.5 answers. (*Some of the formulas, especially the more complicated ones, have been simplified.)
levée: along a stream, the ridge that naturally forms along the edge of the channel during flood events
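The oxygen-to-silicon ratios and residual charges quoted throughout this section all follow from one counting rule: a corner oxygen shared between two tetrahedra counts half toward each. As a quick check of the numbers in the text, here is a minimal Python sketch; the sharing counts are taken from the structures described above:

```python
# Sketch: O:Si ratio and net charge per silicon for the silicate
# structures in Table 2.6. Each shared corner oxygen counts half
# toward a given tetrahedron; unshared oxygens count fully.
structures = {
    "isolated (olivine)":       0,    # corners shared per tetrahedron
    "single chain (pyroxene)":  2,
    "double chain (amphibole)": 2.5,  # average over the chain
    "sheet (micas, clays)":     3,
    "framework (quartz)":       4,
}

for name, shared in structures.items():
    o_per_si = (4 - shared) + shared / 2  # unshared O + half of shared O
    charge = 4 + (-2) * o_per_si          # Si is +4, each O is -2
    print(f"{name}: O:Si = {o_per_si:g}:1, net charge per Si = {charge:+g}")
```

The output reproduces the values in the text: 4:1 and −4 for olivine's isolated tetrahedra, 3:1 and −2 for pyroxene's single chains, down to 2:1 and a fully balanced charge of 0 for the quartz framework, which is why quartz needs no additional cations at all.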
Birthdays and the Binary System: A Magical Mixture Young scholars observe, describe, and analyze numerical patterns. They learn to understand the binary system and take part in an enjoyable activity with peers using their own birthdays. 5th - 8th Math Introduction to Systems of Linear Equations Here is a lesson that really delivers! Middle schoolers collaborate to consider pizza prices from four different pizza parlors. Using systems of simultaneous equations, they graph each scenario to determine the best value. Developed for... 7th - 9th Math CCSS: Designed Give Binary a Try! Students apply binary code in software applications for computer engineers. In this binary code lesson, students read about binary code and its applications to computer engineers. They download software and read an online binary clock.... 3rd - 12th Technology & Engineering CCSS: Designed Mayan Mathematics and Architecture Take young scholars on a trip through history with this unit on the mathematics and architecture of the Mayan civilization. Starting with an introduction to their base-twenty number system and the symbols they used, this eight-lesson unit... 4th - 8th Math CCSS: Adaptable Statistics-Investigate Patterns of Association in Bivariate Data Young mathematicians construct and analyze patterns of association in bivariate data using scatter plots and linear models. The sixth chapter of a 10-part eighth grade workbook series then prompts class members to construct and interpret... 7th - 10th Math CCSS: Designed Round and Round We Go — Exploring Orbits in the Solar System Math and science come together in this cross-curricular astronomy lesson plan on planetary motion. Starting off with a hands-on activity that engages the class in exploring the geometry of circles and ellipses, this lesson plan then... 5th - 8th Math CCSS: Adaptable Classifying Solutions to Systems of Equations Double the fun of solving a linear equation with an activity that asks learners to first complete an assessment task graphing two linear equations to find the common solution. They then complete a card activity solving systems of... 8th Math CCSS: Designed
The trapezium is a polygon with four sides, i.e. it is a quadrilateral. A trapezium is a special quadrilateral with only one pair of parallel sides. The trapezium is a two-dimensional shape that looks like a table. Trapezium originated from the Greek word "trapeza", which means table. What is a Trapezium? A trapezium is a closed two-dimensional quadrilateral having a pair of parallel opposite sides. The parallel sides of a trapezium are called bases and the non-parallel sides of a trapezium are called legs. The trapezium has four sides and four corners. (Under the inclusive definition used in some countries, a parallelogram, which has two pairs of parallel sides, may also be called a trapezoid.) In the above figure, a and b are the bases of the trapezium and h is the height of the trapezium. Types of Trapezium Based on the sides and the angles, the trapezium is of three types: - Scalene Trapezium - Isosceles Trapezium - Right Trapezium The trapezium which has legs of equal length is called an isosceles trapezium, i.e. in an isosceles trapezium, the two non-parallel sides are equal. A trapezium in which no two sides are equal is called a scalene trapezium. In a scalene trapezium, no two angles are equal. A trapezium that has a pair of right angles adjacent to each other is known as a right trapezium. A trapezium has one pair of parallel sides, and the other two sides are non-parallel. In a regular trapezium, the two non-parallel sides are equal, but in an irregular trapezium the two non-parallel opposite sides are unequal. Properties of Trapezium - In a trapezium, the bases are parallel to each other. Example – The sides AB and CD are parallel to each other, as shown in the figure. - The adjacent interior angles in a trapezium sum up to 180°. Example – There are two pairs of co-interior angles. One pair is ∠A and ∠D, and the other pair is ∠B and ∠C. The sum of each pair of co-interior angles is 180°. - The sum of all the interior angles in a trapezium is always 360°. Example – In the figure ∠A+∠D is 180° and ∠B+∠C is 180°. Therefore ∠A+∠D+∠B+∠C = 360°. - The two diagonals of a trapezium are equal in length only when the trapezium is isosceles. - In the trapezium, both diagonals intersect each other. - A trapezium has exactly one pair of opposite sides that are parallel. The important formulas of a trapezium are: - Area of Trapezium = ½ (Sum of parallel sides) × (Distance between parallel sides) - Perimeter of Trapezium = Sum of all four sides Area of Trapezium A trapezium has two parallel sides a and b units respectively, and its altitude is h. The area of the trapezium can be calculated by finding the average of the bases and multiplying the result by the altitude. Area of trapezium = ((a + b)/2) × h, where a and b are the bases of the trapezium and h is the altitude. Area of Isosceles Trapezium Let a and b be the lengths of the parallel sides of a trapezium ABCD, where a and b are the bases of the trapezium and a > b. Now, as it is an isosceles trapezium, c is the length of both non-parallel sides and h is the height of the trapezium. Now, AB = a, CD = b, BC = AD = c. In the right triangle AED, the length of the perpendicular is h = √(c² − ((a − b)/2)²) [using the Pythagorean theorem]….(1) Area = ½ × sum of parallel sides × height of trapezium = ½ × (a + b) × h. Using equation (1), Area of Isosceles Trapezium = ½ × (a + b) × √(c² − ((a − b)/2)²) Perimeter of Trapezium The perimeter of a trapezium is given by calculating the sum of all its sides. Perimeter of trapezium = AB + BC + CD + AD, where AB, BC, CD and AD are the sides of the trapezium.
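As a quick check of the two area formulas above, here is a small Python sketch: trapezium_area implements ½(a + b)h, and isosceles_height recovers h from the legs using the Pythagorean relation h = √(c² − ((a − b)/2)²):

```python
import math

def trapezium_area(a: float, b: float, h: float) -> float:
    """Area = half the sum of the parallel sides times the height."""
    return 0.5 * (a + b) * h

def isosceles_height(a: float, b: float, c: float) -> float:
    """Height of an isosceles trapezium with bases a > b and legs c."""
    return math.sqrt(c**2 - ((a - b) / 2) ** 2)

# Bases 15 cm and 11 cm, legs 5 cm (the measurements from Example 2 below):
h = isosceles_height(15, 11, 5)   # offset = (15 - 11) / 2 = 2, h = sqrt(25 - 4)
print(round(h, 3))                      # ~4.583 cm
print(round(trapezium_area(15, 11, h), 2))  # ~59.57 cm^2
```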
Perimeter of Isosceles Trapezium If in an isosceles trapezium a and b are the lengths of the parallel sides (the bases) and c is the length of the two equal non-parallel sides, then the perimeter is given by: Perimeter = a + b + 2c, where a and b are the bases and c is the equal side. Difference between Trapezium and Trapezoid In general terms, both Trapezium and Trapezoid are the same, but the difference lies in their country of origin. The trapezium is of British origin; it is a four-sided polygon and a two-dimensional figure with exactly one pair of parallel sides opposite each other. In India, we follow British English, hence the word Trapezium is used. The trapezoid is of American origin; it is also a four-sided polygon with one pair of parallel sides opposite each other. The parallel sides are the bases and the other two non-parallel sides are called the legs of the trapezoid. Solved Examples on Trapezium Example 1: Find the fourth side of the trapezium, if the other three sides are 8 cm, 12 cm, and 16 cm, and the perimeter is 40 cm. The perimeter is given as the sum of all its sides. Let the length of the unknown side be x units. Perimeter = 40 40 = 8 + 12 + 16 + x x = 40 − (8 + 12 + 16) = 4 cm Thus, the length of the unknown side is 4 cm. Example 2: A trapezium has parallel sides of lengths 15 cm and 11 cm, and non-parallel sides of length 5 cm each. Calculate the perimeter of the trapezium. This is an isosceles trapezium, because the two non-parallel sides are each 5 cm long, and a trapezium whose two non-parallel sides are of equal length is isosceles by definition. Given, a = 15 cm, b = 11 cm and c = 5 cm Perimeter = a + b + 2c P = 15 + 11 + 2(5) P = 15 + 11 + 10 P = 36 cm Example 3: Find the perimeter of a trapezium whose sides are 12 cm, 14 cm, 16 cm, and 18 cm. As we know, the perimeter of a trapezium is given by calculating the sum of all its sides. P = Sum of all the sides P = 12 + 14 + 16 + 18 P = 60 cm Hence, the perimeter of the trapezium is 60 cm. Example 4: Find the area of the trapezium in which the sum of the parallel sides is 60 cm and the height is 10 cm. Given, the sum of the parallel sides is 60 cm and the height h = 10 cm. We know that the area of a trapezium is A = 1/2 × sum of parallel sides × distance between the parallel sides. By substituting the given values, A = 30 × 10 A = 300 cm2. Therefore, the area of the trapezium is 300 cm2. FAQs on Trapezium Question 1: How many parallel sides does a Trapezium have? We know that a trapezium is a quadrilateral with one pair of parallel sides. Thus, a trapezium has a pair of parallel lines (sides). Question 2: Can a trapezium be considered a Quadrilateral? A trapezium has four sides, four vertices, and four angles. Hence it can be considered a quadrilateral; the sum of all four interior angles of a trapezium is 360 degrees. Question 3: Can a square be called a trapezium? A trapezium is a quadrilateral with only one pair of parallel sides, the other two sides being non-parallel. A square has two pairs of parallel sides; hence it cannot be considered a trapezium. Question 4: Are the diagonals of a trapezium always equal? The diagonals of a trapezium are not always equal. They are equal in an isosceles trapezium, but not in a general (scalene) trapezium. Question 5: What are the properties of a trapezium? The 5 properties of a Trapezium are: - In a trapezium, the bases are parallel to each other. - A trapezium has supplementary adjacent angles. - Only one pair of opposite sides is parallel.
- The sum of all the interior angles in a trapezium is always 360°. - The line that joins the mid-points of the non-parallel sides is always parallel to the bases.
What Is Permanent Settlement? The Permanent Settlement of Land Revenue, also known as the Zamindari system, was introduced in British India in 1793 by Lord Cornwallis. It was primarily implemented in the Bengal, Bihar, and Orissa regions, and later extended to Varanasi and Madras. The Permanent Settlement covered approximately 19% of British India. Under this system, a new class of landowners called Zamindars was created. This was an altogether new type of system, never practiced by previous rulers such as the Mughals, Marathas, Sikhs, Mysore, or Awadh. As British rule expanded in India, two other types of revenue systems, the Mahalwari System (Northern and Central India) and the Ryotwari System (Western and Southern India), were also initiated. These zamindars were responsible for collecting land revenue from farmers through intermediaries and paying a portion of it, 10/11ths of the collection, to the British East India Company. In return, the zamindars were granted the right to keep the remaining portion of the revenue for their own use and for performing their duties as intermediaries. The Permanent Settlement was also known by the names Istamrari, Jagirdari, Malguzari, Bishvedari and Zamindari. The Permanent Settlement was designed to provide the British East India Company with a predictable and stable source of revenue from land taxes, while also establishing the rights of Bengali landlords over their land. Under the Permanent Settlement, the British divided the land in Bengal into three categories: Zamindari, Raiyati, and Khudkasht. - Zamindari land was held by zamindars, or large landowners, who were responsible for collecting taxes from the tenants on their land. These zamindars were granted permanent rights to collect revenue from the land, as long as they paid the agreed-upon taxes to the British. - Raiyati land was held by raiyats, or small and medium-sized landowners, who were responsible for cultivating their own land and paying taxes directly to the British. - Khudkasht land was land that was personally cultivated by the owner and not rented out to tenants. The Permanent Settlement had a number of consequences for Bengal. One of the most significant was the creation of a new class of wealthy zamindars, who controlled vast tracts of land and had the power to collect taxes from tenants. This led to the concentration of wealth and land in the hands of a few wealthy landowners, while the majority of peasants remained poor and landless. The Permanent Settlement also had a negative impact on agricultural production in Bengal. The zamindars had little incentive to invest in improving their land, as they were not required to pay higher taxes based on increased productivity. This led to a stagnation in agricultural production and a decline in the overall prosperity of the region. The Permanent Settlement was also criticized for its lack of flexibility. It did not account for fluctuations in the price of crops or changes in the population, which could significantly impact the amount of revenue collected by the British. Overall, the Permanent Settlement had significant and lasting consequences for the people of Bengal. It entrenched the power of a small group of wealthy landowners, while the majority of peasants remained poor and disadvantaged. Permanent Settlement Was Introduced By The Permanent Settlement of Land Revenue, also known as the Zamindari system, was introduced in British India in 1793 by Lord Cornwallis under the Cornwallis Code.
The Cornwallis Code refers to a set of laws and regulations implemented in British India by Lord Cornwallis, who served as the Governor-General of India from 1786 to 1793. These laws were designed to reform and modernize the administration of the East India Company in India. One of the most significant parts of the Cornwallis Code was the Permanent Settlement of Land Revenue, which established a new system for collecting land taxes and granted permanent rights to collect revenue to a class of intermediaries called zamindars. The Cornwallis Code also included the Charter Act of 1793, which granted the East India Company a monopoly on trade with India and established a system for the appointment and promotion of civil servants. Other notable aspects of the Cornwallis Code included: - The introduction of a professional and merit-based system for the recruitment of civil servants, - The establishment of a central treasury and a system of standardized accounting, and - The codification of criminal and civil law. Overall, the Cornwallis Code had a significant impact on the administration and legal system of British India and laid the foundations for the modern Indian state. Describe The Main Features Of The Permanent Settlement The Permanent Settlement of Land Revenue was a land revenue system introduced in British India in 1793. - It recognized landlords, known as zamindars, as landowners and granted them hereditary succession rights to the lands under their control. - The zamindars were free to sell or transfer the land as they saw fit, as long as they paid the government a fixed revenue on the specified date. - If they failed to do so, their rights would be terminated and the land would be auctioned off. - Under the Permanent Settlement, the zamindars were required to pay a fixed amount of tax, with 10/11ths of the assessed revenue going to the government and 1/11th retained by the zamindar. - This tax burden was significantly higher than the land-tax rates prevailing in England at the time. - The zamindars were also required to issue written agreements, known as pattas, to each cultivator, outlining how much the tenant was required to pay in rent. Drawbacks of Permanent Settlement However, the Permanent Settlement had several drawbacks: - The fixed revenue rates were high, leaving many zamindars with little or no margin for shortfalls in times of natural disasters or other calamities. - As a result, many zamindars were forced to divide their estates into small lots of land known as patni and rent them out permanently to holders on the promise of a fixed rent. - This process, known as subinfeudation, further exacerbated the exploitation and oppression experienced by cultivators, who were frequently forced to take out loans to pay their rents and risked eviction if they failed to do so. - The Permanent Settlement also did not allow the tax rate to be increased, which meant that revenue could not grow to meet the rising expenses of the East India Company. - It was disastrous for the zamindars as well, since the rate was fixed, and if a zamindar could not pay due to crop failure, he was evicted. - In some ways it was not good for the Company either: when cultivation grew and zamindars were making more money, the Company could not increase the rent, as the rate was fixed. How was the Mahalwari system different from the Permanent Settlement The Mahalwari system was a land revenue system that was implemented in parts of India, including the North-Western Provinces and the Punjab, during the British colonial period.
It was introduced in 1822 to replace the zamindari system, which had been in place since the early 18th century. Under the Mahalwari system, the revenue from the land was collected directly from the cultivators, rather than from intermediaries such as zamindars. The Permanent Settlement, on the other hand, was a land revenue system that was implemented in the Bengal Presidency of British India in 1793. It was introduced by Lord Cornwallis and was intended to provide a stable and predictable source of revenue for the British East India Company. Under the Permanent Settlement, the zamindars were made the owners of the land, and they were required to pay a fixed amount of revenue to the British government. This system was seen as more favorable to the zamindars, as they were able to keep the excess revenue from their lands and pass on the burden of taxation to the cultivators. Overall, the Mahalwari system was seen as more equitable than the Permanent Settlement, as it sought to collect revenue directly from the cultivators rather than from intermediaries. However, both systems had their limitations and were criticized for various reasons, including the lack of incentives for improvement of land and the burden of taxation on the cultivators. Permanent Settlement UPSC The Permanent Settlement is an important topic in the history of British India and is likely to be covered in the UPSC (Union Public Service Commission) exams for the Indian Administrative Service (IAS) and other civil service exams in India. Some key points to remember about the Permanent Settlement in the context of UPSC exams include: - The Permanent Settlement was a land revenue system introduced in the Bengal Presidency of British India in 1793. - It was introduced by Lord Cornwallis and was intended to provide a stable and predictable source of revenue for the British East India Company. - Under the Permanent Settlement, the zamindars were made the owners of the land and were required to pay a fixed amount of revenue to the British government. - The Permanent Settlement was seen as more favorable to the zamindars, as they were able to keep the excess revenue from their lands and pass on the burden of taxation to the cultivators. - The Permanent Settlement was criticized for various reasons, including the lack of incentives for improvement of land and the burden of taxation on the cultivators. It is important to have a clear understanding of the Permanent Settlement and its implications in the context of the history of British India and the social and economic structures of the time. This can help you in answering questions related to the Permanent Settlement in the UPSC exams. The Permanent Settlement had both positive and negative impacts. On the positive side, it provided a stable and predictable source of revenue for the British East India Company and also allowed the zamindars to keep the excess revenue from their lands, which they could use for the development of their estates. However, the Permanent Settlement was also criticized for various reasons. It was seen as inequitable, as it placed the burden of taxation on the cultivators rather than on the zamindars. It also did not provide any incentives for the improvement of land, leading to stagnation and declining productivity in many areas. Overall, the Permanent Settlement was a significant event in the history of British India and had far-reaching consequences for the social and economic structures of the time.
It is an important topic for students of history and for those preparing for exams such as the UPSC.
Is a whole number always greater than a decimal? Not necessarily: 0.5 < 1 < 1.5, so the whole number 1 is greater than the decimal fraction 0.5 but less than the decimal fraction 1.5.

When you multiply a positive whole number by a decimal between 0 and 1, the result is less than the whole number and greater than or equal to the decimal. If the whole number is negative, the result is greater than the whole number and less than or equal to the decimal.

To write a value less than a whole number, place digits to the right of a decimal point. Example: 1 is a whole number; decimals less than 1 include .00, .01, .02 and so on. Multiplying by such a decimal acts like dividing: nothing happens to the whole number itself, but the product is less than it. The product might be another whole number, or it might have a fractional part. A proper fraction, like a decimal less than 1, represents part of a whole, so it is less than any positive whole number. Likewise, when you multiply a positive number by a fraction less than 1, the answer is smaller than the whole number, because you are taking only a fraction of it; it is just like multiplying by a decimal. Any positive number that starts with 0 and a decimal point is less than a whole number, because it is less than one. A number less than one can be written as either a fraction or a decimal.

Rounding to the nearest whole number follows one rule: look at the first digit after the decimal point. If it is less than 5, the whole number stays the same (round down); if it is 5 or greater, round up to the next whole number. For example: since the first digit after the decimal point is less than 5, one of the numbers asked about rounds down to 43 and another to 412; in another example the nearest whole number is 5; 105.27 rounds down to 105 because the decimal part .27 is less than .50; 8.6 rounds up to 9 because the digit after the decimal point is greater than four; a value whose fractional part is less than 1/2 rounds down, as when the nearest whole number is 1; and 3.4 rounds to 3, while 3.7 rounds to 4.

Comparing 26 and 0.3: 26 is greater. 26 is a whole number and 0.3 is a decimal number; in decimal notation, whole numbers go on the left side of the decimal point and parts of a number (fractions) go on the right side, so 0.3 is not even a whole number.

Will dividing by a decimal give a smaller number? You may or you may not: if you divide by a decimal number greater than 1 you will get a smaller number, whereas if you divide by a positive number less than 1 you will get a larger number. Note, though, that a whole number divided by a number less than 1 is not always greater than the whole number: 8 is a whole number and -2.5 is less than 1, but 8 / (-2.5) = -3.2, which is not greater than 8.

Can 0.4 be changed into a whole number? No: 0.4 is a decimal representation of a number whose absolute value is less than 1, so it cannot be changed to a whole number.

What happens when you multiply a whole number by a decimal? It depends on the signs of the two numbers. The answer is tricky when at least one number is negative, because "less than" means "farther to the left on the number line" and not "greater in magnitude"; for example, -20 is less than -4 because -20 is farther to the left, even though its magnitude (absolute value) is greater. There are four possible cases:
- Whole number and decimal both positive: the product is less than the whole number. The decimal reduces the magnitude of the product, so the product lies to the left of the whole number on the number line. E.g. 0.5 * 10 = 5, which is less than 10.
- Whole number positive, decimal negative: the product is less than the whole number. A negative times a positive is always negative, so regardless of its magnitude the product lies to the left of the positive whole number on the number line. E.g. 15 * (-0.2) = -3 and -3 < 15.
- Whole number negative, decimal positive: the product is greater than the whole number. The product is negative, but as in the first case its magnitude is smaller, so the product lies to the right of the whole number on the number line. E.g. (-8) * 0.3 = -2.4 and -8 < -2.4.
- Whole number and decimal both negative: the product is greater than the whole number. A negative times a negative is positive, and any positive number is greater than any negative number regardless of magnitude. E.g. (-0.25) * (-12) = +3 and -12 < +3.

So a multiplication by a decimal can indeed increase a negative number: -5 × 0.5 = -2.5, and -2.5 is greater than -5.

When does an inequality change direction? When you multiply or divide the whole inequality by a negative number.
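As a quick check, the arithmetic in all four cases, together with the rounding rule above, can be verified with a few lines of Python; the snippet below is purely illustrative:

```python
import math

# Check the four sign cases for whole_number * decimal by comparing
# each product with the whole number itself.
cases = [
    (10, 0.5),     # both positive: product < whole number
    (15, -0.2),    # whole positive, decimal negative: product < whole number
    (-8, 0.3),     # whole negative, decimal positive: product > whole number
    (-12, -0.25),  # both negative: product > whole number
]
for whole, dec in cases:
    product = whole * dec
    relation = "<" if product < whole else ">"
    print(f"{whole} * {dec} = {product}  (product {relation} whole number)")

# The rounding rule: a fractional part below .5 rounds down, .5 or above
# rounds up. Python's built-in round() uses banker's rounding for ties,
# so the "round half up" rule described above is implemented directly.
def round_half_up(x: float) -> int:
    return math.floor(x + 0.5)

for value in (105.27, 8.6, 3.4, 3.7):
    print(value, "->", round_half_up(value))   # 105, 9, 3, 4
```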
Developing the Classroom Culture: Using the Dotty Six Activity as a Springboard for Investigation
Stage: 1 and 2
Article by Jennie Pennant
Published August 2013.

This article supplements the more detailed article Developing a Classroom Culture That Supports a Problem-solving Approach to Mathematics. It suggests how to dig deeper into who answers questions in your classroom using the game, which could be a good one to try out in a staff meeting to support the development of classroom culture across the school.

A. Play it together and then look for ways that you could adapt it to suit your class. For example, draw counters in the boxes to aid bonds, use different dice, make the box total different, or use fraction dice with ¼, ½ and ¾, with the boxes needing to add up to 2.

B. Consider the focus of the learning for the lesson: the lesson objective. It may be that you want to choose a single objective (either a number one or a using-and-applying one) or a double objective (a number objective and a using-and-applying objective).

Possible number objectives:
• To recognise the number of dots in each iconic pattern and associate it with its number name and numeral
• To match the iconic representation of the patterns on the die with their symbolic representation using numerals
• To know number bonds to, and within, six
• To develop an understanding of the concept of addition
• To understand the associated language of 'how many more?'

Possible using-and-applying objectives: here we use the idea of a student being a 'beginner' at an element of using and applying and moving on to becoming 'proficient'; beyond that we use the term 'expert'.
• Beginner: engage with practical mathematical activities. Becoming proficient: adopt a systematic approach.
• Beginner: respond to questions and ideas from peers and adults.
• Beginner: refer to the materials they have used (the die and the grid) and talk about what they have done and what patterns they have noticed. Becoming proficient: describe the strategies they used.
• Beginner: explain numbers and calculations. Becoming proficient: predict what could happen and give a reason.

C. Try out the game in your classroom: you may find some of the guidance below useful (scroll down to 'The lesson' section).

D. Meet back as a staff and share your findings together.

E. Decide how you plan to develop your classroom cultures in the light of your investigations around Dotty Six.

Other games where you could employ a similar approach, and which could be used for a follow-up staff meeting, include Shut the Box.

Show the students the video clip and ask them to work out the rules of the game. Collect together a class set of ideas. Refine and adjust them until you are all agreed on the rules. Let them try out the game themselves and consider whether any of the ideas below are useful to try out. They are based on three elements: asking, listening and responding. A useful strategy is to ask questions – open questions – that encourage the students to articulate their thinking.
Open questions that could be useful include:
• How many more dots do you need to fill that rectangle?
• I think you need five more dots to fill that rectangle – am I right?
• How many rectangles have you filled so far?
• If you threw a three, which rectangle would you put the dots in?
• I've thrown this ... which rectangle could that go in?
• I'm wondering what to do with this score. Can you help me?
• If I throw a six, how many spaces are left for me to put it in?

It is helpful to use the mathematical vocabulary of 'rectangle' rather than 'box' in the question. When you do this, you are reinforcing key mathematical language for the students, which they are in the process of learning how to use for themselves.

Whenever we ask a student a question, we need to allow enough wait time for them to respond. Feeling under pressure to answer a question quickly can be very uncomfortable and can prevent us from articulating our thoughts clearly.

Students learn to join in conversations by hearing what others are saying, listening to how words are being used and 'playing around' with those words themselves. This means that some modelling of talk around this game could be useful: between you and your teaching assistant, you and a puppet, or you and one of the more articulate students in the class. You may also like to capture some key phrases and words that you hear students using as they talk and put them up on your mathematics 'talk wall' or other display to support the students in using those words. Putting the words inside ready-cut-out speech bubbles can be very effective and create an appealing display. You may also like to stimulate some talk by joining in with a pair or group of students and 'playing dumb'. For example, you could throw the die and then put more dots in a rectangle on the grid than there should be (you throw a four and put five dots), or you could put more dots in a rectangle than are needed to make the 'full' six.

Listening carefully to what the students actually say is sometimes harder than we realise. We may not hear clearly what they say, as we may be expecting them to give us a fixed answer that we have pre-determined; this can be called 'guess what is in the teacher's head!' We need to be open to their answers and curious to understand what they are trying to say. Sometimes their answer may be part of a sentence, and our temptation is to finish the sentence off for them. See what happens if you just repeat back to them what they have said, using the same words they have used, and see if that helps them to finish the sentence. It may be that their answer is rather jumbled or rambling. Our temptation, in this case, is to rephrase it, reorganise it and repeat it back to the class in what we consider to be its new, improved form. We may hear ourselves saying something like, 'Thank you, Elspeth. What Elspeth said was ...' See what happens if, instead, you check with the child whether you have heard what they said correctly, by saying something such as, 'I think what you said was ... Am I right?' When saying what you thought they said, try to use the same words that they used.

After mastering the art of the open starter question and listening carefully to the students' responses, we then need to decide how we are going to respond to what they have said. Our aim is to understand as much as we can about where they are with their mathematical thinking and concept development.
Often it is helpful to respond with another question, phrase or statement that gives us an opportunity to explore their thinking further. This can help us probe for deeper understanding and evidence of mathematical thinking and reasoning. Here are some examples of possible follow-up questions:
• How many more do you need to fill that rectangle? Are you sure? Show me how you know that.
• If you threw a three, which rectangle could you put that in? I am curious to know why you chose that one. I would choose this one ... Are we both right?
• I've thrown a six, what can I do?
• What could happen if I threw another six?
• How many sixes can I throw and still fit them on the board?

Further ideas for the lesson can be found in the Teachers' Notes that accompany this game.
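For teachers who like to tinker, here is a small Python sketch of the game. The rules encoded are inferred from the article's hints (take turns rolling an ordinary 1-6 die, add that many dots to one rectangle of a six-rectangle grid, never exceed the box total of six, and whoever fills the last rectangle wins), so treat it as an illustration rather than the official NRICH rules; the box total and grid size are parameters, echoing the adaptations suggested in step A:

```python
import random

def play_dotty_six(num_boxes: int = 6, box_total: int = 6) -> int:
    """Simulate one game between two players who pick random legal boxes.

    Assumed rules (inferred from the article, not official): players
    alternate rolling a 1-6 die and adding that many dots to one box;
    no box may ever hold more than box_total dots; the player who
    fills the final box wins. If a roll fits nowhere, the turn passes.
    """
    boxes = [0] * num_boxes
    player = 1
    while True:
        roll = random.randint(1, 6)
        # A box is a legal target if the roll would not overfill it.
        legal = [i for i, dots in enumerate(boxes) if dots + roll <= box_total]
        if legal:
            boxes[random.choice(legal)] += roll
            if all(dots == box_total for dots in boxes):
                return player  # this player filled the final box and wins
        player = 3 - player  # pass the turn (also when no box fits the roll)

print("Winner: player", play_dotty_six())
```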
Have you wondered why we humans study statistics? Weren't mathematics and all its various branches frightening enough? Well, the necessity of statistical interpretation stems from the need to observe and understand the patterns, distributions and intrinsic behavior of datasets. Statistics enables us to analyze and study the nature of relationships within data and to ascertain the accuracy of interpretive procedures. In simple words, statistics helps us study the nature and behavior of a data set. With powerful descriptive and inferential techniques, statistics provides us with excellent, in-depth analytical tools for exploring a dataset's properties. One of the most common and potent statistical techniques for measuring the variability of data is the standard deviation. This article explores this simple but beneficial technique in detail and looks into its myriad applications. Let's start with some definitions.

Understanding Standard Deviation

To understand the idea behind standard deviation, we first need to know about the concept of variability and its necessity in data analysis. For any data set or distribution, variability defines the spread of the data: a simple concept that tells us how scattered, dispersed or spread out the data is. In the familiar bell-shaped picture of a normal distribution, the standard deviation defines the width of the curve, that is, the spread of the data values with respect to the midpoint of the dataset. Measures of central tendency, such as the mean, median and mode, allow statisticians to determine the center or midpoint of a distribution. Standard deviation is a measure of variability: the variation of the data with respect to the distribution's central point. Along with the range and the interquartile range, the standard deviation is used frequently to study, describe and make inferences about distributions. From an application perspective, standard deviation allows us to discern vital underlying information or causes behind a data pattern.

Using Standard Deviation

As mentioned, the standard deviation allows statisticians to discern the variability of data from the central point. Let us find out how it does so. Calculate the standard deviation of the following distribution: 6, 8, 10, 12, 14, 16, 18, 20.

As per the definition of standard deviation, we need to determine the variability of the distribution's elements with respect to some central value or midpoint. The mean or expected value is the midpoint from which the dispersion is measured; it is nothing but the average of the elements in the data set. Let the dataset be denoted by X = {6, 8, 10, 12, 14, 16, 18, 20} and let n be the number of elements in X, here n = 8.
- Mean or average: µ = (X₁ + X₂ + ... + Xₙ) / n. In this case, µ = 104 / 8 = 13.
- Next, we calculate the variance of the data. It is nothing more than the spread of each data point from the mean µ. The sample variance of X is denoted by σ² and calculated as σ² = Var(X) = [(X₁ − µ)² + (X₂ − µ)² + ... + (Xₙ − µ)²] / (n − 1). For this problem, the variance comes to ((6−13)² + (8−13)² + (10−13)² + (12−13)² + (14−13)² + (16−13)² + (18−13)² + (20−13)²) / (n − 1) = 168 / 7 = 24.
- The standard deviation is the square root of the variance. For our example, σ = √24 ≈ 4.899.

And that is how we calculate the standard deviation for any data distribution. Standard deviation calculators follow the above steps and formulas to determine sample or population standard deviation.
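To make the arithmetic concrete, here is a minimal Python sketch that reproduces the three steps above and cross-checks the result against the standard library's statistics module:

```python
import math
import statistics

data = [6, 8, 10, 12, 14, 16, 18, 20]
n = len(data)

# Step 1: the mean, the central point from which dispersion is measured.
mu = sum(data) / n                                     # 104 / 8 = 13.0

# Step 2: the sample variance, dividing by n - 1.
variance = sum((x - mu) ** 2 for x in data) / (n - 1)  # 168 / 7 = 24.0

# Step 3: the standard deviation is the square root of the variance.
sd = math.sqrt(variance)                               # about 4.899

print(f"mean = {mu}, variance = {variance}, sd = {sd:.3f}")
print(f"library check: {statistics.stdev(data):.3f}")  # also about 4.899
```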
Larger standard deviations indicate greater variability in the data, whereas smaller values indicate less variability.

Normal Distributions and Standard Deviations

Standard deviations are calculated from the mean, a measure of central tendency. If we plot the dispersion or variability, we obtain a bell-shaped or normal distribution curve. The standard deviation defines the width of any normal distribution curve: the extent of dispersion of the data elements from the midpoint (mean). Here is the graph for the above data set, done in Excel. As can be observed, the curve crests at the midpoint, 13, which is the mean used to calculate the dataset's standard deviation. Remember that not every data set follows a normal distribution pattern. The above curve is also known as the probability density function curve, as it denotes a variable's density over a given range (that is, the nature of the data values in our set). Another curve, the cumulative distribution function curve, shows the overall trend of change in the variable or data set. For our problem, we obtain the following figure.

Standard deviation is a powerful statistical tool and finds widespread applications in data science and machine learning.

Applying Standard Deviation

The central idea behind the standard deviation is observing the data's tendency to deviate or disperse from the mean or expected value. Calculating the SD of a sample or a whole population can help identify trends and patterns pertaining to that data.
- In biostatistics, variance and standard deviation are used to observe, ascertain and investigate biological traits or phenomena in a population.
- Population censuses employ the standard deviation to characterize age, gender and other demographic parameters.
- The standard deviation has huge applications on the business front. From predicting share prices to risk management, the technique is used to measure and estimate probable events.
- In data science and business intelligence, the standard deviation is a rudimentary tool to study, analyze and mine data. Coupled with probability, statistical techniques such as the standard deviation help business analysts and data scientists make accurate predictions from big amounts of data.
- Data is central to AI, machine learning, deep learning and the like. As in data science, statistical methods like standard deviation and hypothesis testing are used to study massive amounts of data. These methods enable the extraction of insightful information from large datasets, which is then used to train machine learning models.

The standard deviation has also been used to design loss functions in deep learning networks. Loss functions allow an AI system to fine-tune its predictions through careful analysis and comparison of training results. Deviation-based loss functions calculate the variability, or deviation, of the model's outputs from the expected or desired values; the model then makes the necessary corrections to improve the accuracy of its results (a small illustrative sketch appears at the end of this article).

Well, that brings us to the end of this write-up. Let's hope this little article on standard deviation and descriptive statistics was informative and educative enough for you. Remember, hard work, practice and intelligence are crucial to becoming a stat pro. So give it your all, and seek writing assistance from genuine statistics assignment writing services in case of any difficulty. All the best!

Author bio: Ronald McLean is a statistics professor at a reputed university in Ohio, USA.
He is also an avid data science enthusiast, freelance writer, blogger and part-time tutor at MyAssignmenthelp.com, a leading digital academic writing service in the United States.
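As promised above, here is a minimal sketch of a deviation-based loss in Python with NumPy. It illustrates the general idea only: the function name and the toy numbers are invented for this example, and real deep learning frameworks ship their own loss implementations.

```python
import numpy as np

def deviation_loss(predictions: np.ndarray, targets: np.ndarray) -> float:
    """Mean squared deviation of the model's outputs from the desired values.

    This mirrors the variance computation earlier in the article: square
    each deviation from the target, then average. Training drives it down.
    """
    deviations = predictions - targets
    return float(np.mean(deviations ** 2))

# Toy usage: the loss shrinks as the predictions approach the targets.
targets = np.array([13.0, 13.0, 13.0])
print(deviation_loss(np.array([6.0, 14.0, 20.0]), targets))   # 33.0 (far off)
print(deviation_loss(np.array([12.5, 13.0, 13.5]), targets))  # ~0.167 (close)
```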
Chemical Reactions Vocabulary

Terms in this set (17)
Balanced chemical equation: chemical equation with the same number of atoms of each element on both sides of the equation.
Catalyst: substance that speeds up a chemical reaction without being permanently changed itself.
Chemical equation: shorthand method to describe chemical reactions using chemical formulas and symbols.
Chemical reaction: process in which one or more substances are changed into new substances.
Coefficient: number in a chemical equation that represents the number of units of each substance taking part in a chemical reaction.
Decomposition reaction: chemical reaction in which one substance breaks down into two or more substances.
Double displacement reaction: chemical reaction that produces a precipitate, water, or a gas when two ionic compounds in a solution are combined.
Endergonic reaction: chemical reaction that requires energy to proceed.
Endothermic reaction: chemical reaction that requires heat energy to proceed.
Exergonic reaction: chemical reaction that releases some sort of energy, such as light or heat.
Exothermic reaction: chemical reaction in which energy is primarily given off in the form of heat.
Inhibitor: substance that slows down a chemical reaction or prevents it from occurring by combining with a reactant.
Precipitate: insoluble compound that comes out of solution during a double displacement reaction.
Product: in a chemical reaction, the new substance that is formed.
Reactant: in a chemical reaction, the substance that reacts.
Single displacement reaction: chemical reaction in which one element replaces another element in a compound.
Synthesis reaction: chemical reaction in which two or more substances combine to form a different substance.
The conclusion contrasts with prior theories that Earth's water came from comets or asteroids. The findings suggest our planet may have always been 'wet'. Researchers from the Centre de Recherches Petrographiques et Geochimiques (CRPG, CNRS) determined that a type of meteorite contains sufficient hydrogen to deliver at least three times the amount of water in the Earth's oceans. Enstatite chondrite meteorites are made up of material from the inner solar system; this is essentially the same matter that originally made up the Earth. Lead author Laurette Piani, a researcher at CRPG, said: "Our discovery shows that the Earth's building blocks might have significantly contributed to the Earth's water. Hydrogen-bearing material was present in the inner solar system at the time of the rocky planet formation, even though the temperatures were too high for water to condense." The findings from this study are all the more unexpected because the Earth's building blocks are often presumed to be dry. They originate from inner zones of the solar system where temperatures were too high for water to condense and form other solids during planet formation. The meteorites offer tantalising evidence that water did not necessarily come from far away. Lionel Vacher, a postdoctoral researcher at Washington University, said: "The most interesting part of the discovery for me is that enstatite chondrites, which were believed to be almost 'dry', contain an unexpectedly high abundance of water." Enstatite chondrites are rare, making up only about 2 percent of known meteorites in collections. However, their isotopic similarity to Earth makes them particularly compelling. Enstatite chondrites have similar oxygen, titanium and calcium isotopes to Earth, and this landmark new study revealed that their hydrogen and nitrogen isotopes are also similar to Earth's. In the study of extraterrestrial materials, the abundances of an element's isotopes serve as a distinctive signature identifying where the element originated. Dr Vacher said: "If enstatite chondrites were effectively the building blocks of our planet, as strongly suggested by their similar isotopic compositions, this result implies that these types of chondrites supplied enough water to Earth to explain the origin of Earth's water, which is amazing!" The researchers also suggest that a large amount of the nitrogen in the Earth's atmosphere could have come from the enstatite chondrite meteorites. Dr Piani said: "Only a few pristine enstatite chondrites exist: ones that were not altered on their asteroid nor on Earth. In our study we have carefully selected the enstatite chondrite meteorites and applied a special analytical procedure to avoid being biased by the input of terrestrial water." Coupling two analytical techniques, conventional mass spectrometry and secondary ion mass spectrometry (SIMS), allowed the researchers to precisely measure the content and composition of the small amounts of water in the meteorites. Dr Piani said that prior to this study "it was commonly assumed that these chondrites formed close to the sun". She added: "Enstatite chondrites were thus commonly considered 'dry', and this frequently reasserted assumption has probably prevented any exhaustive analyses to be done for hydrogen."
The history of the Polish-Lithuanian Commonwealth (1569-1648) covers a period in the history of Poland and Lithuania before their joint state was subjected to devastating wars in the middle of the 17th century. The Union of Lublin of 1569 established the Polish-Lithuanian Commonwealth, a more closely unified federal state, replacing the previously existing personal union of the two countries. The Union was largely run by the Polish and increasingly Polonized Lithuanian and Ruthenian nobility, through the system of the central parliament and local assemblies, but from 1573 it was led by elected kings. The formal rule of the nobility, which was proportionally more numerous than in other European countries, constituted a sophisticated early democratic system, in contrast to the absolute monarchies prevalent at that time in the rest of Europe. The Polish-Lithuanian Union had become an influential player in Europe and a vital cultural entity, spreading Western culture eastward. In the second half of the 16th and the first half of the 17th century, the Polish-Lithuanian Commonwealth was a huge state in central-eastern Europe, with an area approaching one million square kilometers. Following the gains of the Reformation (the Warsaw Confederation of 1573 was the culmination of religious toleration processes unique in Europe), the Catholic Church embarked on an ideological counter-offensive, and the Counter-Reformation claimed many converts from Protestant circles. Disagreements over, and difficulties with, the assimilation of the eastern Ruthenian populations of the Commonwealth had become clearly discernible. At an earlier stage (from the late 16th century), they manifested themselves in the religious Union of Brest, which split the Eastern Christians of the Commonwealth, and on the military front, in a series of Cossack uprisings. The Commonwealth, assertive militarily under King Stephen Báthory, suffered from dynastic distractions during the reigns of the Vasa kings Sigismund III and Władysław IV. It had also become a playground of internal conflicts, in which the kings, powerful magnates and factions of the nobility were the main actors. The Commonwealth fought wars with Russia, Sweden and the Ottoman Empire. At the Commonwealth's height, some of its powerful neighbors experienced difficulties of their own, and the Polish-Lithuanian state sought domination in Eastern Europe, in particular over Russia. Allied with the Habsburg Monarchy, it did not directly participate in the Thirty Years' War. In 1577 Tsar Ivan IV of Russia undertook hostilities in the Livonian region, which resulted in his takeover of most of the area and caused the Polish-Lithuanian involvement in the Livonian War. The successful counter-offensive led by King Báthory and Jan Zamoyski resulted in the peace of 1582 and the retaking of much of the territory contested with Russia, with the Swedish forces establishing themselves in the far north (Estonia). Estonia was declared a part of the Commonwealth by Sigismund III in 1600, which gave rise to a war with Sweden over Livonia; the war lasted until 1611 without producing a definite outcome. In 1600, as Russia was entering a period of instability, the Commonwealth proposed a union with the Russian state. This failed move was followed by many other similarly unsuccessful, often adventurous attempts, some involving military invasions, others dynastic and diplomatic manipulations and scheming.
While the differences between the two societies and empires proved in the end too formidable to overcome, the Polish-Lithuanian state ended up in 1619, after the Truce of Deulino, with the greatest territorial expansion in its history. At the same time it was weakened by the huge military effort it had made. In 1620 the Ottoman Empire under Sultan Osman II declared war against the Commonwealth. At the disastrous Battle of Țuțora, Hetman Stanisław Żółkiewski was killed, and the Commonwealth's situation with respect to the Turkish-Tatar invasion forces became very precarious. A mobilization in Poland-Lithuania followed, and when Hetman Jan Karol Chodkiewicz's army withstood fierce enemy assaults at the Battle of Khotyn (1621), the situation improved on the southeastern front. More warfare with the Ottomans followed in 1633-1634, and vast expanses of the Commonwealth were subjected to Tatar incursions and slave-taking expeditions throughout the period. War with Sweden, now under Gustavus Adolphus, resumed in 1621 with his attack on Riga, followed by the Swedish occupation of much of Livonia, control of the Baltic Sea coast up to Puck and the blockade of Danzig. The Commonwealth, exhausted by the warfare that had taken place elsewhere, mustered a response in 1626-1627, utilizing the military talents of Hetman Stanisław Koniecpolski and help from Austria. Under pressure from several European powers, the campaign was stopped and ended in the Truce of Altmark, which left in Swedish hands much of what Gustavus Adolphus had conquered. Another war with Russia followed in 1632 and was concluded without much change in the status quo. King Władysław IV then proceeded to recover the lands lost to Sweden. At the conclusion of the hostilities, Sweden evacuated the cities and ports of Royal Prussia but kept most of Livonia. Courland, which had remained with the Commonwealth, assumed the servicing of Lithuania's Baltic trade. After Frederick William's last Prussian homage before the Polish king in 1641, the Commonwealth's position in regard to Prussia and its Hohenzollern rulers kept getting weaker. At the outset of the Polish-Lithuanian Commonwealth, in the second half of the 16th century, Poland-Lithuania became an elective monarchy, in which the king was elected by the hereditary nobility. The king would serve as the monarch until he died, at which time the country would hold another election. This monarchy has commonly been referred to as a rzeczpospolita or republic, because of the high degree of influence wielded by the noble classes, often seen as a single non-homogeneous class. In 1572, Sigismund II Augustus, the last king of the Jagiellonian dynasty, died without any heirs. The political system was not prepared for this eventuality, as there was no method of choosing a new king. After much debate it was determined that the entire nobility of Poland and Lithuania would decide who the king was to be. The nobility were to gather at Wola, near Warsaw, to vote in the royal election. The election of Polish kings lasted until the Partitions of Poland. The elected kings in chronological order were: Henry of Valois, Anna Jagiellon, Stephen Báthory, Sigismund III Vasa, Władysław IV, John II Casimir, Michael Korybut Wiśniowiecki, John III Sobieski, Augustus II the Strong, Stanisław Leszczyński, Augustus III and Stanisław August Poniatowski. The first Polish royal election was held in 1573.
The four men running for the office were Henry of Valois, who was the brother of King Charles IX of France; Tsar Ivan IV of Russia; Archduke Ernest of Austria; and King John III of Sweden. Henry of Valois ended up the winner. But after serving as the Polish king for only four months, he received the news that his brother, the King of France, had died. Henry of Valois then abandoned his Polish post and went back to France, where he succeeded to the throne as Henry III of France. A few of the elected kings left a lasting mark on the Commonwealth. Stephen Báthory was determined to reassert the deteriorated royal prerogative, at the cost of alienating the powerful noble families. Sigismund III, Władysław IV and John Casimir were all of the Swedish House of Vasa; preoccupation with foreign and dynastic affairs prevented them from making a major contribution to the stability of Poland-Lithuania. John III Sobieski commanded the allied Relief of Vienna operation in 1683, which turned out to be the last great victory of the "Republic of Both Nations". Stanisław August Poniatowski, the last of the Polish kings, was a controversial figure. On the one hand, he was a driving force behind the substantial and constructive reforms belatedly undertaken by the Commonwealth. On the other, by his weakness and lack of resolve, especially in dealing with imperial Russia, he doomed the reforms together with the country they were supposed to help. The Polish-Lithuanian Commonwealth, following the Union of Lublin, became a counterpoint to the absolute monarchies gaining power in Europe. Its quasi-democratic political system of Golden Liberty, albeit limited to the nobility, was mostly unprecedented in the history of Europe, and in itself it constituted a fundamental precedent for the later development of European constitutional monarchies. However, the series of power struggles between the lesser nobility (szlachta), the higher nobility (magnates) and the elected kings undermined citizenship values and gradually eroded the government's authority and its ability to function and provide for national defense. The infamous liberum veto procedure was used to paralyze parliamentary proceedings beginning in the second half of the 17th century. After the series of devastating wars in the middle of the 17th century (most notably the Chmielnicki Uprising and the Deluge), Poland-Lithuania stopped being an influential player in the politics of Europe. During the wars the Commonwealth lost an estimated one-third of its population (higher losses than during World War II). Its economy and growth were further damaged by the nobility's reliance on agriculture and serfdom, which, combined with the weakness of the urban burgher class, delayed the industrialization of the country. By the beginning of the 18th century, the Polish-Lithuanian Commonwealth, one of the largest and most populous European states, was little more than a pawn of its neighbors (the Russian Empire, Prussia and Austria), who interfered in its domestic politics almost at will. In the second half of the 18th century, the Commonwealth was repeatedly partitioned by the neighboring powers and ceased to exist. The agricultural trade boom in Eastern Europe showed the first signs of the approaching crisis in the 1580s, when food prices stopped increasing. There followed a gradual decline in agricultural prices, a price depression initially present in Western Europe.
The negative consequences of this process for the folwark economies of the East reached their culmination in the second half of the 17th century. Further economic aggravation resulted from the Europe-wide devaluation of the currency around 1620, caused by the influx of silver from the Western Hemisphere. At that time, however, massive amounts of Polish grain were still being exported through Danzig (Gdańsk). The Commonwealth nobility took a variety of steps to combat the crisis and keep up high production levels, burdening in particular the serfs with further heavy obligations. The nobles were also forcibly buying or taking over the properties of the hitherto more affluent categories of peasants, a phenomenon especially pronounced from the mid-17th century. The capital and energy of urban entrepreneurs affected the development of mining and metallurgy during the earlier Commonwealth period. There were several hundred hammersmith shops at the turn of the 17th century, and great ironworks furnaces were built in the first half of that century. The mining and metallurgy of silver, copper and lead were also developed, and an expansion of salt production took place in Wieliczka, Bochnia and elsewhere. After about 1600, some of the industrial enterprises were increasingly taken over by landowners who used serf labor, which led to their neglect and decline in the second half of the 17th century. Danzig remained practically autonomous and adamant about protecting its status and foreign trade monopoly. The Karnkowski Statutes of 1570 gave Polish kings control over maritime commerce, but not even Stephen Báthory, who resorted to an armed intervention against the city, was able to enforce them. Other Polish cities remained steady and prosperous through the first half of the 17th century; the war disasters in the middle of that century devastated the urban classes. A rigid legal system of social separation, intended to prevent any inter-class mobility, matured around the first half of the 17th century. But the nobility's goal of becoming self-contained and impermeable to newcomers was never fully realized, as in practice even peasants on occasion acquired noble status. Numerous later Polish szlachta clans had such "illegitimate" beginnings. The szlachta found justification for their self-appointed dominant role in a peculiar set of attitudes they had adopted, known as sarmatism. The Union of Lublin accelerated the process of massive Polonization of the Lithuanian and Rus' elites and general nobility in Lithuania and the eastern borderlands, a process that retarded the national development of the local populations there. In 1563, Sigismund Augustus belatedly allowed the Eastern Orthodox Lithuanian nobility access to the highest offices in the Duchy, but by that time the act was of little practical consequence, as there were few Orthodox nobles of any standing left and the encroaching Catholic Counter-Reformation would soon nullify the gains. Many magnate families of the east were of Ruthenian origin; their inclusion in the enlarged Crown made the magnate class much stronger politically and economically. The regular szlachta, increasingly dominated by the great landowners, lacked the will to align themselves with the Cossack settlers in Ukraine to counterbalance the magnates' power, and in the area of Cossack acceptance, integration and rights they resorted to delayed and ineffective half-measures. The peasantry was subjected to heavier burdens and more oppression.
For those reasons, the way in which the Polish-Lithuanian Commonwealth's expansion took place and developed aggravated both social and national tensions, introduced a fundamental instability into the system, and ultimately contributed to the future crises of the "Republic of Nobles". The increasingly uniform and, in the case of ethnic minorities, Polonized szlachta of the Commonwealth for the most part returned to the Roman Catholic religion, or if already Catholic remained Catholic, in the course of the 17th century. Already the Sandomierz Agreement of 1570, an early expression of the Protestant irenicism later prominent in Europe and Poland, had a self-defensive character, reflecting the intensification of Counter-Reformation pressure at that time. The agreement strengthened the Protestant position and made possible the religious freedom guarantees of the Warsaw Confederation in 1573. At the heyday of the Reformation in the Commonwealth, at the end of the 16th century, there were about one thousand Protestant congregations, nearly half of them Calvinist. Half a century later, only 50% of them had survived, with burgher Lutheranism suffering the smallest losses and the szlachta-dominated Calvinism and Nontrinitarianism (the Polish Brethren) the greatest. The closing of the Brethren's Racovian Academy and printing facility in Raków on charges of blasphemy in 1638 forewarned of more trouble to come. This Counter-Reformation offensive happened somewhat mysteriously in a country where there were no religious wars and where the state had not cooperated with the Catholic Church in eradicating or limiting competing denominations. Among the factors cited are the low Protestant involvement among the masses, especially the peasantry; the pro-Catholic position of the kings; the low level of involvement of the nobility once religious emancipation had been accomplished; internal divisions within the Protestant movement; and the rising intensity of the Catholic Church's propaganda. The ideological war between the Protestant and Catholic camps at first enriched the intellectual life of the Commonwealth. The Catholic Church responded to the challenges with internal reform, following the directions of the Council of Trent, officially accepted by the Polish Church in 1577 but not implemented until after 1589 and throughout the 17th century. There had been earlier reform efforts, originating with the lower clergy and, from about 1551, with Bishop Stanislaus Hosius (Stanisław Hozjusz) of Warmia, at that time a lone but ardent reformer among the Church hierarchy. At the turn of the 17th century, a number of Rome-educated bishops took over the Church administration at the diocese level, clergy discipline was enforced, and a rapid intensification of Counter-Reformation activities took place. Hosius brought the Jesuits to Poland and founded a college for them in Braniewo in 1564. Numerous Jesuit educational institutions and residences were established in the following decades, most often in the vicinity of centers of Protestant activity. Jesuit priests were carefully selected and well educated, of both noble and urban origins. They soon became highly influential at the royal court, while working hard within all segments of society. The Jesuit educational programs and Counter-Reformation propaganda utilized many innovative media techniques, often custom-tailored for the particular audience at hand, as well as time-tried methods of humanist instruction.
Preacher Piotr Skarga and Bible translator Jakub Wujek count among the prominent Jesuit personalities. Catholic efforts to win over the population countered the Protestant idea of a national church with the Polonization, or nationalization, of the Catholic Church in the Commonwealth, introducing a variety of native elements to make it more accessible and attractive to the masses. The Church hierarchy went along with the notion. The changes that took place during the 17th century defined the character of Polish Catholicism for centuries to come. The apex of Counter-Reformation activity fell at the turn of the 17th century, in the earlier years of the reign of Sigismund III Vasa (Zygmunt III Waza), who in cooperation with the Jesuits and some other Church circles attempted to strengthen the power of his monarchy. The King tried to limit access to higher offices to Catholics, and anti-Protestant riots took place in some cities. During the Sandomierz Rebellion of 1606 the Protestants supported the anti-King opposition in large numbers. Nevertheless, the massive wave of szlachta's return to Catholicism could not be stopped. The failure of the Protestant movement to form an alliance with the Eastern Orthodox Christians, the inhabitants of the eastern portion of the Commonwealth, contributed to the Protestants' downfall, although attempts were made at the joint Protestant-Orthodox congregations in Toruń in 1595 and in Vilnius in 1599. The Polish Catholic establishment would not miss the opportunity to form a union with the Orthodox, although its goal was rather the subjugation of the Eastern Rite Christians to the pope (the papacy solicited help in bringing the "schism" under control) and to the Commonwealth's Catholic centers of power. The Orthodox establishment was perceived as a security threat because of the Eastern Rite bishops' dependence on the Patriarchate of Constantinople at a time of worsening conflict with the Ottoman Empire, and because of a recent development, the establishment of the Moscow Patriarchate in 1589. The Patriarchate of Moscow then claimed ecclesiastical jurisdiction over the Orthodox Christians of the Polish-Lithuanian Commonwealth, which to many of them was a worrisome development, motivating them to accept the alternative option of union with the West. The union idea had the support of King Sigismund III and the Polish nobility in the east; opinions were divided among the church and lay leaders of the Eastern Orthodox faith. The Union of Brest act was negotiated and solemnly concluded in 1595-1596. It did not merge the Roman Catholic and Eastern Orthodox denominations, but it led to the establishment of the Uniate Church, with its Slavic-language liturgy, which was to become an Eastern Catholic Church, one of the Greek Catholic Churches (presently the Ukrainian Greek Catholic and Belarusian Greek Catholic Churches). The new church, of the Byzantine Rite, accepted papal supremacy while retaining in most respects its Eastern Rite character. The compromise union was flawed from the beginning, because despite the initial agreement the Greek-Catholic bishops were not, like their Roman Catholic counterparts, seated in the Senate, and the Eastern Rite participants of the union were not granted the full general equality they expected. The Union of Brest increased antagonisms among the Belarusian and Ukrainian communities of the Commonwealth, within which the Orthodox Church remained the most potent religious force.
It added to the already prominent ethnic and class fragmentation and became one more reason for the internal infighting that was to impair the Republic. The Eastern Orthodox nobility, branded "Disuniates" and deprived of legal standing, commenced a fight for their rights under the leadership of Konstanty Ostrogski. Prince Ostrogski had been a leader of an Orthodox intellectual revival in Polish Ukraine. In 1576, he founded an elite liberal arts secondary and academic school, the Ostroh Academy, with trilingual instruction. In 1581, he and his academy were instrumental in the publication of the Ostroh Bible, the Bible's first scholarly Orthodox Church Slavonic edition. As a result of these efforts, parliamentary statutes of 1607, 1609 and 1635 again recognized the Orthodox religion as one of the two equal Eastern churches. The restoration of the Orthodox hierarchy and administrative structure proved difficult (most bishops had become Uniates, and their Orthodox replacements of 1620 and 1621 were not recognized by the Commonwealth) and was officially accomplished only during the reign of Władysław IV. Władysław, facing the Cossack rebellions, put an end to decades of efforts aimed at using the Uniate Church as an instrument for the attempted elimination of the Orthodox religion. By that time many of the Orthodox nobles had become Catholics, and the Orthodox leadership fell into the hands of townspeople and lesser nobility organized into church brotherhoods, and of the new power in the east, the Cossack warrior class. Metropolitan Peter Mogila of Kiev, who organized an influential academy there, contributed greatly to the rebuilding and reform of the Orthodox Church. The Uniate Church, created for the Ruthenian population of the Commonwealth, gradually switched to the use of Polish in its administrative dealings. From about 1650, the majority of the Church's archival documents were generated in Polish rather than in the previously used Ruthenian (its Chancery Slavonic variety). The Baroque style dominated Polish culture from the 1580s to the mid-18th century, building on the achievements of the Renaissance and for a while coexisting with it. Initially Baroque artists and intellectuals, torn between the two competing views of the world, enjoyed wide latitude and freedom of expression. Soon, however, the Counter-Reformation instituted a binding point of view that invoked the medieval tradition, imposed censorship in education and elsewhere (the index of prohibited books in Poland from 1617), and straightened out their convoluted ways. By the middle of the 17th century the doctrine had been firmly reestablished, and sarmatism and religious zealotry had become the norm. The artistic tastes of the epoch often acquired an increasingly Oriental character. In contrast with the integrative tendencies of the previous period, the burgher and nobility cultural spheres went their separate ways. The Renaissance publicist Stanisław Orzechowski had already provided the foundations for the Baroque szlachta's political thinking. At that time there were about forty Jesuit colleges (secondary schools) scattered throughout the Commonwealth, educating mostly szlachta sons and, to a lesser degree, the sons of burghers. Jan Zamoyski, Chancellor of the Crown, who built the town of Zamość, established an academy there in 1594; after Zamoyski's death, however, it functioned only as a gymnasium. The first two Vasa kings were well known for patronizing both the arts and the sciences.
After that, the Commonwealth's science experienced a general decline, which paralleled the wartime decline of the burgher class. By the mid-16th century Poland's university, the Academy of Kraków, had entered a crisis stage, and by the early 17th century it had regressed into Counter-Reformational conformism. The Jesuits took advantage of the infighting and established a university college in Vilnius in 1579, but their efforts aimed at taking over the Academy were unsuccessful. Under the circumstances, many elected to pursue their studies abroad. Jan Brożek, a rector of the Kraków University, was a multidisciplinary scholar who worked on number theory and promoted Copernicus' work. He was banned by the Church in 1616, and his anti-Jesuit pamphlet was publicly burned. Brożek's co-worker, Stanisław Pudłowski, worked on a system of measurements based on physical phenomena. Michał Sędziwój (Sendivogius Polonus) was an alchemist famous throughout Europe, who wrote a number of treatises in several languages, beginning with Novum Lumen Chymicum (1604, with over fifty editions and translations in the 17th and 18th centuries). A member of Emperor Rudolph II's circle of scientists and sages, he is believed by some authorities to have been a pioneer chemist and a discoverer of oxygen, long before Lavoisier (Sendivogius' works were studied by leading scientists, including Isaac Newton). The early Baroque period produced a number of noted poets. Sebastian Grabowiecki wrote metaphysical and mystical religious poetry representing the passive current of Quietism. Another szlachta poet, Samuel Twardowski, participated in military and other historic events; among the genres he pursued was epic poetry. Urban poetry was quite vital until the middle of the 17th century; the plebeian poets criticized the existing social order and continued to work within the ambience of the Renaissance style. The creations of John of Kijany contained a hearty dose of social radicalism. The moralist Sebastian Klonowic wrote the symbolic poem Flis, set in the world of the Vistula river rafting trade. Szymon Szymonowic in his Pastorals portrayed, without embellishment, the hardships of serf life. Maciej Sarbiewski, a Jesuit, was highly appreciated throughout Europe for his Latin poetry. The preeminent prose of the period was written by Piotr Skarga, the preacher-orator; in his Sejm Sermons Skarga severely criticized the nobility and the state, while expressing his support for a system based on a strong monarchy. The writing of memoirs became highly developed in the 17th century; Peregrination to the Holy Land by Mikołaj Radziwiłł and Beginning and Progress of the Muscovy War by Stanisław Żółkiewski, one of the greatest Polish military commanders, are the best-known examples. One form of art particularly apt for Baroque purposes was the theater. Theatrical shows were most often staged in conjunction with religious occasions and moralizing, and they commonly utilized folk stylization. School theaters became common in both Protestant and Catholic secondary schools. A permanent court theater with an orchestra was established by Władysław IV at the Royal Castle in Warsaw in 1637; the actor troupe, dominated by Italians, performed primarily the Italian opera and ballet repertoire. Music, both sacred and secular, kept developing during the Baroque period. High-quality pipe organs were built in churches from the 17th century on; a fine specimen has been preserved in Leżajsk.
Sigismund III supported an internationally renowned ensemble of sixty musicians. Working with that orchestra were Adam Jarzębski and his contemporary Marcin Mielczewski, the chief composers of the courts of Sigismund III and Władysław IV. Jan Aleksander Gorczyn, a royal secretary, published a popular music tutorial for beginners in 1647. Between 1580 and 1600 Jan Zamoyski commissioned the Venetian architect Bernardo Morando to build the city of Zamość. The town and its fortifications were designed to consistently implement the Renaissance and Mannerist aesthetic paradigms. Mannerism is the name sometimes given to the period in art history during which the late Renaissance coexisted with the early Baroque; in Poland, this was the last quarter of the 16th century and the first quarter of the 17th century. Polish art remained influenced by the Italian centers, increasingly Rome, and increasingly by the art of the Netherlands. As a fusion of imported and local elements, it evolved into an original Polish form of the Baroque. Baroque art developed to a great extent under the patronage of the Catholic Church, which used art to facilitate religious influence, allocating for this purpose the very substantial financial resources at its disposal. The most important art form in this context was architecture, rather austere at first and accompanied in due time by progressively more elaborate and lavish facade and interior design concepts. Beginning in the 1580s, a number of churches patterned after the Church of the Gesù in Rome were built. Gothic and other older churches were increasingly supplemented with Baroque architectural additions, sculptures, wall paintings and other ornaments, as is conspicuous in many Polish churches today. The Royal Castle in Warsaw, after 1596 the main residence of the monarchs, was enlarged and rebuilt around 1611. The Ujazdów Castle (1620s) of the Polish kings turned out to be architecturally more influential, its design having been followed by a number of Baroque magnate residences. The role of Baroque sculpture was usually subordinate, as decorative elements of exteriors and interiors and on tombstones. A famous exception is Sigismund's Column, the monument to Sigismund III Vasa (1644) in front of Warsaw's Royal Castle. Realistic religious painting, sometimes in entire series of related works, served a didactic purpose. Nudity and mythological themes were banned, but otherwise fancy collections of Western paintings were in vogue. Sigismund III brought Tommaso Dolabella from Venice. A prolific painter, he was to spend the rest of his life in Kraków and give rise to a school of Polish painters working under his influence. Danzig (Gdańsk) was also a center for the graphic arts; the painters Herman Han and Bartholomäus Strobel worked there, as did the engravers Willem Hondius and Jeremias Falck. Compared with the previous century, even wider circles of society participated in cultural activities, but Catholic Counter-Reformation pressure resulted in diminished diversity. Catastrophic wars in the middle of the century greatly weakened the Commonwealth's cultural development and influence in the region.
After the Union of Lublin, the Senate of the general sejm of the Commonwealth was augmented by Lithuanian high officials; the position of the lay and ecclesiastical lords, who served for life as members of the Senate, was strengthened, as the already outnumbered middle szlachta high office holders now had proportionally fewer representatives in the upper chamber. The Senate could also be convened separately by the king in its traditional capacity of royal council, apart from any sejm's formal deliberations, and szlachta's attempts to limit the upper chamber's role were not successful. After the formal union and the addition of deputies from the Grand Duchy and from Royal Prussia, which was also more fully integrated with the Crown in 1569, there were about 170 regional deputies in the lower chamber (referred to as the Sejm) and 140 senators. Sejm deputies doing legislative work were generally not able to act as they pleased. Regional szlachta assemblies, the sejmiks, were summoned before sessions of the general sejm; there the local nobility provided their representatives with copious instructions on how to proceed and how to protect the interests of the area involved. Another sejmik was called after the Sejm's conclusion, at which the deputies would report to their constituency on what had been accomplished. The sejmiks became an important part of the Commonwealth's parliamentary life, complementing the role of the general sejm. They sometimes provided detailed implementations for the general proclamations of sejms, or made legislative decisions during periods when the Sejm was not in session, at times communicating directly with the monarch. There was little significant parliamentary representation for the burgher class, and none for the peasants. The Jewish communities sent representatives to their own Va'ad, or Council of Four Lands. The narrow social base of the Commonwealth's parliamentary system was detrimental to its future development and to the future of the Polish-Lithuanian statehood. From 1573 an "ordinary" general sejm was to be convened every two years, for a period of six weeks. A king could summon an "extraordinary" sejm for two weeks, as necessitated by circumstances; an extraordinary sejm could be prolonged if the parliamentarians assented. After the Union, the Sejm of the Republic deliberated in the more centrally located Warsaw, except that Kraków remained the location of coronation sejms. The turn of the 17th century also brought a permanent migration of the royal court from Kraków to Warsaw. The order of sejm proceedings was formalized in the 17th century. The lower chamber would do most of the statute preparation work. The last several days were spent working together with the Senate and the king, when the final versions were agreed upon and decisions made; the finished legislative product had to have the consent of all three legislating estates of the realm: the Sejm, the Senate, and the monarch. The lower chamber's rule of unanimity was not rigorously enforced during the first half of the 17th century. The general sejm was the highest organ of collective and consensus-based state power. The Sejm's supreme court, presided over by the king, decided the most serious of legal cases. During the second half of the 17th century, for a variety of reasons, including abuse of the unanimity rule (the liberum veto), the sejm's effectiveness declined, and the void was increasingly filled by the sejmiks, where in practice the bulk of the government's work was getting done.
The system of noble democracy became more firmly rooted during the first interregnum, after the death of Sigismund II Augustus, who following the Union of Lublin had wanted to reassert his personal power rather than become an executor of szlachta's will. A lack of agreement concerning the method and timing of the election of his successor was one of the casualties of the situation, and the conflict strengthened the Senate-magnate camp. After the monarch's death in 1572, to protect its common interests, szlachta moved to establish territorial confederations (kapturs) as provincial governments, through which public order was protected and a basic court system provided. The magnates were able to push through their candidate for interrex, or regent to hold the office until a new king was sworn in, in the person of the primate, Jakub Uchański. The Senate took over the election preparations. The establishment's proposition of universal szlachta participation (rather than election by the Sejm) appeared at that time to be the right idea to most szlachta factions; in reality, during this first as well as subsequent elections, the magnates subordinated and directed the electorate, especially the poorer of szlachta. During the interregnum the szlachta prepared a set of rules and limitations for the future monarch to obey, as a safeguard to ensure that the new king, who was going to be a foreigner, complied with the peculiarities of the Commonwealth's political system and respected the privileges of the nobility. As Henry of Valois was the first to sign the rules, they became known as the Henrician Articles. The articles also specified the wolna elekcja (free election) as the only way for any monarch's successor to assume the office, thus precluding any possibility of hereditary monarchy in the future. The Henrician Articles summarized the accumulated rights of the Polish nobility, including religious freedom guarantees, and introduced further restrictions on the elective king; as if that were not enough, Henry also signed the so-called pacta conventa, through which he accepted additional specific obligations. The newly crowned Henry soon embarked on a course of action intended to free him from all the encumbrances imposed, but the outcome of this power struggle was never to be determined. One year after the election, in June 1574, upon learning of his brother's death, Henry secretly left for France. In 1575 the nobility commenced a new election process. The magnates tried to force the candidacy of Emperor Maximilian II, and on 12 December Archbishop Uchański even announced his election. This effort was thwarted by the execution movement szlachta party led by Mikołaj Sienicki and Jan Zamoyski; their choice was Stephen Báthory, Prince of Transylvania. Sienicki quickly arranged for a 15 December proclamation of Anna Jagiellon, sister of Sigismund Augustus, as the reigning queen, with Stefan Batory added as her husband and king jure uxoris. Szlachta's pospolite ruszenie supported the selection with their arms. Batory took over Kraków, where the couple's crowning ceremony took place on 1 May 1576. Stephen Báthory's reign marks the end of szlachta's reform movement. The foreign king was skeptical of the Polish parliamentary system and had little appreciation for what the execution movement activists had been trying to accomplish. Batory's relations with Sienicki soon deteriorated, while other szlachta leaders advanced within the nobility ranks, becoming senators or otherwise growing preoccupied with their own careers.
In 1578 in Poland and in 1581 in Lithuania, the reformers managed to move the out-of-date appellate court system from the monarch's domain to the Crown and Lithuanian Tribunals, run by the nobility. The cumbersome sejm and sejmik system, the ad hoc confederations, and the lack of efficient mechanisms for the implementation of the laws escaped the reformers' attention or will to persevere. Many thought that the glorified nobility rule had approached perfection. Jan Zamoyski, one of the most distinguished personalities of the period, became the king's principal adviser and manager. A highly educated and cultivated individual, a talented military chief and accomplished politician, he had often promoted himself as a tribune of his fellow szlachta. In fact, in a typical magnate manner, Zamoyski accumulated multiple offices and royal land grants, removing himself far from the reform movement ideals he had professed earlier. The king himself was a great military leader and far-sighted politician. Of Batory's confrontations with members of the nobility, the most famous case involved the Zborowski brothers: Samuel was executed on Zamoyski's orders, and Krzysztof was sentenced to banishment and property confiscation by the sejm court. A Hungarian, like other foreign rulers of Poland, Batory was concerned with the affairs of the country of his origin. Batory failed to enforce the Karnkowski Statutes and was therefore unable to control the foreign trade through Danzig (Gdańsk), which was to have highly negative economic and political consequences for the Republic. In cooperation with his chancellor and later hetman Jan Zamoyski, he was largely successful in the Livonian war. At that time the Commonwealth was able to increase the magnitude of its military effort: the combined armed forces available for a campaign, drawn from several sources, could be up to 60,000 men strong. King Batory initiated the creation of the piechota wybraniecka, an important peasant infantry formation. In 1577 Batory agreed to George Frederick of Brandenburg becoming a custodian for the mentally ill Albert Frederick, Duke of Prussia, which brought the two German polities closer together, to the detriment of the Commonwealth's long-term interests. King Sigismund Augustus' Dominium Maris Baltici program, aimed at securing Poland's access to and control over the portion of the Baltic region and ports in which the country had vital interests to protect, led to the Commonwealth's participation in the Livonian conflict, which also became another stage in the series of Lithuania's and Poland's confrontations with Russia. In 1563 Ivan IV took Polotsk. After the Stettin peace of 1570 (which involved several powers, including Sweden and Denmark) the Commonwealth remained in control of the main part of Livonia, including Riga and Pernau. In 1577 Ivan undertook a great expedition, taking over for himself, or for his vassal Magnus, Duke of Holstein, most of Livonia except for the coastal areas of Riga and Reval. A successful Polish-Lithuanian counter-offensive became possible as Batory was able to secure the necessary funding from the nobility. The Polish forces recovered Dünaburg and most of middle Livonia. The King and Zamoyski then opted for directly attacking the inland Russian territory necessary for keeping Russian communication lines to Livonia open and functioning. Polotsk was retaken in 1579 and the Velikiye Luki fortress fell in 1580.
The take-over of Pskov was attempted in 1581, but Ivan Petrovich Shuisky was able to defend the city despite a months-long siege. An armistice was arranged at Jam Zapolski in 1582 by the papal legate Antonio Possevino. The Russians evacuated all the Livonian castles they had captured, gave up the Polotsk area and left Velizh in Lithuanian hands. The Swedish forces, which took over Narva and most of Estonia, contributed to the victory. The Commonwealth ended up with possession of the continuous Baltic coast from Puck to Pernau. There were several candidates for the Commonwealth crown considered after the death of Stephen Báthory, including Archduke Maximilian of Austria. Anna Jagiellon proposed and pushed for the election of her nephew Sigismund Vasa, son of John III, King of Sweden, and Catherine Jagellon, and the Swedish heir apparent. The Zamoyski faction supported Sigismund, while the faction led by the Zborowski family wanted Maximilian; two separate elections took place and a civil war resulted. The Habsburg army entered Poland and attacked Kraków, but was repulsed there, and then, while retreating through Silesia, was crushed by the forces organized by Jan Zamoyski at the Battle of Byczyna (1588), where Maximilian was taken prisoner. In the meantime Sigismund had also arrived and was crowned in Kraków, which initiated his long reign in the Commonwealth (1587-1632) as Zygmunt III Waza. The prospect of a personal union with Sweden raised political and economic hopes for the Polish and Lithuanian ruling circles, including favorable Baltic trade conditions and a common front against Russia's expansion. Concerning the latter, however, control of Estonia soon became the bone of contention. Sigismund's ultra-Catholicism appeared threatening to the Swedish Protestant establishment and contributed to his dethronement in Sweden in 1599. Inclined to form an alliance with the Habsburgs (and even to give up the Polish crown to pursue his ambitions in Sweden), Sigismund conducted secret negotiations with them and married Archduchess Anna. Accused by Zamoyski of breaking his covenants, Sigismund III was humiliated during the sejm of 1592, which deepened his resentment of szlachta. Sigismund was bent on strengthening the power of the monarchy and on the Counter-Reformational promotion of the Catholic Church (Piotr Skarga was among his supporters). Indifferent to the increasingly common breaches of the Warsaw Confederation religious protections and to instances of violence against the Protestants, the King was opposed by religious minorities. The years 1605-1607 brought a fruitless confrontation between King Sigismund with his supporters and the coalition of opposition nobility. During the sejm of 1605 the royal court proposed a fundamental reform of the body itself: an adoption of majority rule instead of the traditional practice of unanimous acclamation by all deputies present. Jan Zamoyski in his last public address reduced himself to a defense of szlachta prerogatives, thus setting the stage for the demagoguery that was to dominate the Commonwealth's political culture for many decades. For the sejm of 1606 the royal faction, hoping to take advantage of the glorious Battle of Kircholm victory and other successes, submitted a more comprehensive constructive reform program. Instead the sejm became preoccupied with the dissident postulate of prosecuting instigators of religious disturbances directed against non-Catholics; advised by Skarga, the King refused his assent to the proposed statute.
The nobility opposition, suspecting an attempt against their liberties, called for a rokosz, or armed confederation. Tens of thousands of disaffected szlachta, led by the ultra-Catholic Mikołaj Zebrzydowski and the Calvinist Janusz Radziwiłł, congregated in August near Sandomierz, giving rise to the so-called Zebrzydowski Rebellion. The Sandomierz articles produced by the rebels were concerned mostly with placing further limitations on the monarch's power. Threatened by royal forces under Stanisław Żółkiewski, the confederates entered into an agreement with Sigismund, but then backed out of it and demanded the King's deposition. The ensuing civil war was resolved at the Battle of Guzów, where the szlachta was defeated in 1607. Afterwards, however, the magnate leaders of the pro-King faction made sure that Sigismund's position would remain precarious, leaving arbitration powers within the Senate's competence. Whatever was left of the execution movement had been thwarted together with the obstructionist szlachta elements, and a compromise solution to the crisis of authority was arrived at. But the victorious lords of the council had at their disposal no effective political machinery necessary to propagate the well-being of the Commonwealth, still in its Golden Age (or, as some now prefer, Silver Age), much further. In 1611 John Sigismund, Elector of Brandenburg, was allowed by the Commonwealth sejm to inherit the Duchy of Prussia fief after the death of Albert Frederick, the last duke of the Prussian Hohenzollern line. The Brandenburg Hohenzollern branch led the Duchy from 1618. The reforms of the execution movement had clearly established the Sejm as the central and dominant organ of state power. But in reality this situation did not last very long, as various destructive decentralizing tendencies, steps taken by the szlachta and the kings, progressively undermined and eroded the functionality and primacy of the central legislative organ. The resulting void was being filled during the late 16th and 17th centuries by the increasingly active and assertive territorial sejmiks, which provided a more accessible and direct forum for szlachta activists to promote their narrowly conceived local interests. The sejmiks established effective controls, in practice limiting the Sejm's authority, while themselves taking on an ever-broader range of state matters and local issues. In addition to the role of the over 70 sejmiks, destabilizing to the central authority, during the same period the often unpaid army began establishing its own "confederations", or rebellions. By plunder and terror the soldiers attempted to recover their compensation and pursue other, sometimes political, aims. Some reforms were pursued by the more enlightened szlachta, who wanted to expand the role of the Sejm at the expense of the monarch and the magnate factions, and by the elected kings. Sigismund III during the later part of his rule constructively cooperated with the Sejm, making sure that between 1616 and 1632 each session of the body produced the badly needed statutes. The increased efforts in the areas of taxation and maintenance of the military forces made possible the positive outcomes of some of the armed conflicts that took place during Sigismund's reign. There were not yet very many Cossacks in the mid-16th century in the south-eastern borderlands of Lithuania and Poland, but the first companies of Cossack light cavalry had already become incorporated into the Polish armed forces around that time.
During the reign of Sigismund III Vasa, the Cossack problem began to play its role as Rzeczpospolita's preeminent internal challenge of the 17th century. Conscious and planned colonization of the fertile but underdeveloped region was pioneered in the 1580s and 1590s by the Ruthenian dukes of Volhynia. Of the Poles, only Jan Zamoyski, who penetrated the Bracław area, was economically active there by the end of the 16th century. There and in the Kiev area Polish fortunes also began to develop, often through intermarriage with Ruthenian clans. In 1630 the great Ukrainian latifundia were dominated by Ruthenian families, such as the Ostrogski, Zbaraski and Zasławski. At the outset of the great civil war of 1648, the Polish settlers comprised barely 10% of the middle and petty nobility, for example in the well-researched Bracław Voivodeship and Kiev Voivodeship. The early Cossack rebellions were, therefore, instances of social uprising, rather than national anti-Polish movements. As class warfare they were ruthlessly stamped out by the state, which would sometimes take their leaders to Warsaw for execution. The Cossacks were a first semi-nomadic, then also settled East Slavic people of the Dnieper River area, who practiced brigandage and plunder and, renowned for their fighting prowess, early in their history assumed a military organization. Many of them were, or originated from, runaway peasants from the eastern and other areas of the Commonwealth or from Russia; other significant elements were townspeople and even nobility, who came from the region or migrated into Ukraine. The Cossacks considered themselves free and independent of any bondage and followed their own elected leaders, who originated from the more affluent strata of their society. There were tens of thousands of Cossacks already early in the 17th century. They frequently clashed with the neighboring Turks and Tatars and raided their Black Sea coastal settlements. Such excursions, executed by formal subjects of the Polish king, were intolerable from the point of view of the foreign relations of the Commonwealth, because they violated peace or interfered with the state's current policy toward the Ottoman Empire. During this earlier period of the Polish-Lithuanian Commonwealth, a separate Ukrainian national consciousness was being formed, influenced in part by the context and heroes of the Cossack uprisings. The legacy of Kievan Rus' was recognized, as was the heritage of the East Slavic Ruthenian language. The Cossacks felt themselves to be members of the "Rus' Orthodox nation" (the Uniate Church was practically eliminated in the Dnieper region in 1633). But seeing themselves also as members of the (Polish) "Republic-Fatherland", they dealt with sejms and kings as its subjects. The Cossacks and the Ruthenian nobility, until recently subjects of the Grand Duchy of Lithuania, were not formally or otherwise connected to the Tsardom of Russia. Many Cossacks were hired to participate in wars waged by the Commonwealth. This status resulted in privileges and often constituted a form of upward social mobility; the Cossacks resented the periodic reductions in their enrollment. Cossack rebellions or uprisings typically assumed the form of huge plebeian social movements. The Ottoman Empire demanded a total liquidation of the Cossack power. The Commonwealth, however, needed the Cossacks in the south-east, where they provided an effective buffer against the incursions of the Crimean Tatars.
The other way to quell the Cossack unrest would have been to grant nobility status to a substantial portion of their population and thus assimilate them into the Commonwealth's power structure, which was what the Cossacks aspired to. This solution was rejected by the magnates and szlachta for political, economic and cultural reasons while there was still time for reform, before disasters struck. The Polish-Lithuanian establishment instead shifted unsteadily between compromising with the Cossacks, allowing a limited, varying number, the so-called Cossack register (500 in 1582, 8,000 in the 1630s), to serve with the Commonwealth army (the rest were to be converted into serfdom, to help the magnates in colonizing the Dnieper area), and brutally using military force in an attempt to subdue them. Oppressive efforts to subjugate and economically exploit the Cossack territories and population in the Zaporizhia region, often led by Poles, including Crown tenants or their Jewish plenipotentiaries, Ruthenian nobles of the Commonwealth and even upper-rank Cossack officers, resulted in a series of Cossack uprisings, of which the early ones could have served as a warning for the szlachta legislators. While Ukraine was undergoing substantial economic development, the Cossacks and peasants were by and large not among the beneficiaries of the process. In 1591 the bloodily suppressed Kosiński Uprising was led by Krzysztof Kosiński. New fighting broke out already in 1594, when the Nalyvaiko Uprising engulfed large portions of Ukraine and Belarus. Hetman Stanisław Żółkiewski defeated the Cossack units in 1596 and Severyn Nalyvaiko was executed. A temporary pacification of relations followed in the early 17th century, when the many wars fought by the Commonwealth necessitated greater involvement by the registered Cossacks. But the Union of Brest resulted in new tensions, as the Cossacks had become dedicated adherents and defenders of Eastern Orthodoxy. The Time of Troubles period in Russia resulted in peasant rebellions, such as the one led by Ivan Bolotnikov, which also contributed to peasant unrest in the Commonwealth and to further insurgency by the Cossacks there. The uprising of Marek Zhmaylo of 1625 was confronted by Stanisław Koniecpolski and concluded with Mykhailo Doroshenko signing the Treaty of Kurukove. More fighting soon erupted and culminated in the "Taras night" of 1630, when the Cossack rebels under Taras Fedorovych turned against army units and noble estates. The Fedorovych Uprising was brought under control by Hetman Koniecpolski. These events were followed by an increase in the Cossack registry (Treaty of Pereyaslav), but then by a rejection of the Cossack elders' demands during the convocation sejm of 1632. The Cossacks wanted to participate in free elections as members of the Commonwealth and to have the religious rights of the "disuniate" Eastern Christians restored. The 1635 sejm voted instead for further restrictions and authorized the construction of the Dnieper Kodak Fortress, to facilitate more effective control over the Cossack territories. Another round of fighting, the Pavluk Uprising, followed in 1637-1638. It was defeated and its leader Pavel Mikhnovych executed. Upon new anti-Cossack limitations and sejm statutes imposing serfdom on most Cossacks, the Cossacks rose up again in 1638 under Jakiv Ostryanin and Dmytro Hunia. The uprising was cruelly suppressed and the existing Cossack land properties were taken over by the magnates.
The Commonwealth's struggles with the Cossacks were closely watched at Moscow's Kremlin, which from the late 1620s began regarding the Cossacks as a potent source of fundamental instability in its Polish-Lithuanian rival and neighbor. Russian efforts to destabilize the Polish Kingdom using the Cossacks in the 1630s were not yet successful, even though the Cossack elders themselves often raised the possibility of a union with the Tsardom to pressure Poland's ruling elites. The borderlands with Russia became a place of refuge for Cossacks persecuted after their failed uprisings; regiments of Russian-registered Cossacks, following the Commonwealth example, were eventually established there. The harsh measures restored relative calm for a decade, until 1648. Seen by the establishment as the "golden peace", for the Cossacks and peasants the period brought the worst oppression. During that time the private dukedoms of Ukrainian potentates, such as the families of Kalinowski, Daniłowicz and Wiśniowiecki, rapidly expanded, and the folwark-serfdom economy, only then (much later than in other parts of the Polish Crown) being introduced in Ukraine, caused hitherto unprecedented levels of exploitation. The Cossack affair, perceived as a weak spot of the Commonwealth, was increasingly becoming an issue in international politics. Władysław IV Vasa, son of Sigismund III, ruled the Commonwealth during 1632-1648. Born and raised in Poland, prepared for the office from his early years, popular, educated, and free of his father's religious prejudices, he seemed a promising chief executive candidate. Władysław, however, like his father, had the life ambition of attaining the Swedish throne by using his royal status and power in Poland and Lithuania, which, to serve his purpose, he attempted to strengthen. Władysław ruled with the help of several prominent magnates, among them Jerzy Ossoliński, Chancellor of the Crown, Hetman Stanisław Koniecpolski, and Jakub Sobieski, the middle szlachta leader. Władysław IV was unable to attract a wider szlachta following, and many of his plans foundered because of lack of support in the increasingly ineffectual sejm. Because of his tolerance for non-Catholics, Władysław was also opposed by the Catholic clergy and the papacy. Toward the last years of his reign Władysław IV sought to enhance his position and assure his son's succession by waging a war on the Ottoman Empire, for which he prepared despite the lack of nobility support. To secure this end the King worked on forming an alliance with the Cossacks, whom he encouraged to improve their military readiness and intended to use against the Turks, moving further in that direction of cooperation than his predecessors. The war never took place, and the King had to explain his offensive war designs during the "inquisition" sejm of 1646. Władysław's son Zygmunt Kazimierz died in 1647, and the King himself, weakened, resigned and disappointed, died in 1648. The turn of the 16th and 17th centuries brought changes that, for the time being, weakened the Commonwealth's powerful neighbors (the Tsardom of Russia, the Austrian Habsburg Monarchy and the Ottoman Empire).
The resulting opportunity for the Polish-Lithuanian state to improve its position depended on its ability to overcome internal distractions, such as the isolationist and pacifist tendencies that prevailed among the szlachta ruling class, or the rivalry between nobility leaders and elected kings, often intent on circumventing restrictions on their authority such as the Henrician Articles. The nearly continuous wars of the first three decades of the new century resulted in modernization, if not (because of treasury limitations) enlargement, of the Commonwealth's army. The total military forces available ranged from a few thousand at the Battle of Kircholm to over fifty thousand, plus the pospolite ruszenie, mobilized for the Khotyn (Chocim) campaign of 1621. The remarkable development of artillery during the first half of the 17th century resulted in the 1650 publication in Amsterdam of Artis Magnae Artilleriae pars prima by Kazimierz Siemienowicz, a pioneer also in the science of rocketry. Despite the superior quality of the Commonwealth's heavy (hussar) and light (Cossack) cavalry, the increasing proportions of the infantry (peasant, mercenary and Cossack formations) and of the contingent of foreign troops resulted in an army in which these respective components were heavily represented. During the reigns of the first two Vasas a war fleet was developed and fought successful naval battles (1609 against Sweden). As usual, fiscal difficulties impaired the effectiveness of the military and the treasury's ability to pay the soldiers. As a continuation of the earlier plans for an anti-Turkish offensive, which had not materialized because of the death of Stefan Batory, Jan Zamoyski intervened in Moldavia in 1595. With the backing of the Commonwealth army, Ieremia Movilă assumed the hospodar's throne as the Commonwealth's vassal. Zamoyski's army repelled the subsequent assault by the Ottoman Empire forces at Țuțora. The next confrontation in the area took place in 1600, when Zamoyski and Stanisław Żółkiewski acted against Michael the Brave, hospodar of Wallachia and Transylvania. First Ieremia Movilă, who in the meantime had been removed by Michael in Moldavia, was reimposed, and then Michael was defeated in Wallachia at the Battle of Bucov. Ieremia's brother Simion Movilă became the new hospodar there, and for a brief period the entire region up to the Danube became the Commonwealth's dependency. Turkey soon reasserted its role, in 1601 in Wallachia and in 1606 in Transylvania. Zamoyski's politics and actions, which constituted the earlier stage of the Moldavian magnate wars, only prolonged Poland's influence in Moldavia and interfered effectively with the simultaneous Habsburg plans and ambitions in this part of Europe. Further military involvement at the southern frontiers ceased being feasible, as the forces were needed more urgently in the north. Sigismund III's crowning in Sweden took place in 1594 amid tensions and instability caused by religious controversies. As Sigismund returned to Poland, his uncle Charles, the regent, took the lead of the anti-Sigismund Swedish opposition. In 1598 Sigismund attempted to resolve the matter militarily, but the expedition to the country of his origin was defeated at the Battle of Linköping; Sigismund was taken prisoner and had to agree to the harsh conditions imposed. After his return to Poland, in 1599 the Riksdag of the Estates deposed him in Sweden, and Charles led the Swedish forces into Estonia.
Sigismund in 1600 proclaimed the incorporation of Estonia into the Commonwealth, which was tantamount to a declaration of war on Sweden, at the height of Rzeczpospolita's involvement in the Moldavia region. Jürgen von Farensbach, given the command of the Commonwealth forces, was overpowered by the much larger army brought to the area by Charles, whose quick offensive resulted in the 1600 take-over of most of Livonia up to the Daugava River, except for Riga. The Swedes were welcomed by much of the local population, by that time increasingly dissatisfied with the Polish-Lithuanian rule. In 1601 Krzysztof Radziwiłł succeeded at the Battle of Kokenhausen, but the Swedish advances were reversed, up to (but not including) Reval, only after Jan Zamoyski brought in a more substantial force. Much of this army, having been unpaid, returned to Poland. The clearing action was continued by Jan Karol Chodkiewicz, who, with the small contingent of troops left to him, defeated the Swedish incursion at Paide (Biały Kamień) in 1604. In 1605 Charles, now Charles IX, King of Sweden, launched a new offensive, but his efforts were thwarted by Chodkiewicz's victories at Kircholm and elsewhere and by the Polish naval successes, and the war continued without a decisive resolution being produced. In the armistice of 1611 the Commonwealth was able to keep the majority of the contested areas, as a variety of internal and foreign difficulties, including the inability to pay the mercenary soldiers and the Union's new involvement in Russia, precluded a comprehensive victory. After the deaths of Ivan IV and, in 1598, of his son Feodor, the last tsars of the Rurik Dynasty, Russia entered a period of severe dynastic, economic and social crisis and instability. As Boris Godunov encountered resistance from both the peasant masses and the boyar opposition, in the Commonwealth the ideas of turning Russia into a subordinated ally, either through a union or through the imposition of a ruler dependent on the Polish-Lithuanian establishment, were rapidly coming into play. In 1600 Lew Sapieha led a Commonwealth mission to Moscow to propose a union with the Russian state, patterned after the Polish-Lithuanian Union, with the boyars granted rights comparable to those of the Commonwealth's nobility. A decision on a single monarch was to be postponed until the death of the current king or tsar. Boris Godunov, at that time also engaged in negotiations with Charles of Sweden, was not interested in that close a relationship, and only a twenty-year truce was agreed upon in 1602. In order to continue their efforts, the magnates took advantage of the earlier death of Tsarevich Dmitry (1591) under mysterious circumstances and of the appearance of False Dmitriy I, a pretender-impostor claiming to be the tsarevich. False Dmitriy was able to secure the cooperation and help of the Wiśniowiecki family and of Jerzy Mniszech, Voivode of Sandomierz, to whom he promised vast Russian estates and a marriage with the voivode's daughter Marina. Dmitriy became a Catholic and, leading an army of adventurers raised in the Commonwealth with the tacit support of Sigismund III, entered the Russian state in 1604. After the death of Boris Godunov and the murder of his son Feodor, False Dmitriy I became the Tsar of Russia, and remained in that capacity until killed during a popular turmoil in 1606, which also eliminated the Polish presence in Moscow. Russia under the new tsar Vasili Shuysky remained unstable.
A new false Dmitriy materialized, and Tsaritsa Marina even "recognized" in him her thought-to-be-dead husband. With a new army provided largely by the magnates of the Commonwealth, False Dmitriy II approached Moscow and made futile attempts to take the city. Tsar Vasili IV, seeking help from King Charles IX of Sweden, agreed to territorial concessions in Sweden's favor, and in 1609 the Russo-Swedish anti-Dmitriy and anti-Commonwealth alliance was able to remove the threat from Moscow and strengthen Vasili. The alliance and the Swedish involvement in Russian affairs caused a direct military intervention on the part of the Polish-Lithuanian Commonwealth, instigated and led by King Sigismund III with the support of the Roman Curia. The Polish army commenced a siege of Smolensk, and the Russo-Swedish relief expedition was defeated in 1610 by Hetman Żółkiewski at the Battle of Klushino. The victory strengthened the position of the compromise-oriented faction of Russian boyars, which had already been interested in offering the Moscow throne to Władysław Vasa, son of Sigismund III. Fyodor Nikitich Romanov, the Patriarch of Moscow, was one of the leaders of the boyars. Under arrangements negotiated by Żółkiewski, the boyars deposed Tsar Vasili and accepted Władysław in return for peace, no annexation of Russia into the Commonwealth, the Prince's conversion to the Orthodox religion, and privileges, including exclusive rights to high offices in the Tsardom, granted to the Russian nobility. After the agreement was signed and Władysław declared tsar, the Commonwealth forces entered the Kremlin (1610). Sigismund III subsequently rejected the compromise solution and demanded the tsar's throne for himself, which would have meant complete subjugation of Russia, and as such was rejected by the bulk of Russian society. Sigismund's refusal and demands only intensified the chaos, as the Swedes proposed their own candidate and took over Veliky Novgorod. The result of this situation and of the ruthless Commonwealth occupation in Moscow and elsewhere in Russia was the popular Russian anti-Polish uprising of 1611, heavy fighting in Moscow and a siege of the Polish garrison occupying the Kremlin.[c] In the meantime, the Commonwealth forces, after a long siege, stormed and took Smolensk in 1611. At the Kremlin the situation of the Poles worsened despite occasional reinforcements, and the massive national and religious uprising was spreading all over Russia. Prince Dmitry Pozharsky and Kuzma Minin effectively led the Russians; a new rescue operation attempted by Hetman Chodkiewicz failed, and a capitulation of the Polish and Lithuanian forces at the Kremlin terminated their involvement there in 1612. Mikhail Romanov, son of Patriarch Filaret (imprisoned in Poland since his rejection of Sigismund III's demand for the Russian throne), became the new tsar in 1613. The war effort, debilitated by a rebellious confederation established by the unpaid military, was continued. Turkey, threatened by the Polish territorial gains, became involved at the frontiers, and a peace between Russia and Sweden was agreed to in 1617. Fearing the new alliance, the Commonwealth undertook one more major expedition, which took over Vyazma and arrived at the walls of Moscow, in an attempt to impose the rule of Władysław Vasa again. The city would not open its gates, and not enough military strength had been brought in to attempt a forced take-over.
Despite the disappointment, the Commonwealth was able to take advantage of the Russian weakness and, through the territorial advances accomplished, to reverse the eastern losses suffered in the earlier decades. In the Truce of Deulino of 1619 the Rzeczpospolita was granted the Smolensk, Chernihiv and Novhorod-Siverskyi regions. The Polish-Lithuanian Commonwealth attained its greatest geographic extent, but the attempted union with Russia could not be achieved, as the systemic, cultural and religious incompatibilities between the two empires proved insurmountable. The territorial annexations and the ruthlessly conducted wars left a legacy of injustice suffered and a desire for revenge on the part of the Russian ruling classes and people. The huge military effort weakened the Commonwealth, and the painful consequences of the adventurous policies of the Vasa court and its allied magnates were soon to be felt. In 1613 Sigismund III Vasa reached an understanding with Matthias, Holy Roman Emperor, based on which both sides agreed to cooperate and mutually provide assistance in suppressing internal rebellions. The pact neutralized the Habsburg Monarchy in regard to the Commonwealth's war with Russia, but resulted in more serious consequences after the Bohemian Revolt gave rise to the Thirty Years' War in 1618. The Czech events weakened the position of the Habsburgs in Silesia, where there were large concentrations of ethnically Polish inhabitants, whose ties and interests at that time placed them within the Protestant camp. Numerous Polish Lutheran parishes, with schools and centers of cultural activity, had been established in the heavily Polish areas around Opole and Cieszyn in eastern Silesia, as well as in numerous cities and towns throughout the region and beyond, including Breslau (Wrocław) and Grünberg (Zielona Góra). The threat posed by a potentially resurgent Habsburg monarchy to the situation of the Polish Silesians was keenly felt, and there were voices within King Sigismund's circle, including Stanisław Łubieński and Jerzy Zbaraski, who brought to his attention Poland's historic rights and options in the area. The King, an ardent Catholic, advised by many not to involve the Commonwealth on the Catholic-Habsburg side, decided in the end to act in their support, but unofficially. The ten-thousand-strong Lisowczycy mercenary division, a highly effective military force, had just returned from the Moscow campaign and, having become a major nuisance for the szlachta, was available for another assignment abroad; Sigismund sent them south to assist Emperor Ferdinand II. The Sigismund court's intervention greatly influenced the first phase of the war, helping save the position of the Habsburg Monarchy at a critical moment. The Lisowczycy entered northern Hungary (now Slovakia) and in 1619 defeated the Transylvanian forces at the Battle of Humenné. Prince Bethlen Gábor of Transylvania, who together with the Czechs had laid siege to Vienna, had to hurry back to his country and make peace with Ferdinand, which seriously compromised the situation of the Czech insurgents, crushed in the course and in the aftermath of the Battle of White Mountain. Afterwards the Lisowczycy ruthlessly fought to suppress the Emperor's opponents in the Glatz (Kłodzko) region and elsewhere in Silesia, in Bohemia and in Germany.
After the breakdown of the Bohemian Revolt the residents of Silesia, including the Polish gentry in Upper Silesia, were subjected to severe repressions and Counter-Reformational activities, including forced expulsions of thousands of Silesians, many of whom ended up in Poland. Later during the war years the province was repeatedly ravaged in the course of military campaigns crossing its territory, and at one point a Protestant leader, the Piast Duke John Christian of Brieg, appealed to Władysław IV Vasa to assume supremacy over Silesia. King Władysław, although a tolerant ruler, including in matters of religion, was like his father disinclined to involve the Commonwealth in the Thirty Years' War. He ended up receiving as fiefs from the Emperor the duchies of Opole and Racibórz in 1646; they were reclaimed by the Empire twenty years later. The Peace of Westphalia allowed the Habsburgs to do as they pleased in Silesia, already completely ruined by the war, which resulted in intense persecution of Protestants, including the Polish Lower Silesia communities, forced to emigrate or subjected to Germanization. Although the Rzeczpospolita had not formally participated directly in the Thirty Years' War, the alliance with the Habsburg Monarchy contributed to getting Poland involved in new wars with the Ottoman Empire, Sweden and Russia, and thereby exerted a significant influence over the course of the Thirty Years' War. The Polish-Lithuanian Commonwealth also had its own intrinsic reasons for the continuation of struggles with the above powers. From the 16th century the Commonwealth suffered a series of Tatar invasions. In the 16th century Cossack raids began descending on the Turkish settlements and Tatar lands of the Black Sea area. In retaliation the Ottoman Empire directed its vassal Tatar forces, based in the Crimea or Budjak areas, against the Commonwealth regions of Podolia and Red Ruthenia. The borderland area to the south-east was in a state of semi-permanent warfare until the 18th century. Some researchers estimate that altogether more than 3 million people were captured and enslaved during the time of the Crimean Khanate. The greatest intensity of Cossack raids, reaching as far as Sinop in Turkey, fell on the 1613-1620 period. The Ukrainian magnates for their part continued their traditional involvement in Moldavia, where they kept trying to install their relatives of the Movilești family on the hospodar's throne (Stefan Potocki in 1607 and 1612, Samuel Korecki and Michał Wiśniowiecki in 1615). The Ottoman chief Iskender Pasha destroyed the magnate forces in Moldavia and compelled Stanisław Żółkiewski in 1617 to consent to the Treaty of Busza at Poland's border, in which the Commonwealth obliged itself not to get involved in matters concerning Wallachia and Transylvania. Turkish unease about Poland's influence in Russia, the consequences of the Lisowczycy expedition against Transylvania (an Ottoman fief) in 1619, and the burning of Varna by the Cossacks in 1620 caused the Empire under the young Sultan Osman II to declare a war against the Commonwealth, with the aim of breaking and conquering the Polish-Lithuanian state. The actual hostilities, which were to bring the demise of Stanisław Żółkiewski, were initiated by the old Polish hetman. Żółkiewski with Koniecpolski and a rather small force entered Moldavia, hoping for military reinforcements from Moldavian Hospodar Gaspar Graziani and the Cossacks.
The aid did not materialize, and the hetmans faced a superior Turkish and Tatar force led by Iskender Pasha. In the aftermath of the failed Battle of Țuțora (1620) Żółkiewski was killed, Koniecpolski captured, and the Commonwealth left open and defenseless, but disagreements between the Turkish and Tatar commanders prevented the Ottoman army from immediately waging an effective follow-up. The Sejm was convened in Warsaw; the royal court was blamed for endangering the country, but high taxes for a sixty-thousand-man army were agreed to, and the number of registered Cossacks was allowed to reach forty thousand. The Commonwealth forces, led by Jan Karol Chodkiewicz, were helped by Petro Konashevych-Sahaidachny and his Cossacks, who rose against the Turks and Tatars and participated in the upcoming campaign. In practice about 30,000 regular army and 25,000 Cossacks faced a much larger Ottoman force under Osman II at Khotyn. Fierce Turkish attacks against the fortified Commonwealth positions lasted throughout September 1621 and were repelled. The exhaustion and depletion of its forces made the Ottoman Empire sign the Treaty of Khotyn, which kept the old territorial status quo of Sigismund II's time (the Dniester River border between the Commonwealth and the Ottoman combatants), an outcome favorable to the Polish side. After Osman II was killed in a coup, ratification of the treaty was obtained from his successor Mustafa I. In response to further Cossack attacks, Tatar incursions continued as well, in 1623 and 1624 reaching almost as far west as the Vistula, with the attendant plunder and taking of captives. A more effective defense was put together by the freed Koniecpolski and Stefan Chmielecki, who defeated the Tatars on several occasions between 1624 and 1633, using the quarter army supported by the Cossacks and the general population. More warfare with the Ottomans took place in 1633-1634 and ended with a peace treaty. In 1644 Koniecpolski defeated Tugay Bey's army at Okhmativ and before his death planned an invasion of the Crimean Khanate. King Władysław IV's ideas of a grand international war-crusade against the Ottoman Empire were thwarted by the inquisition sejm in 1646. The state's inability to control the activities of the magnates and the Cossacks contributed to the semi-permanent instability and danger at the Commonwealth's south-eastern frontiers. A more acute threat to the Polish-Lithuanian state came from Sweden. The balance of power in the north had shifted in Sweden's favor, as the Baltic neighbor was led by King Gustavus Adolphus, a highly able and aggressive military leader, who greatly improved the effectiveness of the Swedish armed forces while also taking advantage of Protestant zealotry. The Commonwealth, exhausted by the wars with Russia and the Ottoman Empire and lacking allies, was poorly prepared to face this new challenge. Continuous diplomatic maneuvering by Sigismund III made the whole situation look to szlachta like another stage in the King's Swedish dynastic affairs; in reality the Swedish power had resolved to take hold of the entire Polish-controlled Baltic coast, and thereby to profit from control of the Commonwealth's maritime trade intermediation, endangering the basis of its independent existence. Gustavus Adolphus chose to attack Riga, the Grand Duchy's foremost trade center, in late August 1621, just as the Ottoman army was approaching Khotyn, tying up the Polish forces there. The city, stormed several times, had to surrender a month later.
Moving inland to the south, the Swedes next entered Courland. With Riga the Commonwealth lost the most important Baltic seaport in the region and an entry to northern Livonia, the Daugava River crossing. The 1622 Truce of Mitawa gave Poland possession of Courland and eastern Livonia, but the Swedes were to take over most of Livonia north of the Daugava. The Lithuanian forces were able to keep Dyneburg, but suffered a heavy defeat at the Battle of Wallhof. The losses severely impacted the trade and customs income of the Grand Duchy of Lithuania. The Crown lands were also to be affected, as in July 1626 the Swedes took Pillau and forced Duke George William, Elector of Brandenburg and vassal of the Commonwealth in the attacked Ducal Prussia, to assume a stance of neutrality. The Swedish advance resulted in the take-over of the Baltic coastline up to Puck. Danzig (Gdańsk), which had remained loyal to the Commonwealth, was subjected to a naval blockade. The Poles, completely surprised by the Swedish invasion, attempted a counter-offensive in September, but were defeated by Gustavus Adolphus at the Battle of Gniew. The forces required serious modernization. The Sejm passed high taxation for the defense, but collections lagged behind. The situation was partially saved by the City of Danzig, which hurriedly embarked on the construction of modern fortifications, and by Hetman Stanisław Koniecpolski. The accomplished commander of the eastern borderlands fighting quickly learned maritime affairs and the contemporary methods of European warfare. Koniecpolski promoted the necessary enlargement of the naval fleet and the modernization of the army, and became a fitting counterbalance to the military abilities of Gustavus Adolphus. Koniecpolski led a spring 1627 military campaign, trying to keep the Swedish army in the Duchy of Prussia from moving toward Danzig, while also intending to block their reinforcements arriving from the Holy Roman Empire. Moving quickly, the Hetman recovered Puck, and then destroyed at the Battle of Czarne (Hammerstein) the forces intended for Gustavus. Koniecpolski's forces kept the Swedes themselves near Tczew, shielding the access to Danzig and preventing Gustavus Adolphus from reaching his main objective. At the Battle of Oliva the Polish ships defeated a Swedish naval squadron. Danzig was saved, but the next year the Swedish army, strengthened in Ducal Prussia, took Brodnica, and early in 1629 defeated the Polish units at Górzno. Gustavus Adolphus from his Baltic coast position laid an economic siege against the Commonwealth and ravaged what he had conquered. At this point allied forces under Albrecht von Wallenstein were brought in to help keep the Swedes in check. Forced by the combined Polish-Austrian action, Gustavus had to withdraw from Kwidzyn to Malbork, in the process being defeated and almost taken prisoner by Koniecpolski at the Battle of Trzciana. But in addition to being militarily exhausted, the Commonwealth was now pressured by several European diplomacies to suspend further military activities, to allow Gustavus Adolphus to intervene in the Holy Roman Empire. The Truce of Altmark left Livonia north of the Daugava and all the Prussian and Livonian seaports except for Danzig, Puck, Königsberg, and Libau in the hands of the Swedes, who were also allowed to charge duty on the trade through Danzig. As Władysław IV was assuming the Commonwealth crown, Gustavus Adolphus, who had been working on organizing an anti-Polish coalition including Sweden, Russia, Transylvania and Turkey, died.
The Russians then undertook an action of their own, attempting to recover the lands lost in the Truce of Deulino. In the fall of 1632 a well-prepared Russian army took a number of strongholds on the Lithuanian side of the border and commenced a siege of Smolensk. The well-fortified city was able to withstand a general onslaught followed by a ten-month encirclement by an overwhelming force led by Mikhail Shein. At that time a Commonwealth rescue expedition of comparable strength arrived, under the highly effective military command of Władysław IV. After months of fierce fighting, in February 1634 Shein capitulated. The Treaty of Polyanovka confirmed the Deulino territorial arrangements with small adjustments in favor of the Tsardom. Władysław relinquished, in exchange for monetary compensation, his claims to the Russian throne. Having secured the eastern front, the King was able to concentrate on the recovery of the Baltic areas lost by his father to Sweden. Władysław IV wanted to take advantage of the Swedish defeat at Nördlingen and fight for both the territories and his Swedish dynastic claims. The Poles were suspicious of his designs and war preparations, and the King was able to proceed only with negotiations, in which his unwillingness to give up the dynastic claim weakened the Commonwealth's position. According to the Treaty of Stuhmsdorf of 1635 the Swedes evacuated Royal Prussia's cities and ports, which meant a return of the Crown's lower Vistula possessions, and stopped collecting customs duties there. Sweden retained most of Livonia, while the Rzeczpospolita kept Courland, which, having assumed the servicing of Lithuania's Baltic trade, entered a period of prosperity. The position of the Commonwealth with respect to the Duchy of Prussia kept getting weaker, as the power in the Duchy was being taken over by the Electors of Brandenburg. Under the electors, the Duchy became ever more closely linked to Brandenburg, which was harmful to the political interests of the Commonwealth. Sigismund III left the Duchy's administration in the hands of Joachim Frederick, and then John Sigismund, who in 1611 acquired the right to Hohenzollern succession in the Duchy by the consent of the King and the Sejm. He actually became the Duke of Prussia in 1618, after the death of Albert Frederick, and was followed by George William and then Frederick William, who in 1641 in Warsaw for the last time paid Prussian homage to a Polish king. The successive Brandenburg dukes would make nominal concessions, to satisfy the Commonwealth's expediencies and justify the granting of privileges, but an irreversible shift in relations was taking place. In 1637 Bogislaw XIV, Duke of Pomerania, died; he was the last of the Slavic Griffin dynasty of the Duchy of Pomerania. Sweden acquired the Pomeranian rule, while the Commonwealth was only able to get back its fiefs, the Bytów and Lębork Lands. The Słupsk Land was also sought by Władysław IV at the peace conference, but it ended up a part of Brandenburg, which after the Peace of Westphalia controlled all of Pomerania adjacent to the border of the Commonwealth, extending south to where it met the Habsburg lands. Portions of Pomerania were populated by the Slavic Kashubians and Slovincians. The Thirty Years' War period brought the Commonwealth a mixed legacy, rather more losses than gains, with the Polish-Lithuanian state retaining its status as one of the few great powers in central-eastern Europe.
From 1635 the country enjoyed a period of peace, during which internal bickering and progressively dysfunctional legislative processes prevented any substantial reforms from taking place. The Commonwealth was unprepared to deal with the grave challenges that materialized in the middle of the century.

a.^ Historian Daniel Beauvois dismisses the notion of the democracy of nobles in the Commonwealth as having no basis in reality. He sees an oligarchy of propertied upper nobility, who discriminated against and took advantage of everybody else, including the vast majority of the nobility class (szlachta).

b.^ According to Daniel Beauvois, the Union of Brest, established to get rid of Eastern Orthodoxy on Polish-ruled lands, was a tool for the oppression of the Ruthenian population and the root cause of the Ruthenian (Ukrainian) enmity toward the Poles, which has since continued throughout history.

c.^ Contemporary accounts report widespread killing, acts of cruelty and abuse committed by the forces of the Polish-Lithuanian Commonwealth in Russia. Atrocities were commonly practiced by both sides, but the military offensives were undertaken by the Poles, who dealt with the local civilian population. Aleksander Gosiewski, the first commandant of the Polish garrison at the Kremlin in 1610, vainly tried to curb his subordinates' misbehavior by imposing harsh penalties on them. Hetman Stanisław Żółkiewski wrote of a great slaughter in Moscow, "as on the Day of Judgement", clearly sympathizing with the untold loss and the plight of the extensive, prosperous and affluent Russian capital, burning and wasting in an enormous bloodshed. Gosiewski ordered the use of fire to expel the Russian opponents; the fires caused the death of 6,000-7,000 people in Moscow. Gosiewski ordered the deposed Tsar Shuysky and his brothers to be deported to Poland, and had Hermogenes imprisoned after the patriarch (successfully) called for a rising against the Poles and their supporters.
Students will be able to:
- interpret and make inferences about fluctuations in mussel populations from actual data
- analyze the effects of human use and habitat changes on a mussel population
- analyze the effects of price fluctuations on mussel harvests

Students graph and interpret actual Mississippi River mussel harvest data in relation to historical river events or price changes. Data gathered about a wildlife population in a similar manner over a period of time may be useful in detecting trends in that population. The same data may be interpreted by those analyzing it in a variety of ways. Because a mussel population is influenced by many factors, it may be difficult to measure the effect of a single factor. Thus, assumptions must often be made that factors other than the ones being measured are not significantly affecting the population. When measuring populations of mussels, biologists are seldom able to get a total count. Ideally, biologists would like to have a total count of mussel populations for the period of time they are interested in. However, usually only a sample of the population can be obtained, and inferences about the total population must be made from this sample. Errors or inconsistencies in gathering the data over time may greatly influence the accuracy of the data. Despite the influence of unknown factors and possible inconsistencies in data gathering, regularly conducted counts or inventories of a population may still be the best information available, and decisions must be made from this data. Unfortunately, there were very few surveys of mussel populations in the past, and sampling mussels is a very expensive and time-consuming endeavor. Therefore, it is often necessary for biologists to rely on other types of data to analyze trends in mussel populations. Because of the economic importance of mussels, records of tons harvested as well as the price paid have been kept.

Materials:
- graph paper (or prepared graphs)
- mussel harvest data (Table 1)
- price data (Table 2)

Note: Examples of graphs are provided. However, you may choose to have students make their own graphs.

Procedure:
- Provide students with the mussel harvest data only. Have them graph the data from 1894 to 1986. Students should put a legend on their graph. You may want to make an overhead or copy of the graph provided (Figure 1) for students to check against and for class discussion. A bar chart is more appropriate because the data is not continuous.
- When they have completed graphing their mussel harvest data, have them divide into groups and give each group a copy of the time line.
- Have each group look for correlations between historic events and changes in harvest.

Comparison of historic harvest and price per ton (students may use the mussel harvest graph from the previous activity):
- Hand out the price per ton data (Table 2).
- Next they need to add a second Y axis for price per ton, or they may draw a second graph using the data on this page to compare against the harvest data.
- Have them look for correlations between price and harvest.

Questions:
- In what year were the greatest tons of mussels harvested?
- Look at the time line of historic events. What were most of the shells used for during the year of greatest mussel harvest? Pearl buttons.
- What happened between 1900 and 1910 that may partially explain the drastic decline of mussels harvested? The six-foot channel project (dredging changed habitat and covered up mussel beds; construction of additional closing dams closed off flow into side channels).
- What two decades had the lowest harvest? The 1940s and 1950s.
- What explanations are there for these low harvests?
The main reasons were the invention of plastic and its use in button making, and the construction of the locks and dams that dramatically changed habitats on the River. However, pollution and its effects on mussels also contributed to lower mussel populations.
- Why did the harvest of mussels increase in the 1960s? Development of a market for mussel shells to be used as nuclei (seeds) for cultured pearls.
- What harvest technique resulted in a greater harvest in 1966? Scuba diving.

The color and strength of the shell were important to the button industry. Current clammers choose species based on the thickness of the shells. Do some background research to determine which species of mussels have the thickest shells and see if they are harvested for use in the cultured pearl industry.

Mussel Market Mystery - Part 2

Students graph actual Mississippi River mussel harvest data from 1986-1997. Students interpret relationships between harvest levels and price per pound. The market for freshwater mussel shells works like any other business: supply and demand are interdependent. For example, if the price for mussel shells (the product) is high, then the demand for the shells is great and many people are harvesting them. However, if the number of shells harvested (the supply) is great, then the price of shell usually drops. Because of the demand for freshwater mussels, a size limit was placed on commercially harvested freshwater mussels, based on the management concept of sustainable yield. Sustainable yield management of mussel populations is based on the assumption that mussel size limits are sufficient to protect enough adults to reproduce in numbers equal to what is being harvested. The data provided in this activity were examined by biologists as one method of monitoring the populations of washboard and threeridge mussels to determine whether over-harvesting of their populations was occurring. What they saw was a decreasing harvest of washboards even though the price per pound continued to rise. Additionally, when they noticed that the tons of threeridge harvested continued to rise, they theorized that this was due to clammers switching from the more preferred washboard to the less sought-after threeridge because of a decline in washboard populations. This indicated to biologists that the size limit may have been set too low to protect the washboard population. A detailed biological survey conducted in the mid-1990s verified the biologists' concern: the size limit was inadequate to maintain sustainable yield. Over-harvest was having a severe impact on the populations of washboard mussels along Wisconsin's portion of the Mississippi River (Figure 6). The decline was so severe in some areas that surveys were documenting more endangered Higgins' eye pearlymussels than the commercial washboard. The season for washboard mussels in Wisconsin's portion of the Mississippi was closed based on the market data presented in this activity and the biological survey.

Materials:
- graph paper (or prepared graphs)
- Table 3

Note: Examples of graphs are provided. However, you may choose to have students make their own graphs.

Using the information provided in Table 3, have the students do a comparison of harvest and price for two species commercially harvested from 1986-1997.
- In this activity students can make several different types of graphs.
- They may want to make two two-line graphs: one comparing the number of washboard vs. threeridge harvested (Figure 2) and one comparing the price/pound of threeridge vs.
washboard (Figure 3).
- Or they may want to draw two graphs with two Y axes each, comparing the price/pound and the number of mussels harvested. Students will end up with a graph showing the cost/pound vs. the amount harvested for each species (Figures 4 and 5).
- Have students compare how many of each species are harvested, and then compare those numbers to the price/pound.
- Use the background materials to discuss the concept of sustainable yield, the importance of size limits, and the survey conducted to document why the current regulations were not adequate.

Questions:
- When the harvest of washboard mussels was at its greatest, the price per pound was at its (lowest/highest)? Lowest.
- In general, price/pound increases when there is a (lower/higher) demand for mussels than supply. Higher.
- Why do biologists believe the harvest of washboard mussels was decreasing from the late 1980s to the late 1990s even though the price per pound was getting higher? Washboard mussel populations were being over-harvested to the point that there were fewer and fewer mussels of legal size and fewer adults in the population.
- What may have caused a decline in the harvest of both species beginning in 1996? By 1996 many areas of the Mississippi River were heavily infested by zebra mussels, which attached themselves to native mussels and made clamming much more difficult. The clammers could not easily identify species underwater and had to spend a lot of time cleaning the zebra mussels off before taking the shells.
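For classes working on computers, the dual-axis comparison described above can also be built in software. The following is a minimal sketch using Python's matplotlib; the year, harvest, and price values shown are hypothetical placeholders, and the real numbers should come from Table 3.

```python
import matplotlib.pyplot as plt

# Hypothetical washboard data; substitute the actual values from Table 3.
years = [1986, 1988, 1990, 1992, 1994, 1996]
tons_harvested = [400, 350, 300, 240, 180, 120]          # harvest (tons)
price_per_pound = [0.30, 0.45, 0.60, 0.90, 1.20, 1.40]   # price ($/lb)

fig, ax1 = plt.subplots()
ax1.bar(years, tons_harvested, color="tan", label="Harvest (tons)")
ax1.set_xlabel("Year")
ax1.set_ylabel("Harvest (tons)")

# Second Y axis for price, sharing the same X axis.
ax2 = ax1.twinx()
ax2.plot(years, price_per_pound, color="darkred", marker="o",
         label="Price ($/lb)")
ax2.set_ylabel("Price per pound ($)")

plt.title("Washboard mussel harvest vs. price (hypothetical data)")
plt.show()
```

Plotting harvest as bars and price as a line on a second Y axis makes the inverse relationship between supply and price easy for students to spot.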
If two sides of a triangle abc be equal to each other, that is, ac = cb, the angles which are opposite to those equal sides will also be equal to each other; viz. a = b. For let the triangle abc be divided into two triangles acd, dcb, by making the angle acd = dcb (by postulate 4); then because ac = bc, and cd common, (by the last) the triangle adc = dcb: and therefore the angle a = b. Q. E. D. Cor. Hence if from any point in a perpendicular which bisects a given line, there be drawn right lines to the extremities of the given one, they with it will form an isosceles triangle. PL. 1. fig. 25. The angle BCD at the centre of a circle ABED is double the angle BAD at the circumference, standing upon the same arc BED. Through the point A, and the centre C, draw the line ACE: then the angle ECD = CAD + CDA (by theo. 4.); but since AC = CD, being radii of the same circle, it is plain (by the preceding lemma) that the angles subtended by them will be also equal, and that their sum is double to either of them; that is, DAC + ADC is double to CAD, and therefore ECD is double to CAD. After the same manner BCE is double to CAB; wherefore BCE + ECD, or BCD, is double to CAB + CAD, or to BAD. Q. E. D. Cor. 1. Hence an angle at the circumference is measured by half the arc it subtends or stands on. Cor. 2. Hence all angles at the circumference of a circle which stand on the same chord AB are equal to each other, for they are all measured by half the arc they stand on, viz. by half the arc AB. Cor. 3. Hence an angle in a segment greater than a semicircle is less than a right angle; thus ADB is measured by half the arc AB, but as the arc AB is less than a semicircle, therefore half the arc AB, or the angle ADB, is less than half a semicircle, and consequently less than a right angle. Cor. 4. An angle in a segment less than a semicircle is greater than a right angle; for since the arc AEC is greater than a semicircle, its half, which is the measure of the angle ABC, must be greater than half a semicircle, that is, greater than a right angle. Cor. 5. An angle in a semicircle is a right angle, for the measure of the angle ABD is half of a semicircle AED, and therefore a right angle. If from the centre C of a circle ABE there be let fall the perpendicular CD on the chord AB, it will bisect it in the point D. Let the lines AC and CB be drawn from the centre to the extremities of the chord; then since CA = CB, the angles CAB = CBA (by the lemma). But the triangles ADC, BDC are right-angled ones, since the line CD is a perpendicular; and so the angle ACD = DCB (by cor. 2. theo. 5.); then have we AC, CD, and the angle ACD in one triangle severally equal to CB, CD, and the angle BCD in the other: therefore (by theo. 6.) AD = DB. Q. E. D. Cor. Hence it follows that any line bisecting a chord at right angles is a diameter; for a line drawn from the centre perpendicular to a chord bisects that chord at right angles; therefore, conversely, a line bisecting a chord at right angles must pass through the centre, and consequently be a diameter. If from the centre of a circle ABE there be drawn a perpendicular CD on the chord AB, and produced till it meets the circle in F, that line CF will bisect the arc AB in the point F. Let the lines AF and BF be drawn; then in the triangles ADF, BDF: AD = BD (by the last); DF is common, and the angle ADF = BDF, being both right, for CD or CF is a perpendicular. Therefore (by theo. 6.)
AF = FB; but in the same circle, equal lines are chords of equal arcs, since they measure them (by def. 19.): whence the arc AF = FB, and so AFB is bisected in F by the line CF. Cor. Hence the sine of an arc is half the chord of twice that arc. For AD is the sine of the arc AF (by def. 22.), AF is half the arc, and AD half the chord AB (by theo. 8.); therefore the corollary is plain. In any triangle ABD, the half of each side is the sine of the opposite angle. Let the circle ADB be drawn through the points A, B, D; then the angle DAB is measured by half the arc BKD (by cor. 1. theo. 7.), viz. the arc BK is the measure of the angle BAD: therefore (by cor. to the last) BE, the half of BD, is the sine of BAD. The same way may be proved that half of AD is the sine of ABD, and the half of AB the sine of ADB. Q. E. D. If a right line GH cut two other right lines AB, CD, so as to make the alternate angles AEF, EFD equal to each other, then the lines AB and CD will be parallel. If it be denied that AB is parallel to CD, let IK be parallel to it; then IEF = EFD = AEF (by part 2. theo. 3.), a greater to a less, which is absurd; whence IK is not parallel, and the like we can prove of all other lines but AB; therefore AB is parallel to CD. Q. E. D. If two equal and parallel lines AB, CD be joined by two other lines AD, BC, those shall be also equal and parallel. Let the diameter or diagonal BD be drawn, and we will have the triangles ABD, CBD; whereof AB in one is = to CD in the other, BD common to both, and the angle ABD = CDB (by part 2. theo. 3.); therefore (by theo. 6.) AD = CB, and the angle CBD = ADB, and thence the lines AD and BC are parallel, by the preceding theorem. Cor. 1. Hence the quadrilateral figure ABCD is a parallelogram, and the diagonal BD bisects the same, inasmuch as the triangle ABD = BCD, as now proved. Cor. 2. Hence also the triangle ABD, on the same base AB and between the same parallels with the parallelogram ABCD, is half the parallelogram. Cor. 3. It is hence also plain that the opposite sides of a parallelogram are equal; for it has been proved that, ABCD being a parallelogram, AB will be = CD and AD = BC. All parallelograms on the same or equal bases and between the same parallels are equal to one another; that is, if BD = GH, and the lines BH, AF parallel, then the parallelogram ABDC = BDFE = EFGH. For AC = BD = EF (by cor. of the last); to both add CE, then AE = CF. In the triangles ABE, CDF: AB = CD, AE = CF, and the angle BAE = DCF (by part 3. theo. 3.); therefore the triangle ABE = CDF (by theo. 6.). Let the triangle CKE be taken from both, and we will have the trapezium ABKC = KDFE; to each of these add the triangle BKD, then the parallelogram ABDC = BDFE; in like manner we may prove the parallelogram EFGH = BDFE. Wherefore ABDC = BDFE = EFGH. Q. E. D. Cor. Hence it is plain that triangles on the same or equal bases, and between the same parallels, are equal, seeing (by cor. 2. theo. 12.) they are the halves of their respective parallelograms.
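In modern notation (a gloss, not part of the original text, and assuming the old convention that the sine is a line segment in a circle of unit radius), the chord corollary and the triangle theorem above read:

```latex
\sin\theta = \tfrac{1}{2}\,\mathrm{crd}(2\theta),
\qquad
\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2R .
```

Here crd denotes the chord subtending an arc; with R = 1 the second relation says exactly that half of each side is the sine of the opposite angle, which is the modern law of sines restricted to a unit circumscribed circle.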
A widely accepted theory of planet formation, the so-called planetesimal hypotheses (the Chamberlin–Moulton planetesimal hypothesis and that of Viktor Safronov), state that planets form out of cosmic dust grains that collide and stick to form larger and larger bodies. Once the bodies reach sizes of approximately one kilometer, they can attract each other directly through their mutual gravity, enormously aiding further growth into moon-sized protoplanets. This is how planetesimals are often defined. Bodies that are smaller than planetesimals must rely on Brownian motion or turbulent motions in the gas to cause the collisions that can lead to sticking. Alternatively, planetesimals may form in a very dense layer of dust grains that undergoes a collective gravitational instability in the mid-plane of a protoplanetary disk. Many planetesimals eventually break apart during violent collisions, as may have happened to 4 Vesta and 90 Antiope, but a few of the largest planetesimals may survive such encounters and continue to grow into protoplanets and later planets. It is generally believed that about 3.8 billion years ago, after a period known as the Late Heavy Bombardment, most of the planetesimals within the Solar System had either been ejected from the Solar System entirely or into distant eccentric orbits such as the Oort cloud, or had collided with larger objects because of the regular gravitational nudges from the giant planets (particularly Jupiter and Neptune). A few planetesimals may have been captured as moons, such as Phobos and Deimos (the moons of Mars) and many of the small high-inclination moons of the giant planets. Planetesimals that have survived to the current day are valuable to scientists because they contain information about the formation of the Solar System. Although their exteriors are subjected to intense solar radiation that can alter their chemistry, their interiors contain pristine material essentially untouched since the planetesimal was formed. This makes each planetesimal a 'time capsule', and their composition can tell us of the conditions in the Solar Nebula from which our planetary system was formed (see also meteorites and comets).

Definition of planetesimal

The word planetesimal comes from the mathematical concept infinitesimal and literally means an ultimately small fraction of a planet. While the name is always applied to small bodies during the process of planet formation, some scientists also use the term planetesimal as a general term to refer to many small Solar System bodies – such as asteroids and comets – which are left over from the formation process. A group of the world's leading planet formation experts agreed at a conference in 2006 on the following definition of a planetesimal: A planetesimal is a solid object arising during the accumulation of planets whose internal strength is dominated by self-gravity and whose orbital dynamics is not significantly affected by gas drag. This corresponds to objects larger than approximately 1 km in the solar nebula. Bodies large enough not only to hold together by gravitation but also to change the paths of approaching rocks over distances of several radii start to grow faster. These bodies, larger than about 100 to 1,000 km, are called embryos or protoplanets. In the current Solar System, these small bodies are usually also classified by dynamics and composition, and may have subsequently evolved to become comets, Kuiper belt objects or trojan asteroids, for example.
In other words, some planetesimals became other populations once planetary formation had finished, and may be referred to by either or both names. The above definition is not endorsed by the International Astronomical Union, and other working groups may choose to adopt the same or a different definition. There is also no exact dividing line between a planetesimal and a protoplanet.
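To see why the ~1 km threshold in the definition above is plausible, here is a rough order-of-magnitude sketch (not from the original article; the density value is an assumption) computing the surface escape velocity of a uniform rocky sphere:

```python
import math

# Order-of-magnitude sketch: surface escape velocity of a rocky body,
# illustrating why ~1 km is roughly where mutual gravity starts to matter.
G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
rho = 2000.0    # assumed bulk density of a rocky planetesimal (kg/m^3)

def escape_velocity(radius_m: float) -> float:
    """v_esc = sqrt(2GM/R) for a uniform sphere of density rho."""
    mass = (4.0 / 3.0) * math.pi * radius_m**3 * rho
    return math.sqrt(2.0 * G * mass / radius_m)

for r_km in (0.01, 0.1, 1.0, 10.0, 100.0):
    v = escape_velocity(r_km * 1000.0)
    print(f"R = {r_km:7.2f} km  ->  v_esc = {v:10.3f} m/s")
```

Below roughly a kilometer the escape velocity is only centimeters to about a meter per second, so gravity can barely retain anything; above that size, slowly approaching bodies can be captured and growth accelerates.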
A torn (perforated) eardrum is not usually serious and often heals on its own without any complications. Complications sometimes occur, however, such as hearing loss and infection in the middle ear. A small procedure to repair a perforated eardrum is an option if it does not heal by itself, especially if you have hearing loss. What is a perforated eardrum? A perforated eardrum is a hole or tear that has developed in the eardrum. It can affect hearing. The extent of hearing loss can vary greatly. For example, tiny perforations may only cause minimal loss of hearing. Larger perforations may affect hearing more severely. Also, if the tiny bones (ossicles) are damaged in addition to the eardrum, then the hearing loss would be much greater than with, say, a small perforation which is not close to the ossicles. With a perforation, you are at greater risk of developing an ear infection. This is because the eardrum normally acts as a barrier to bacteria and other germs that may get into the middle ear. What is the eardrum and how do we hear? The eardrum (also called the tympanic membrane) is a thin skin-like structure in the ear. It lies between the outer (external) ear and the middle ear. The ear is divided into three parts - the outer, middle and inner ear. Sound waves come into the outer ear and hit the eardrum, causing the eardrum to vibrate. Behind the eardrum are three tiny bones (ossicles). The vibrations pass from the eardrum to these middle ear bones. The bones then transmit the vibrations to the cochlea in the inner ear. The cochlea converts the vibrations to sound signals, which are sent down a nerve to the brain, where we 'hear' them. The middle ear behind the eardrum is normally filled with air. The middle ear is connected to the back of the nose by the Eustachian tube. This allows air in and out of the middle ear. Perforated eardrum symptoms There may be no symptoms, or there may be symptoms associated with the cause of the perforation - most often this is an infection. Possible symptoms include: - Changes in how you hear, which may range from slightly muffled hearing to significant loss. - Noises in your ear - buzzing or ringing (tinnitus). - Aching or pain in your ear. - Itching in your ear. - Fluid leaking from your ear. - A high temperature. If your perforated eardrum is caused by a middle ear infection, you may have earache which suddenly gets worse when the drum perforates but then quickly gets better. This is because the perforation allows pus to be released from behind the eardrum, relieving the pressure on it. The symptoms will usually pass once your eardrum has healed and any infection has been treated. Perforated eardrum causes - Infections of the middle ear, which can damage the eardrum. In this situation you often have a discharge from the ear as pus runs out from the middle ear. - Direct injury to the ear - for example, a punch to the ear. - A sudden loud noise - for example, from a nearby explosion. The shock waves and sudden sound waves can tear (perforate) the eardrum. This is often the most severe type of perforation and can lead to severe hearing loss and ringing in the ears (tinnitus). - Barotrauma. This occurs when you suddenly have a change in air pressure and there is a sharp difference between the pressure of the air outside the ear and in the middle ear. For example, when descending in an aircraft. Pain in the ear due to a tense eardrum is common during height (altitude) changes when flying. However, a perforated eardrum happens only rarely, in extreme cases.
See the separate leaflet called Barotrauma of the Ear for more details. - Poking objects into the ear. This can sometimes damage the eardrum. - Grommets. These are tiny tubes that are placed through the eardrum. They are used to treat glue ear, as they allow any mucus that is trapped in the middle ear to drain out from the ear. When a grommet falls out, there is a tiny gap left in the eardrum. This heals quickly in most cases. How is a perforated eardrum diagnosed? A doctor can usually diagnose a torn (perforated) eardrum simply by looking into the ear with a special torch called an otoscope. However, it is sometimes difficult to see the eardrum if there is a lot of inflammation, wax or infection present in the ear. Treatment for a perforated eardrum No treatment is needed in most cases. A torn (perforated) eardrum will usually heal by itself within 6-8 weeks. It is a skin-like structure and, like skin that is cut, it will usually heal. In some cases, a doctor may prescribe antibiotic medicines if there is an infection, or a risk of infection developing, in the middle ear whilst the eardrum is healing. It is best to avoid getting water in the ear whilst it is healing. For example, your doctor may advise that you put some cotton wool or similar material into your outer ear whilst showering or washing your hair. It is best not to swim until the eardrum has healed. Occasionally, a perforated eardrum becomes infected and needs antibiotics. Some ear drops can occasionally damage the nerve supply to the ear. Your doctor will select a type that does not carry this risk, or may give you medication by mouth. Surgical treatment is sometimes considered. A small operation is an option to treat a perforated eardrum that does not heal by itself. There are various techniques which may be used to repair the eardrum, depending on how severe the damage is. This operation may be called a myringoplasty or a tympanoplasty. These operations are usually successful in fixing the perforation and improving hearing. However, not all people with an unhealed perforation need treatment. Many people have a small permanent perforation with no symptoms or significant hearing loss. Treatment is mainly considered if there is hearing loss, as this may improve if the perforation is fixed. Also, swimmers may prefer to have a perforation repaired, as getting water in the middle ear can increase the risk of having an ear infection. If you have a perforation that has not healed by itself, a doctor who is an ear specialist will advise on whether treatment is necessary. Further reading and references Castro O, Perez-Carro AM, Ibarra I, et al; Myringoplasties in children: our results. Acta Otorrinolaringol Esp. 2013 Mar-Apr;64(2):87-91. doi: 10.1016/j.otorri.2012.06.012. Epub 2012 Dec 20. Kumar N, Madkikar NN, Kishve S, et al; Using middle ear risk index and ET function as parameters for predicting the outcome of tympanoplasty. Indian J Otolaryngol Head Neck Surg. 2012 Mar;64(1):13-6. doi: 10.1007/s12070-010-0115-4. Epub 2011 Feb 2. British National Formulary (BNF); NICE Evidence Services (UK access only). Venekamp RP, Prasad V, Hay AD; Are topical antibiotics an alternative to oral antibiotics for children with acute otitis media and ear discharge? BMJ. 2016 Feb 4;352:i308. doi: 10.1136/bmj.i308.
Binary Stars and Extrasolar Planets This learning activity utilizes text, imagery, and applet-simulations to introduce the concepts associated with binary star systems and the search for extrasolar planets (exoplanets for short). This is a rapidly developing field within astronomy, thanks to new technology allowing scientists to either directly image exoplanets or infer their presence via the gravitational pull they exert, the changes in visual magnitude they cause, and other methods. The activity is separated into three parts to tailor the experience to basic, advanced, and mathematical conceptual understanding. The basic level introduces the general ideas of what is occurring. The advanced level extends the conceptual experience to fully understanding the concepts necessary to apply mathematical analysis to either a binary star system or an exoplanet. The mathematical analysis introduces astrophysics equations in order to give a taste of how scientists analyze the data they collect to aid in the discovery of exoplanets. Lastly, if you still seek more, there is a way that you too can aid in the search for exoplanets without the need for a degree in the field or a large telescope! When you have completed this activity you should be able to, by level: Basic: Know terminology and have background-level knowledge of binary systems and exoplanets. Advanced: Know and understand select techniques pertaining to binary systems and how they can be applied to the search for exoplanets. Mathematical: Be able to use data to get practical information about either binary stars or exoplanets. Types of Binary Stars: - Optical Double: This category is better used to define what does not constitute a binary star. An optical double is not a binary star system; the stars merely appear close to each other from our vantage point and can be, and often are, very far apart. The definition of a binary star therefore requires that the stars be gravitationally bound, as the Earth and the other planets in our Solar System are bound to the Sun. - Visual Binary: This describes two gravitationally bound stars that are one of, or a combination of, the following: bright enough, far enough apart, and/or near enough to us to be seen separately by high-powered telescopes. Albireo, Mira, and Sirius are three examples of visual binaries with images displaying both stars. Note that more stars can also be present: Polaris (the North Star) is actually a ternary (three-star) system with visual verification. - Astrometric Binary: Only one star is visible through current telescopes, but the movement of that star under the gravitational pull of the other indicates the presence of its unseen companion. It is a system in which a visible star and a dimmer companion orbit a common centre of mass, detected by astrometric means. - Eclipsing Binary: An eclipsing binary is a star system oriented, from our vantage point, in such a way that one star passes in front of the other and later passes behind it. This is most notably recognized by a reduction in light when one star passes in front of the other, blocking some or all of the light from the star behind, whereas when both remain visible they show their entire light output. Algol (β Persei), also known as the Demon Star, was the first eclipsing binary discovered. - Spectroscopic Binary: A spectrum of light at rest produces wavelengths that remain at the same wavelength under all stationary conditions.
However, when the source is moving towards or away from the observer this spectrum shifts. This method uses the spectra received from stars to note shifts in the position of the bands. We are then able to know when a star is moving away from us (the spectrum shifts towards the red end) or towards us (towards the blue end). These are aptly termed redshift and blueshift. All of the above, with the exception of an optical double, can also be applied to exoplanet discovery, although the size, mass, and light emission of exoplanets make it considerably more difficult. Optical doubles are impossible for exoplanets, since the overwhelming majority of their light is reflected from the star they orbit. The Light Curve As shown, the light curve over the period (the length of the line) of orbit has two drops in luminosity. This would be the data generated by an eclipsing binary star system. The first drop is far greater, indicating that it is the passing of the colder, less luminous star in front of the hotter one. This means that, for every unit area, it is in effect blocking more light. It does not matter which star, colder or hotter, is larger. The second drop represents the hotter star passing in front of the cooler one. It is smaller because the light being blocked is that of the less luminous star, which for every unit area sends less light towards the observer than the hot one. The durations of the drops should be approximately the same (not perfectly reflected in this image), as the smaller star disappears behind the other at the same rate as it blocks light when in front. Also, the duration reflects the time spent behind or in front of the other star. The diagonal slopes in and out represent the partial concealment of the star, which is progressive over the time it takes to fully block the other star. Again, this can be applied to exoplanets, albeit with far more difficulty. Although the fact that a planet produces almost no light of its own helps, by dropping the luminosity to nearly nothing within the gap it makes, the gap is so much smaller, owing to the tiny radius of a planet relative to that of a star, that it is nearly unnoticeable to even many modern telescopes. Center of Mass The center of mass is a point at which the combined mass of the two (or more) bodies involved in the rotation acts as if it were concentrated at this single point. This point lies between the masses involved, and is closer to the larger mass than to the smaller. If the system of rotating masses has a transverse velocity, the motion can be represented by the motion of the center of mass with this same velocity. This can be related to the Solar System in the sense that the Sun is essentially the center of mass about which the planets orbit (in actuality it too rotates some, from the pull of the planets). However, when the two bodies approach more nearly equal masses, this point is drawn out of the heavier body and actually lies at a point in space directly between the two bodies. It remains equidistant from both stars (in the case of exactly equal mass) as they trace their elliptical orbits about it. The figure shows a center of mass located within the star, but note that the bodies always remain on opposite sides of the center. More bodies make the situation far more complex, but ultimately the idea is the same: at any given time the positions and motions of all the bodies maintain the rotation about the common center of mass. Application of Basic Ideas We now turn to the Applet to gain an active appreciation for the above concepts.
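A toy numerical model can make the light-curve discussion concrete before you open the Applet. The following sketch is not part of the original activity; all stellar parameters are made-up illustrative values. It builds the light curve of an edge-on circular binary, treating both stars as uniform disks and counting only total eclipses:

```python
import numpy as np

# Toy eclipsing-binary light curve (illustrative sketch, not a fitting tool).
# Assumptions (hypothetical values): circular orbit viewed edge-on,
# uniform-brightness stellar disks, star 2 smaller and cooler than star 1.
R1, R2 = 1.0, 0.5          # radii (arbitrary units)
T1, T2 = 6000.0, 4000.0    # effective temperatures (K)
a = 5.0                    # orbital separation

# Surface brightness scales as T^4; disk luminosity scales as R^2 * T^4.
L1, L2 = R1**2 * T1**4, R2**2 * T2**4
total = L1 + L2

phase = np.linspace(0.0, 1.0, 1000)
x = a * np.sin(2 * np.pi * phase)   # projected separation on the sky
z = np.cos(2 * np.pi * phase)       # sign tells which star is in front

flux = np.full_like(phase, total)
eclipsed = np.abs(x) < (R1 - R2)    # only total eclipses in this toy model
front_is_2 = z > 0
# Primary eclipse: cool star 2 blocks part of hot star 1 (deeper dip).
flux[eclipsed & front_is_2] = total - (R2**2 * T1**4)
# Secondary eclipse: star 2 hides behind star 1 (shallower dip).
flux[eclipsed & ~front_is_2] = total - L2

flux /= total  # normalize; the primary dip is deeper because, per unit
               # area, star 1 emits more light than star 2
```

Plotting flux against phase reproduces the two unequal dips described above: a deep dip when the cool star covers part of the hot one, and a shallow dip when it hides behind.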
To keep things simple, the Applet for this section has limited options. Once you open it, you can see the white dot as the star (the Sun for most of the options) and the blue dot as the planet. The Applet also greatly exaggerates the movement about the center of mass to exemplify the effect of the gravitational binding between the two objects making them both move. One must be aware of this, as the 10-Jupiter setting demonstrates the movement of two near-equal-mass bodies, whilst in reality ten times the mass of Jupiter is still a very minuscule mass compared to the Sun (a mere 0.95%) and would not send the Sun on a crash course through the Solar System as this Applet shows. Now, let's run some tests with the Applet (open it in a new window if it fails to run in a new tab) and see if you can answer the questions posed correctly. - Run the simulation for a while with the standard set-up of Sun/Jupiter. Then switch to the Sun/Earth. What do you notice about the center of mass? Set to Sun/Jupiter again and observe, then switch to two Jupiters, then five Jupiters. What changes happen with each change in the mass ratio? - Let us suppose this is a visual binary system, with two stars orbiting instead of a star and a planet. What would we be able to determine about the masses based on watching the rotation? If one or the other was not visible, would we still know it was a binary system? What type of binary system would it be? - Suppose we were looking at this as a spectroscopic binary. Would we be able to determine anything based on spectra obtained by repeatedly viewing this binary system? - Let us suppose it's an eclipsing binary. If our vantage point was from the bottom of the screen, at what position(s) in the orbit would we see a dip in light? What about if our vantage point was from the right? - The center of mass is inside the Sun when it is paired with the Earth because the difference in mass is so great, but with Jupiter it is not (remember that in reality it is always within the Sun, regardless of the planets' alignment). As the planet's mass increases, the simulation keeps the center of mass at the center of rotation and Jupiter remains at the same location, but the Sun moves further away from the center of mass to maintain the proper distance relative to the mass ratio. - We would know that the one moving less is more massive. If only one star were visible, we would still know it was a binary system: it would be an astrometric binary, because we would see a single star wobbling back and forth in space. - No, we would not be able to determine anything, because the system has no motion towards or away from us and just maintains a flattened appearance. Note, however, that this perfectly perpendicular situation is very unlikely in practice, and some shift would likely be obtained if the motion was fast enough. - At the bottom and the top, when the two bodies are aligned from our vantage point. When observing from the right, it would be when the stars were on the right and left sides of the orbit. Note that, no matter where we placed the vantage point, there would always be two spots at which the light would be reduced, and both would be when the bodies came into alignment from our vantage point. That concludes the basics of binary stars and exoplanets. We now move on to flesh out more advanced concepts, some of which were alluded to here. This section looks into the more advanced concepts of: Kepler's Laws of Planetary Motion, Newton's version of Kepler's Third Law, Orientation to Earth, Doppler Shift, and Proportionality.
Kepler's Laws of Planetary Motion Kepler's Laws were created to explain the motion of the planets in the Solar System. They are based upon Tycho Brahe's very accurate measurements of the heavens over many years. They center on the rejection of the geocentric model in favor of the heliocentric model, which was necessary to match the data without using epicycles to explain the motion. They are as follows: - "The orbit of every planet is an ellipse with the sun at a focus." - "A line joining a planet and the sun sweeps out equal areas during equal intervals of time." - "The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit." The first law works on the principle of center of mass. Kepler determined the orbits were elliptical because Tycho's measurements did not fit a theory using perfect circles, and the same reasoning applies today when dealing with binaries and exoplanets. The second law is also useful, relating the speed-up as the interacting objects approach each other and the slow-down as they depart. Velocity is a useful tool for determining other information about binary stars and stars with exoplanets. The third law was the first true astrophysical equation. Although in its original form it only applies to objects orbiting the Sun (or other stars of approximately equal mass), it is still useful, and it becomes greatly more useful when later manipulated by Newton. Kepler's third law has become hugely helpful in determining the masses present in binary star or exoplanet systems, as will be used in the mathematical concepts portion. The third law in proportional form:

P² = a³

- P in years
- a in AU

Newton's Law of Gravitation Applied to Kepler's Third Law Newton's Law of Gravitation dictates that all objects in the universe are gravitationally bound to each other. This is drawn into Kepler's Laws by recognizing that the planet exerts a force on the star as well as the star on the planet; Newton's form also separates the masses, so one can apply the law to any objects that are gravitationally bound in a meaningful way (not so distant that the pull has no impact). Newton's revised law:

P² = 4π²a³ / [G(M + m)]

- P in seconds
- a in meters
- G is the gravitational constant: 6.673×10⁻¹¹ m³ kg⁻¹ s⁻²
- M and m in kilograms

This law is commonly used to determine the total mass of visual binaries, which then allows extrapolation to large amounts of other data. Orientation to Earth The orientation to Earth is often described by the inclination. The vast majority of stars present an orientation of their satellites that is neither eclipsing across the center of the star nor lying perfectly in the plane of the celestial sphere. It is for this reason that we are often only able to extract a minimum mass when viewing a star's wobble: we do not know the inclination and thus detect only one component of the star's motion. Doppler Shift is the basis for a spectroscopic binary system. It is found as either two separate shifts in spectra, or a single shift of the primary star's spectrum generated by an unseen companion. It is important because the shifts can be used to find the radial velocity of both stars, or of the visible one if only one spectrum is observed. The equation to determine radial velocity is:

vr / c = Δλ / λ₀

- c is the speed of light in a vacuum (3×10⁸ m/s)
- λ₀ is the rest wavelength of the spectral line
- Δλ is the change from the rest wavelength to the measured wavelength
- vr is the radial velocity in m/s

If the period is known, this can be paired with it to determine the semi-major axis.
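As a quick illustration of the radial-velocity equation (a minimal sketch, not part of the activity; the numbers reuse the 650 nm example that appears in the applet questions below):

```python
# Radial velocity from Doppler shift: v_r / c = delta_lambda / lambda_0.
C = 3.0e8  # speed of light in a vacuum (m/s)

def radial_velocity(rest_nm: float, measured_nm: float) -> float:
    """Return v_r in m/s; positive means redshift (moving away)."""
    return C * (measured_nm - rest_nm) / rest_nm

# A 650 nm line measured at 650.0585 nm corresponds to ~27 km/s receding.
print(radial_velocity(650.0, 650.0585))  # ≈ 27000.0 m/s
```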
Since all motion involving two objects is a give-and-take relationship, there arises an intrinsic relationship between many physical aspects of the two bodies and their behavior with respect to each other. All of the following relations can be derived from Kepler's Laws, the Doppler Shift, and the associated mathematical principles. Note: units do not matter, as these are ratios and the units cancel. For two bodies 1 and 2 orbiting their common center of mass:

m₁/m₂ = r₂/r₁ = a₂/a₁ = α₂/α₁ = v₂/v₁ = vr₂/vr₁ = Δλ₂/Δλ₁

- m is mass
- r is the separation distance from the center of mass
- a is the length of the semi-major axis
- α is the angular separation
- v is the velocity
- vr is the radial velocity
- Δλ is the change in wavelength due to Doppler Shift

Application of Advanced Ideas This application section will use a more technical Applet that allows for more intimate manipulation of the model, to better experiment with some of the ideas in this section. - Leaving the model on the default settings, study the layout with the radial velocity, the visible light spectrum, the earth view, and the privileged view. Observe the radial velocity and spectrum. How do they behave? What does the negative velocity indicate? Let the left (red end) rest wavelength be 650 nm. Calculate the shift in wavelength when the red and blue paths cross and when red peaks in the positive. - Experiment with the model. First, adjust the values so that the privileged view is the same as the earth view. What impact does this have on the Doppler Shift? Second, adjust it to attain an eclipsing system. Lastly, make the following changes: a = .8, e = .8, i = 30, w = -45. Note the radial velocity curve now. Explain this velocity curve and note the difficulty of understanding it if one of the curves did not exist. - Change the three-solar-mass star to .0009535 solar masses, Jupiter's mass in terms of solar mass. Note the Doppler Shift for the star now. Explain what two changes can be made to the variables (without changing mass) that would aid in discovering this planet. One will be reflected in the model while the other will not readily be. - Formulate a way to prove that the center of mass will lie equidistant from both bodies under ideal conditions, and test it with the model. - Using Jupiter's mass of 1.8986×10²⁷ kg, the Sun's mass of 1.9891×10³⁰ kg and the average distance between them of 7.786×10¹¹ m, determine the period of Jupiter in seconds. Verify this value by using the simplified version (5.2 AU, ~π×10⁷ s in a year). - When the velocity is going into the positive, the spectrum is revealing a redshift (moving away), and when it is negative it is showing a blueshift (moving towards us). The negative velocity reflects the movement of the star towards the observer; this is the astronomers' customary sign convention. When the paths cross, the change is zero, because that is the point at which the radial velocity from the observer's point of view is zero. When the red curve is peaking in the positive (about 27 km/s), the change in wavelength is calculated to be 0.0585 nm. Work shown below. - Making the inclination 0 makes them match. This makes the Doppler Shift vanish, because the stars are moving entirely in the plane of the celestial sphere. An eclipsing system is attained by making the inclination 90 degrees. With the last set of changes, the radial velocity curve gets rather difficult to read. It is important to note that the intersections are still zero, so we can be certain of that. Further, the spikes in speed occur when the bodies are closest, as reflected in Kepler's second law dictating equal areas in equal times.
With only one curve it is visibly quite difficult to infer the other, and we are unable to be absolutely certain of values generated about such a binary star. - One change would be to decrease a, thereby bringing the period down to a mere 0.03 years, so that observing the star for a couple of days would reveal an entire period's worth of data and an observer could recognize the fast, but small, wobble of the star. The other change would be to turn the inclination to 90 degrees, so that the system would become eclipsing and a dip in light could be spotted as the planet transited the star. - Set both masses to be the same. They will then follow the exact same path around each other. - The answer is worked out below, and yes, the two values match up fairly closely. Doppler Shift worked out: vr/c = Δλ/λ₀ can be rewritten as Δλ = λ₀vr/c = (650 nm)(2.7×10⁴ m/s)/(3×10⁸ m/s) ≈ 0.0585 nm. Kepler's Third Law worked out: P² = 4π²a³/[G(M + m)] can be rewritten as P = √(4π²a³/[G(M + m)]) = √(4π²(7.786×10¹¹ m)³/[(6.673×10⁻¹¹)(1.9891×10³⁰ kg + 1.8986×10²⁷ kg)]) ≈ √(1.40×10¹⁷ s²) ≈ 3.74×10⁸ s. The simplified version: P² = a³ = (5.2 AU)³ ≈ 140.6 yr², which gives P ≈ 11.9 yr; then 11.9 yr × π×10⁷ s yr⁻¹ ≈ 3.7×10⁸ s. This concludes the advanced concepts that can be associated with binary stars and exoplanets. To fully explore the nature of these entities, continue on to the mathematical concepts, which will utilize real stellar data to extrapolate more data about binary stars. To Be Added If you are interested in doing further exploration of exoplanets, and helping the scientific community in the process, you can visit Systemic. The Systemic site has a console to download which contains radial velocity data for stars which are suspected or known to host exoplanets. With the console, you are able to match a curve to the data by adding planets and adjusting the planetary mass and distance until the curve is a good fit for the data accumulated so far. Upon registering, you are able to submit the curve, where the system will compare it to other curves for the same data set generated by other members; ultimately, as more data becomes available and submissions narrow down possible candidates for the stellar movement, your curve fitting may contribute to the verification of the existence of one or more exoplanets orbiting a distant star. For further explanation of the software, check out their tutorials.
Department of Physics, Middlebury College Modern Physics Laboratory In this experiment you will observe the behavior of electrons in a magnetic field and determine a value for the electron charge-to-mass ratio e/m. The apparatus consists of a large vacuum tube supported at the center of a pair of Helmholtz coils, as seen in the photograph of Fig. 1. The vacuum tube contains an electron gun which produces a collimated beam of electrons that is deflected by a magnetic field. An electron gun has two main parts: a filament that produces electrons through thermionic emission, and an anode that is placed at high positive potential so as to accelerate thermal electrons from the filament to the main region of the vacuum tube, as shown in Fig. 2. The magnetic field produced by the Helmholtz coils deflects the electrons into circular trajectories, and these paths are made visible through collisions of the electrons with a trace amount of mercury vapor present in the vacuum tube. A complete description of the e/m vacuum tube may be found in Ref. 1. For two coaxial coils of radius a and separation distance d, the magnitude of the magnetic field B at the center of the arrangement is given by (see Ref. 2)

B = µ₀NIa² / [a² + (d/2)²]^(3/2)     (1)

where N is the number of turns in each coil and I is the current through each coil. In the Helmholtz configuration one chooses d = a, so that the magnetic field in the central region is very homogeneous. The magnitude of the magnetic field in the central region is then given by

B = (4/5)^(3/2) µ₀NI / a     (2)

After being accelerated by a potential difference V in the electron gun, electrons execute circular motion in the Helmholtz magnetic field region. From the radius of curvature R of this motion one can compute the electron charge-to-mass ratio from

e/m = 2V / (B²R²)     (3)

(1) Familiarize yourself with the experimental apparatus by making a schematic diagram of the electrical connections. Do not turn on voltages until you understand the role of each of the power supplies. (2) Observe the electron beam as the Helmholtz field and anode potential are independently varied. What is the effect of reversing the Helmholtz field direction? Verify directly that the electrons in the beam have a negative charge. In your laboratory notebook, outline a method for determining the sign of the electron charge. (3) With fixed anode voltages V of 20, 40, 80, and 120 V, determine the Helmholtz coil current I necessary to deflect the electron beam to each of the five cross bar pins. The location of these five pins is given in Fig. 2. These 20 data points will be referred to as "Data at Fixed Anode Potential". (4) With fixed Helmholtz coil currents I of 2.5, 3.5, 4.5, and 5.5 A, determine the anode potential V necessary to deflect the electron beam to each of the five cross bar pins. These 20 data points will be referred to as "Data at Fixed Helmholtz Field". (5) Each Helmholtz coil has N = 72 turns of wire and a center radius a = 33 cm. Record these values in your laboratory notebook. Using Eq. (3) it would be a simple matter to calculate 40 values of e/m from the 40 data points you have taken. Unfortunately, the earth's magnetic field is present in this experiment and failure to take proper account of it will lead to an inaccurate value for e/m. The e/m apparatus has been positioned so that the Helmholtz coils are coaxial with the earth's magnetic field. In Middlebury, the earth's magnetic field direction is toward the ground, making an angle of approximately 40° with the vertical.
In your experimental configuration, this means that the earth's magnetic field partially cancels the Helmholtz field B. Because some of the Helmholtz field is simply offsetting the earth's field, your experimental values of the Helmholtz current I are slightly larger than they would be in the absence of the earth's field. In the absence of the earth's magnetic field, Eq. (3) predicts that at fixed anode voltage V the Helmholtz current I is directly proportional to 1/R. Therefore, if you plot I versus 1/R you expect a straight line passing through the origin, as in the dashed line of Fig. 3. However, if you plot your experimental "Data at Fixed Anode Voltage" for a single, fixed value of V, you will obtain a straight line much more like the solid line of Fig. 3. The solid line has the same slope as the dashed line determined from Eq. (3), but it does not pass through the origin. This failure to pass through the origin can be attributed directly to the presence of the earth's magnetic field in this experiment. Note that when the earth's field is cancelled exactly, the electron trajectories are straight and their radii of curvature are infinite, so that 1/R = 0. Thus IE, the I-axis intercept of the solid line in Fig. 3, represents the Helmholtz coil current necessary to cancel the earth's magnetic field. You must therefore subtract IE from all measured data before computing e/m with Eq. (3). (1) Refer to Ref. 2 and derive Eqs. (1), (2), and (3). Supply the necessary fundamental constants so that you have a simple numerical formula for reducing your experimental data. (2) Using axes of I and 1/R, plot all 20 measurements of the "Data at Fixed Anode Voltage" set on a large (3 ft × 3 ft) sheet of graph paper. For each of the four fixed voltages, carefully draw a straight line through your data points. The common intercept of these four lines on the I axis is IE. Use this value of IE to correct all Helmholtz current measurements in this experiment. (3) Substitute your value of IE into Eq. (2) to compute the magnetic field BE that the Helmholtz coils cancel. Compare your value of BE to the magnitude of the earth's magnetic field at Middlebury. (4) After making the correction for IE discussed above, use your entire set of 40 measurements to obtain 40 values of e/m. Histogram these 40 values of e/m in an appropriate figure. Determine the mean of these e/m values and their standard deviation. (5) Make reasonable estimates of the systematic and random uncertainties in your measurements of the experimental quantities a, d, V, I, and R. Use your laboratory data to make a final determination of e/m along with an appropriate uncertainty estimate. Use the fundamental constants of Appendix A to compute the currently accepted value for e/m along with the currently accepted uncertainty. Compare this accepted value for e/m to your experimental value. 1. Instructions for Use of No. 0623B e/m Apparatus, manual (Sargeant-Welch Scientific Company, Skokie, IL). 2. D. Halliday and R. Resnick, Fundamentals of Physics, 2nd ed. (John Wiley & Sons, New York, 1981), pp. 566-567.
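As an illustration of analysis step (4), here is a minimal sketch of the data reduction in Python. It is not part of the lab handout: the reading values and the earth-field correction current IE are hypothetical placeholders, while N and a are the apparatus values quoted above.

```python
import math

# e/m data reduction: Eq. (3) with B from Eq. (2), after subtracting
# the earth-field correction current IE (hypothetical value below).
MU0 = 4.0e-7 * math.pi   # permeability of free space (T·m/A)
N, a = 72, 0.33          # coil turns and radius (m), from the apparatus
I_E = 0.2                # hypothetical intercept current from Fig. 3 (A)

def helmholtz_field(I_amps: float) -> float:
    """Eq. (2): B at the center of a Helmholtz pair with d = a."""
    return (4.0 / 5.0) ** 1.5 * MU0 * N * I_amps / a

def e_over_m(V_volts: float, I_amps: float, R_m: float) -> float:
    """Eq. (3): e/m = 2V / (B^2 R^2), using the corrected current."""
    B = helmholtz_field(I_amps - I_E)
    return 2.0 * V_volts / (B ** 2 * R_m ** 2)

# One hypothetical data point: V = 40 V, I = 2.4 A, R = 5 cm.
print(f"e/m ≈ {e_over_m(40.0, 2.4, 0.05):.2e} C/kg")  # ≈ 1.7e11 C/kg
```

Looping this function over all 40 corrected data points and histogramming the results reproduces the analysis asked for in step (4).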
The immense Chicxulub crater is a remnant of one of the most consequential days in the history of life on Earth: the asteroid strike that triggered the Cretaceous-Paleogene, or K-Pg, mass extinction. Some 66 million years ago, all non-avian dinosaurs on Earth were abruptly wiped out, along with almost three-quarters of the planet's plant and animal species, and a growing body of research identifies the culprit as an asteroid impact in present-day Mexico. The Chicxulub impactor, as it is known, was a plummeting asteroid or comet, estimated at roughly 10 kilometers across, that ended the 160-million-year reign of the dinosaurs. It left behind a crater off the coast of Mexico spanning about 93 miles (150 kilometers) and reaching some 12 miles deep; other estimates put the structure at 170 to 180 kilometers in diameter, a mid-sized specimen among known impact craters. Centered near the town of Chicxulub Puerto on the Yucatán Peninsula, with part of its traces preserved on the nearby seabed, it is the best-preserved large impact structure on Earth, and the best example of the types of impact craters that were produced on Earth during the period of heavy bombardment more than 3,800 million years ago.

In spite of these immense measurements, the crater is hard to see, even if you're standing on its rim. Ten years before the 1990 identification of the crater, physicist Luis Alvarez and geologist Walter Alvarez, a father-son team, proposed the theory that an impact had caused the extinction. To get a good map, NASA researchers examined the structure from space: satellite images showing a necklace of sinkholes, called cenotes, across the Yucatán's northern tip are what first caught the attention of NASA researchers Drs. Kevin Pope, Adriana Ocampo and Charles Duller in 1990, and the view from space lets scientists see surface features that are not nearly so obvious from the ground.

The impact theory is now favored over the competing idea that volcanism in the Deccan Traps caused the extinction. In 2016, an expedition eighteen miles off the coast of Mexico drilled into the seafloor to probe the crater, and core material retrieved from the site remains under study; analysis of rocks from deep inside the crater has helped reconstruct what happened during the first 24 hours after the impact. Space dust found in the Chicxulub crater has shown that the impact of the asteroid and the extinction of the dinosaurs are "indisputably linked": though this space dust is present in low quantities all over Earth, a study published in Science Advances found it is four times more concentrated in the Chicxulub impact crater than in the surrounding area, in a core sample containing debris that settled through the post-impact atmosphere (data from Goderis et al.). New evidence found in the crater also suggests the black carbon that filled the atmosphere after the strike was caused by the impact itself and not by massive wildfires, and modeling indicates the cataclysmic impact spawned a tsunami that produced wave heights of several meters in distant waters. Where the Chicxulub impactor came from, however, remains a matter of debate.
Adjectives are one of the English parts of speech, although they were historically classed together with the nouns. Certain words that were traditionally considered to be adjectives, including the, this, my, etc., are today usually classed separately, as determiners. Adjective comes from Latin (nōmen) adjectīvum "additional (noun)", a calque of Ancient Greek ἐπίθετον (ὄνομα), epítheton (ónoma), literally 'additional (noun)'. In the grammatical tradition of Latin and Greek, because adjectives were inflected for gender, number, and case like nouns (a process called declension), they were considered a subtype of noun. The words that are today typically called nouns were then called substantive nouns (nōmen substantīvum). The terms noun substantive and noun adjective were formerly used in English but are now obsolete.

Types of use
A given occurrence of an adjective can generally be classified into one of three kinds of use:
- Attributive adjectives are part of the noun phrase headed by the noun they modify; for example, happy is an attributive adjective in "happy people". In some languages, attributive adjectives precede their nouns; in others, they follow their nouns; and in yet others, it depends on the adjective, or on the exact relationship of the adjective to the noun. In English, attributive adjectives usually precede their nouns in simple phrases, but often follow their nouns when the adjective is modified or qualified by a phrase acting as an adverb. For example: "I saw three happy kids", and "I saw three kids happy enough to jump up and down with glee." See also Postpositive adjective.
- Predicative adjectives are linked via a copula or other linking mechanism to the noun or pronoun they modify; for example, happy is a predicate adjective in "they are happy" and in "that made me happy." (See also: Predicative expression, Subject complement.)
- Nominal adjectives act almost as nouns. One way this can happen is if a noun is elided and an attributive adjective is left behind. In the sentence, "I read two books to them; he preferred the sad book, but she preferred the happy", happy is a nominal adjective, short for "happy one" or "happy book". Another way this can happen is in phrases like "out with the old, in with the new", where "the old" means "that which is old" or "all that is old", and similarly with "the new". In such cases, the adjective may function as a mass noun (as in the preceding example). In English, it may also function as a plural count noun denoting a collective group, as in "The meek shall inherit the Earth", where "the meek" means "those who are meek" or "all who are meek".

Adjectives feature as a part of speech (word class) in most languages. In some languages, the words that serve the semantic function of adjectives are categorized together with some other class, such as nouns or verbs. In the phrase "a Ford car", "Ford" is unquestionably a noun, but its function is adjectival: to modify "car". In some languages adjectives can function as nouns: "uno rojo", "a red (object)" (Span.). As for "confusion" with verbs, rather than an adjective meaning "big", a language might have a verb that means "to be big", and could then use an attributive verb construction analogous to "big-being house" to express what English expresses as "big house". Such an analysis is possible for the grammar of Standard Chinese, for example. Different languages do not always use adjectives in exactly the same situations.
For example, where English uses to be hungry (hungry being an adjective), Dutch, French, and Spanish use honger hebben, avoir faim, and tener hambre respectively (literally "to have hunger", the words for "hunger" being nouns). Similarly, where Hebrew uses the adjective זקוק zaqūq (roughly "in need of"), English uses the verb "to need". In languages which have adjectives as a word class, they are usually an open class; that is, it is relatively common for new adjectives to be formed via such processes as derivation. However, Bantu languages are well known for having only a small closed class of adjectives, and new adjectives are not easily derived. Similarly, native Japanese adjectives (i-adjectives) are considered a closed class (as are native verbs), although nouns (an open class) may be used in the genitive to convey some adjectival meanings, and there is also the separate open class of adjectival nouns (na-adjectives).

Many languages, including English, distinguish between adjectives, which qualify nouns and pronouns, and adverbs, which mainly modify verbs, adjectives, and other adverbs. Not all languages have exactly this distinction and many languages, including English, have words that can function as both. For example, in English, fast is an adjective in "a fast car" (where it qualifies the noun car), but an adverb in "he drove fast" (where it modifies the verb drove).
- Eine kluge neue Idee. (A clever new idea.)
- Eine klug ausgereifte Idee. (A cleverly developed idea.)
A German word like klug ("clever(ly)") takes endings when used as an attributive adjective, but not when used adverbially. (It also takes no endings when used as a predicative adjective: er ist klug, "he is clever".) Whether these are distinct parts of speech or distinct usages of the same part of speech is a question of analysis. It can be noted that while German linguistic terminology distinguishes adverbiale from adjektivische Formen, German refers to both as Eigenschaftswörter ("property words").

Linguists today distinguish determiners from adjectives, considering them to be two separate parts of speech (or lexical categories), but formerly determiners were considered to be adjectives in some of their uses. In English dictionaries, which typically still do not treat determiners as their own part of speech, determiners are often recognizable by being listed both as adjectives and as pronouns. Determiners are words that are neither nouns nor pronouns, yet reference a thing already in context. Determiners generally do this by indicating definiteness (as in a vs. the), quantity (as in one vs. some vs. many), or another such property.

An adjective acts as the head of an adjective phrase or adjectival phrase (AP). In the simplest case, an adjective phrase consists solely of the adjective; more complex adjective phrases may contain one or more adverbs modifying the adjective ("very strong"), or one or more complements (such as "worth several dollars", "full of toys", or "eager to please"). In English, attributive adjective phrases that include complements typically follow the noun that they qualify ("an evildoer devoid of redeeming qualities").

Other modifiers of nouns
In many languages, including English, it is possible for nouns to modify other nouns. Unlike adjectives, nouns acting as modifiers (called attributive nouns or noun adjuncts) usually are not predicative; a beautiful park is beautiful, but a car park is not "car".
The modifier often indicates origin ("Virginia reel"), purpose ("work clothes"), semantic patient ("man eater") or semantic subject ("child actor"); however, it may generally indicate almost any semantic relationship. It is also common for adjectives to be derived from nouns, as in boyish, birdlike, behavioral (behavioural), famous, manly, angelic, and so on. Many languages have special verbal forms called participles that can act as noun modifiers (alone or as the head of a phrase). Sometimes participles develop into pure adjectives. Examples of this in English include relieved (the past participle of the verb relieve, used as an adjective in sentences such as "I am so relieved to see you"), spoken (as in "the spoken word"), and going (the present participle of the verb go, used as an adjective in such phrases as "the going rate"). Other constructs that often modify nouns include prepositional phrases (as in "a rebel without a cause"), relative clauses (as in "the man who wasn't there"), and infinitive phrases (as in "a cake to die for"). Some nouns can also take complements such as content clauses (as in "the idea that I would do that"), but these are not commonly considered modifiers. For more information about possible modifiers and dependents of nouns, see Components of noun phrases.

In many languages, attributive adjectives usually occur in a specific order. In general, the adjective order in English can be summarised as: opinion, size, age or shape, colour, origin, material, purpose. This sequence (with age preceding shape) is sometimes referred to by the mnemonic OSASCOMP. Other language authorities, like the Cambridge Dictionary, alternatively state that shape precedes rather than follows age.
- Determiners and postdeterminers – articles, numerals and other limiters (e.g. three blind mice)
- Observation/opinion – limiter adjectives (e.g. a real hero, a perfect idiot) and adjectives subject to subjective measure (e.g. beautiful, interesting), or with a value (e.g. good, bad, costly)
- Size – adjectives denoting physical size (e.g. tiny, big, extensive)
- Age – adjectives denoting age (e.g. young, old, new, ancient, six-year-old)
- Shape – adjectives describing more detailed physical attributes than overall size (e.g. round, sharp, swollen)
- Colour – adjectives denoting colour (e.g. white, black, pale)
- Origin – denominal adjectives denoting source (e.g. French, volcanic, extraterrestrial)
- Material – denominal adjectives denoting what something is made of (e.g. woollen, metallic, wooden)
- Qualifier/purpose – final limiter, which sometimes forms part of the (compound) noun (e.g. rocking chair, hunting cabin, passenger car, book cover)
This means that in English, adjectives pertaining to size precede adjectives pertaining to age ("little old", not "old little"), which in turn generally precede adjectives pertaining to color ("old white", not "white old"). So, one would say "One (quantity) nice (opinion) little (size) old (age) round (shape) [or round old] white (color) brick (material) house." When several adjectives of the same type are used together, they are ordered from general to specific, like "lovely intelligent person" or "old medieval castle". This order may be more rigid in some languages than others; in some, like Spanish, it may only be a default (unmarked) word order, with other orders being permissible. Other languages, such as Tagalog, follow their adjectival orders as rigidly as English.
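To make the conventional ordering concrete, here is a minimal Python sketch (an illustration written for this article, not a routine from any linguistics library; the tiny lexicon and the category ranks are assumptions chosen to mirror the list above) that sorts prenominal adjectives by semantic category:

CATEGORY_RANK = {
    "opinion": 0, "size": 1, "age": 2, "shape": 3,
    "colour": 4, "origin": 5, "material": 6, "purpose": 7,
}

# Hypothetical sample lexicon; real coverage would need to be far wider.
LEXICON = {
    "nice": "opinion", "little": "size", "old": "age", "round": "shape",
    "white": "colour", "French": "origin", "brick": "material",
    "rocking": "purpose",
}

def order_adjectives(adjectives):
    # Sort by the rank of each adjective's semantic category.
    return sorted(adjectives, key=lambda a: CATEGORY_RANK[LEXICON[a]])

print(" ".join(order_adjectives(["white", "old", "nice", "round", "little", "brick"])))
# Prints: nice little old round white brick

Applied to the example in the text, the sketch reproduces "nice little old round white brick (house)"; ordering several same-category adjectives from general to specific would require information that the category ranks alone do not carry.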
The normal adjectival order of English may be overridden in certain circumstances, especially when one adjective is being fronted. In addition, the usual order of adjectives in English would result in the phrase "the bad big wolf" (opinion before size), but instead the usual phrase is "the big bad wolf", perhaps because the ablaut reduplication rule that high vowels precede low vowels overrides the normal order of adjectives. Owing partially to borrowings from French, English has some adjectives that follow the noun as postmodifiers, called postpositive adjectives, as in time immemorial and attorney general. Adjectives may even change meaning depending on whether they precede or follow, as in proper: They live in a proper town (a real town, not a village) vs. They live in the town proper (in the town itself, not in the suburbs). All adjectives can follow nouns in certain constructions, such as tell me something new.

In many languages, some adjectives are comparable. For example, a person may be "polite", but another person may be "more polite", and a third person may be the "most polite" of the three. The word "more" here modifies the adjective "polite" to indicate a comparison is being made, and "most" modifies the adjective to indicate an absolute comparison (a superlative). In English, many adjectives can take the suffixes "-er" and "-est" (sometimes requiring additional letters before the suffix; see forms for far below) to indicate the comparative and superlative forms, respectively:
- "great", "greater", "greatest"
- "deep", "deeper", "deepest"
Some adjectives are irregular in this sense:
- "good", "better", "best"
- "bad", "worse", "worst"
- "many", "more", "most" (sometimes regarded as an adverb or determiner)
- "little", "less", "least"
Some adjectives can have both regular and irregular variations:
- "old", "older", "oldest"
- "far", "farther", "farthest"
- "old", "elder", "eldest"
- "far", "further", "furthest"
Another way to convey comparison is by incorporating the words "more" and "most". There is no simple rule to decide which form is correct for any given adjective, however. The general tendency is for simpler adjectives and those from Anglo-Saxon to take the suffixes, while longer adjectives and those from French, Latin, or Greek do not; sometimes the sound of the word is the deciding factor.

Many adjectives do not naturally lend themselves to comparison. For example, some English speakers would argue that it does not make sense to say that one thing is "more ultimate" than another, or that something is "most ultimate", since the word "ultimate" is already absolute in its semantics. Such adjectives are called non-comparable or absolute. Nevertheless, native speakers will frequently play with the raised forms of adjectives of this sort. Although "pregnant" is logically non-comparable (either one is pregnant or not), one may hear a sentence like "She looks more and more pregnant each day". Likewise "extinct" and "equal" appear to be non-comparable, but one might say that a language about which nothing is known is "more extinct" than a well-documented language with surviving literature but no speakers, while George Orwell wrote "All animals are equal, but some animals are more equal than others". These cases may be viewed as evidence that the base forms of these adjectives are not as absolute in their semantics as is usually thought. Comparative and superlative forms are also occasionally used for other purposes than comparison.
In English, comparatives can be used to suggest that a statement is only tentative or tendential: one might say "John is more the shy-and-retiring type," where the comparative "more" is not really comparing him with other people or with other impressions of him, but rather could be substituting for "on the whole". In Italian, superlatives are frequently used to put strong emphasis on an adjective: Bellissimo means "most beautiful", but is in fact more commonly heard in the sense "extremely beautiful".

Attributive adjectives, and other noun modifiers, may be used either restrictively (helping to identify the noun's referent, hence "restricting" its reference) or non-restrictively (helping to describe an already-identified noun). For example:
- "He was a lazy sort, who would avoid a difficult task and fill his working hours with easy ones." Here "difficult" is restrictive – it tells us which tasks he avoids, distinguishing these from the easy ones: "Only those tasks that are difficult".
- "She had the job of sorting out the mess left by her predecessor, and she performed this difficult task with great acumen." Here "difficult" is non-restrictive – we already know which task it was, but the adjective describes it more fully: "The aforementioned task, which (by the way) is difficult".
In some languages, such as Spanish, restrictiveness is consistently marked; for example, in Spanish la tarea difícil means "the difficult task" in the sense of "the task that is difficult" (restrictive), whereas la difícil tarea means "the difficult task" in the sense of "the task, which is difficult" (non-restrictive). In English, restrictiveness is not marked on adjectives, but is marked on relative clauses (the difference between "the man who recognized me was there" and "the man, who recognized me, was there" being one of restrictiveness).

In some languages, adjectives alter their form to reflect the gender, case and number of the noun that they describe. This is called agreement or concord. Usually it takes the form of inflections at the end of the word, as in Latin:
- puella bona (good girl, feminine singular nominative)
- puellam bonam (good girl, feminine singular accusative/object case)
- puer bonus (good boy, masculine singular nominative)
- pueri boni (good boys, masculine plural nominative)
and in Irish:
- buachaill maith (good boy, masculine)
- girseach mhaith (good girl, feminine)
Often, a distinction is made here between attributive and predicative usage. In English, adjectives never agree, and in French, they always agree. In German, they agree only when they are used attributively, and in Hungarian, they agree only when they are used predicatively:
- English: The good (Ø) boys. The boys are good (Ø).
- French: Les bons garçons. Les garçons sont bons.
- German: Die braven Jungen. Die Jungen sind brav (Ø).
- Hungarian: A jó (Ø) fiúk. A fiúk jók.

See also
- Attributive verb
- Flat adverb
- List of eponymous adjectives in English
- List of English collateral adjectives
- Noun adjunct
- Postpositive adjective
- Proper adjective

References
- "Adjectives". Capital Community College Foundation. Retrieved 20 March 2012.
- Trask, R.L. (2013). Dictionary of Grammatical Terms in Linguistics. Taylor & Francis. p. 188. ISBN 978-1-134-88420-9.
- adjectivus. Charlton T. Lewis and Charles Short, A Latin Dictionary, on Perseus Project.
- ἐπίθετος. Liddell, Henry George; Scott, Robert. A Greek–English Lexicon, at the Perseus Project.
- Mastronarde, Donald J. (2013). Introduction to Attic Greek. University of California Press. p. 60.
- McMenomy, Bruce A. (2014). Syntactical Mechanics: A New Approach to English, Latin, and Greek. University of Oklahoma Press. p. 8.
- "Order of adjectives". British Council.
- Dixon, R.M.W. (1977). "Where Have All the Adjectives Gone?". Studies in Language 1 (1): 19–80. doi:10.1075/sl.1.1.04dix.
- Dowling, Tim (13 September 2016). "Order force: the old grammar rule we all obey without realising". The Guardian.
- "Adjectives: order" (from English Grammar Today). Cambridge Advanced Learner's Dictionary online.
- Declerck, R. (1991). A Comprehensive Descriptive Grammar of English. p. 350: "When there are several descriptive adjectives, they normally occur in the following order: characteristic — size — shape — age — colour — [...]"
- Dixon, R.M.W. (1993). In R. E. Asher (ed.), The Encyclopedia of Language and Linguistics (1st ed.). Pergamon Press. pp. 29–35. ISBN 0-08-035943-4.
- Dixon, R.M.W. (1999). "Adjectives". In K. Brown & T. Miller (eds.), Concise Encyclopedia of Grammatical Categories. Amsterdam: Elsevier. pp. 1–8. ISBN 0-08-043164-X.
- Warren, Beatrice (1984). Classifying Adjectives. Gothenburg Studies in English 56. Göteborg: Acta Universitatis Gothoburgensis. ISBN 91-7346-133-4.
- Wierzbicka, Anna (1986). "What's in a noun? (or: How do nouns differ in meaning from adjectives?)". Studies in Language 10 (2): 353–389. doi:10.1075/sl.10.2.05wie.
A sequence is a mathematical term for an ordered list of numbers that follows a specified pattern. Common types of sequences are arithmetic, geometric, harmonic, and Fibonacci. The Fibonacci sequence is an infinite series of numbers in which each number is the sum of the two preceding numbers. Beginning with 0 and 1, the sequence runs 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. The third term, 1, is obtained by adding the first and second terms (0 + 1 = 1). Similarly, adding the 2nd and 3rd terms gives 2 (1 + 1 = 2), and adding the 3rd and 4th terms gives 3 (1 + 2 = 3), and so on; for example, combining 21 and 34 yields the next term, i.e., 55.

Fibonacci Sequence Formula
Each Fibonacci number is the sum of the two before it, with 0 and 1 as the first two. Mathematically, the formula for the Fibonacci series is Fn = Fn-1 + Fn-2, where Fn is the nth number in the series. Starting with 0 and 1: the third number is 0 + 1 = 1, the fourth is 1 + 1 = 2, the fifth is 2 + 1 = 3, the sixth is 3 + 2 = 5, and so on.

Example 1: Find the Fibonacci number when n = 6, using the recursive relation.
The formula to calculate the Fibonacci sequence is Fn = Fn-1 + Fn-2. Take F0 = 0 and F1 = 1. Using the formula, we get
F2 = F1 + F0 = 1 + 0 = 1
F3 = F2 + F1 = 1 + 1 = 2
F4 = F3 + F2 = 2 + 1 = 3
F5 = F4 + F3 = 3 + 2 = 5
F6 = F5 + F4 = 5 + 3 = 8
Therefore, the Fibonacci number is 8.

Uses of Fibonacci Sequence
The Fibonacci sequence can easily be found in nature. It appears in biological settings such as tree branching, the arrangement of leaves on a stem, pineapple fruit sprouts, artichoke flowering, unfurling ferns, the arrangement of pine cone bracts, the spirals of sunflower seeds, and the form of a snail's shell. The Fibonacci sequence is also employed in engineering: it is used in computer data structures and sorting algorithms, financial engineering, audio compression, and architectural engineering.
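The recurrence is straightforward to compute. Below is a short Python sketch (an illustrative implementation written for this article, not taken from any particular library) that steps the relation Fn = Fn-1 + Fn-2 iteratively:

def fibonacci(n):
    # Return the nth Fibonacci number, with F0 = 0 and F1 = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # step the recurrence F(n) = F(n-1) + F(n-2)
    return a

print([fibonacci(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fibonacci(6))                       # 8, matching the worked example

Running it reproduces both the opening terms of the sequence and the value F6 = 8 from Example 1.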
Monetary policy

The usual goals of monetary policy are to achieve or maintain full employment, to achieve or maintain a high rate of economic growth, and to stabilize prices and wages. Until the early 20th century, monetary policy was thought by most experts to be of little use in influencing the economy. Inflationary trends after World War II, however, caused governments to adopt measures that reduced inflation by restricting growth in the money supply.

Monetary policy is the domain of a nation's central bank. The Federal Reserve System (commonly called the Fed) in the United States and the Bank of England of Great Britain are two of the largest such "banks" in the world. Although there are some differences between them, the fundamentals of their operations are almost identical and are useful for highlighting the various measures that can constitute monetary policy.

The Fed uses three main instruments in regulating the money supply: open-market operations, the discount rate, and reserve requirements. The first is by far the most important. By buying or selling government securities (usually bonds), the Fed, or a central bank, affects the money supply and interest rates. If, for example, the Fed buys government securities, it pays with a check drawn on itself. This action creates money in the form of additional deposits from the sale of the securities by commercial banks. By adding to the cash reserves of the commercial banks, then, the Fed enables those banks to increase their lending capacity. Consequently, the additional demand for government bonds bids up their price and thus reduces their yield (i.e., interest rates). The purpose of this operation is to ease the availability of credit and to reduce interest rates, which thereby encourages businesses to invest more and consumers to spend more. The selling of government securities by the Fed achieves the opposite effect of contracting the money supply and increasing interest rates.

The second tool is the discount rate, which is the interest rate at which the Fed (or a central bank) lends to commercial banks. An increase in the discount rate reduces the amount of lending made by banks. In most countries the discount rate is used as a signal, in that a change in the discount rate will typically be followed by a similar change in the interest rates charged by commercial banks.

The third tool involves changes in reserve requirements. Commercial banks are required by law to hold a specific percentage of their deposits as reserves with the Fed (or a central bank), either in the form of non-interest-bearing reserve deposits or as cash. This reserve requirement acts as a brake on the lending operations of the commercial banks: by increasing or decreasing this reserve-ratio requirement, the Fed can influence the amount of money available for lending and hence the money supply. This tool is rarely used, however, because it is so blunt. The Bank of England and most other central banks also employ a number of other tools, such as "treasury directive" regulation of installment purchasing and "special deposits."

Historically, under the gold standard of currency valuation, the primary goal of monetary policy was to protect the central banks' gold reserves. When a nation's balance of payments was in deficit, an outflow of gold to other nations would result. In order to stem this drain, the central bank would raise the discount rate and then undertake open-market operations to reduce the total quantity of money in the country.
This would lead to a fall in prices, income, and employment and reduce the demand for imports and thus would correct the trade imbalance. The reverse process was used to correct a balance of payments surplus. The inflationary conditions of the late 1960s and '70s, when inflation in the Western world rose to a level three times the 1950–70 average, revived interest in monetary policy. Monetarists such as Harry G. Johnson, Milton Friedman, and Friedrich Hayek explored the links between the growth in money supply and the acceleration of inflation. They argued that tight control of money-supply growth was a far more effective way of squeezing inflation out of the system than were demand-management policies. Monetary policy is still used as a means of controlling a national economy's cyclical fluctuations.
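To see in numbers why bidding up bond prices lowers their yield, as described above under open-market operations, consider the following Python sketch (the bond figures are hypothetical, chosen purely for illustration):

def current_yield(annual_coupon, price):
    # Current yield = annual coupon payment / market price.
    return annual_coupon / price

# A hypothetical bond with $100 face value paying a $5 annual coupon.
for price in (95.0, 100.0, 105.0):
    print(f"price ${price:.2f} -> yield {current_yield(5.0, price):.2%}")

# price $95.00 -> yield 5.26%
# price $100.00 -> yield 5.00%
# price $105.00 -> yield 4.76%

As central-bank purchases push the market price of the bond from $95 toward $105, the same $5 coupon represents a smaller and smaller return, so the yield falls; sales of securities have the opposite effect.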
Polynomial division is the division of one polynomial by another. The division can be between two monomials, a polynomial and a monomial, or two polynomials. Before discussing how to divide polynomials, a brief introduction to polynomials is given below.

A polynomial is an algebraic expression of the type a_n x^n + a_(n-1) x^(n-1) + … + a_2 x^2 + a_1 x + a_0, where n is zero or a positive integer and a_n, a_(n-1), …, a_1, a_0 are the real coefficients of the terms of the polynomial. The highest power of x in the expression is known as the degree of the polynomial. If p(x) represents a polynomial and x = k is such that p(k) = 0, then k is a root (zero) of the given polynomial.

Example: Given a polynomial equation, p(x) = x^2 – x – 2, find the zeros of the equation.
Given polynomial: p(x) = x^2 – x – 2. Factoring gives x^2 – x – 2 = (x – 2)(x + 1), which is zero when x = 2 or x = –1. Thus, –1 and 2 are zeros of the given polynomial. It is to be noted that the degree (highest power) of the polynomial gives the number of zeros of the polynomial, counted with multiplicity.

Division of Polynomial: Division is the process of splitting a quantity into equal amounts. In terms of mathematics, the process of repeated subtraction, or the reverse operation of multiplication, is termed division. For example, when 20 is divided by 4 we get 5 as the result, since 4 is subtracted 5 times from 20. The four basic operations, viz. addition, subtraction, multiplication and division, can also be performed on algebraic expressions. Let us discuss dividing polynomials and algebraic expressions.

Types of Polynomial Division
For dividing polynomials, generally three cases can arise:
- Division of a monomial by another monomial
- Division of a polynomial by a monomial
- Division of a polynomial by another polynomial
Let us discuss all these cases one by one:

Division of a monomial by another monomial
Consider the algebraic expression 40x^2 divided by 10x: 40x^2/10x = (2×2×2×5×x×x)/(2×5×x). Since 2, 5 and x are common to both the numerator and the denominator, 40x^2/10x = 4x.

Division of a polynomial by a monomial
The second case is when a polynomial is to be divided by a monomial. Each term of the polynomial is separately divided by the monomial (as described above) and the quotients are added to get the result. Consider the following example.
Example: Divide 24x^3 – 12xy + 9x by 3x.
The expression 24x^3 – 12xy + 9x has three terms: 24x^3, –12xy and 9x. Each term is divided separately:
(24x^3 – 12xy + 9x)/3x = (24x^3/3x) – (12xy/3x) + (9x/3x) = 8x^2 – 4y + 3.

Division of a polynomial by another polynomial
For dividing a polynomial by another polynomial, both are written in standard form, i.e. the terms of the dividend and the divisor are arranged in decreasing order of their degrees.
Example: Divide 3x^3 – 8x + 12 by x – 1.
The dividend is 3x^3 – 8x + 12 and the divisor is x – 1. The leading term of the dividend is divided by the leading term of the divisor, 3x^3 ÷ x = 3x^2. This result is multiplied by the divisor, 3x^2(x – 1) = 3x^3 – 3x^2, and subtracted from the dividend, leaving 3x^2 – 8x + 12. The remainder is then treated as a new dividend and the same steps are repeated (3x^2 ÷ x = 3x leaves –5x + 12; then –5x ÷ x = –5 leaves 7) until the remainder becomes zero or its degree is less than that of the divisor. The quotient is therefore 3x^2 + 3x – 5 with remainder 7, as the sketch below also computes.
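The long-division procedure above is mechanical enough to automate. Here is a minimal Python sketch (written for this example, not a library routine) that performs polynomial long division on coefficient lists given highest degree first:

def poly_divide(dividend, divisor):
    # Divide two polynomials given as coefficient lists (highest degree first).
    # Returns (quotient, remainder) as coefficient lists.
    out = list(dividend)
    dlen = len(divisor)
    # Repeat until the remaining degree is less than the divisor's degree.
    for i in range(len(dividend) - dlen + 1):
        coeff = out[i] / divisor[0]   # divide the leading terms
        out[i] = coeff
        for j in range(1, dlen):      # subtract coeff times the divisor
            out[i + j] -= coeff * divisor[j]
    split = len(dividend) - dlen + 1
    return out[:split], out[split:]

# 3x^3 - 8x + 12 divided by x - 1:
quotient, remainder = poly_divide([3, 0, -8, 12], [1, -1])
print(quotient, remainder)  # [3.0, 3.0, -5.0] [7.0]

The output matches the worked example: quotient 3x^2 + 3x – 5 with remainder 7.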
Atoms are the basic building blocks of matter. They are incredibly small particles that have been studied for centuries. Scientists have long known that atoms contain a small, dense nucleus at their center. However, the question of whether the nucleus is bigger than the rest of the atom is not as straightforward as it may seem at first glance. In this blog post, we will explore the science behind the size of atoms and their nuclei.

The Structure of an Atom
Before diving deeper into the discussion, let's briefly review the structure of an atom. Atoms are made up of three main subatomic particles: protons, neutrons, and electrons. Protons and neutrons are located in the nucleus, while electrons orbit around the nucleus in shells or energy levels. The nucleus is the central part of an atom. It contains protons, which have a positive charge, and neutrons, which have no charge. The number of protons in an atom's nucleus determines the element to which the atom belongs. For example, all atoms with six protons are carbon atoms, and all atoms with one proton are hydrogen atoms.

The Size of an Atom
The size of an atom is determined by the size of its electron cloud. An electron cloud is the region around the nucleus where electrons are most likely to be found. The size of the electron cloud is influenced by the number of electrons in an atom, as well as the energy levels of those electrons. Because electrons are negatively charged, they are attracted to the positively charged nucleus. However, they are also repelled by other electrons in the atom. As a result, the electrons in an atom are in constant motion, which creates the electron cloud. Atomic size is typically characterized using the van der Waals radius, defined as half the distance between the nuclei of two adjacent atoms that are not chemically bonded. The van der Waals radius varies depending on the element and the state of the atom.

The Size of the Nucleus
As mentioned earlier, the nucleus is the central part of an atom and contains protons and neutrons. While the electron cloud determines the size of the atom, the size of the nucleus is determined by the number of protons and neutrons it contains. The diameter of a nucleus is typically measured using the nuclear radius. This radius is defined as the distance from the center of the nucleus to its outer edge. The nuclear radius also varies depending on the element and the state of the nucleus. In general, the diameter of a nucleus is much smaller than the diameter of an atom. For instance, the diameter of a carbon atom is approximately 0.2 nanometers, while the diameter of a carbon nucleus is only a few femtometers, roughly 0.000005 nanometers. This means that the nucleus is tens of thousands of times smaller than the atom.

In conclusion, while the nucleus is an important part of the atom, it is much smaller than the rest of the atom. The size of an atom is determined by the size of its electron cloud, while the size of the nucleus is determined by the number of protons and neutrons it contains. Understanding the structure and size of atoms is essential for understanding the properties and behavior of matter. If you are interested in learning more about atoms and other scientific concepts, there are plenty of resources available online. For example, check out this video from Khan Academy on the structure of an atom: https://www.youtube.com/watch?v=8g1Xmj5YGK8.

Is a nucleus smaller than an atom?
An atom is made up of a nucleus at its center, surrounded by electrons that orbit around it. The nucleus is a tiny, dense core that contains most of the atom's mass. Inside the nucleus, there are positively charged particles called protons and electrically neutral particles called neutrons. The size of an atom is determined by the size of its electron cloud, or the area of space in which the electrons can be found. On the other hand, the nucleus of an atom is extremely small compared to the whole atom. In fact, the nucleus is smaller than the atom itself by several orders of magnitude. To put this into perspective, the nucleus of an atom contains more than 99.9% of an atom's mass, but it is less than one ten-thousandth of the atom's size. This means that the vast majority of an atom's space is empty, with only a tiny, dense nucleus at its center. This size difference between the nucleus and the rest of the atom is due to the way that particles are arranged in the atom. Electrons exist in shells surrounding the nucleus, which allows them to be far apart from each other and from the nucleus itself. The protons and neutrons, on the other hand, are densely packed together in the nucleus, giving it a much smaller size. The nucleus of an atom is indeed smaller than the atom itself. This is due to the fact that the electron cloud surrounding the nucleus occupies much of the atom's space, while the nucleus itself is a tiny, dense core containing the majority of the atom's mass. The size difference between the nucleus and the atom is significant, but it is important for understanding the structure and behavior of matter at the atomic level.

How big is the nucleus compared to the rest of the cell?
Here "nucleus" refers to the cell nucleus, a different structure from the atomic nucleus discussed above. The size of a cell can vary dramatically depending on what type of cell we are looking at. For example, an average human red blood cell is only about 8 micrometers in diameter, while a nerve cell in the human brain can be up to 100 micrometers in diameter. Within a cell, the nucleus is an especially important structure. This is where the cell's DNA, which contains the genetic information that is passed down from generation to generation, is stored. The DNA itself is very small – a DNA double helix is only about 10 nanometers wide. However, to fit all of a cell's DNA inside such a small space, it needs to be packaged in a very specific way. This packaging happens thanks to proteins that attach to the DNA and help wind it up into compact structures called chromosomes. The nucleus that surrounds the chromosomes is significantly larger than the chromosomes themselves. In fact, it can be up to 1000 times bigger than the DNA it contains. A typical nucleus might be about 10 micrometers in diameter, although this can vary depending on the type of cell and the point in the cell cycle. So, to answer the original question, we can say that the nucleus is much larger than the DNA it contains – and, by extension, larger than many other structures within the cell. However, it's important to note that the size of a cell and its structures is relative. A neuron with a large nucleus might still be much smaller overall than a muscle cell with a smaller nucleus but with many more organelles. Understanding the size and relationship of cell structures is an important part of studying cell biology.

Which is larger, a nucleus or a proton?
Atoms are the basic building blocks of all matter, and they are composed of even smaller particles known as protons, neutrons, and electrons.
The nucleus is the central core of the atom, which contains protons and neutrons, while electrons orbit the nucleus in shells or energy levels. The question of whether a nucleus or a proton is larger is a common one, and the answer depends on how you define "larger." If you're talking about sheer physical size, then the answer is clear: the nucleus is larger than the proton. The reason for this is that the nucleus contains both protons and neutrons, while a proton is just one of the two kinds of particles in the nucleus. To get a sense of the relative sizes of these particles, imagine that an atom were the size of a pond. In this scenario, the nucleus would be like a tiny object floating in the middle of the pond, while the electrons would be zipping around the edges. The protons and neutrons inside the nucleus, in turn, occupy a space about 100,000 times smaller than the entire atom. This means that the nucleus is much smaller than the overall size of the atom, but still larger than a single proton. Of course, there are other ways to define "size" beyond physical dimensions. For example, you might argue that the proton is "larger" than the nucleus in terms of its importance or influence in chemical reactions and other processes. Indeed, protons play a crucial role in determining the properties of atoms, such as their electrical charge, atomic number, and ability to interact with other atoms. The answer to whether a nucleus or a proton is larger therefore depends on how you define "larger." In terms of physical size, the nucleus is larger than a single proton because it contains both protons and neutrons. However, in terms of importance and influence on the behavior of atoms, protons are just as important as the nucleus, if not more so.
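To make the scale comparisons in this post concrete, here is a tiny Python sketch using the round, order-of-magnitude figures quoted above (illustrative values, not precise measurements):

atom_diameter_nm = 0.2          # approximate diameter of a carbon atom
nucleus_diameter_nm = 0.000005  # a few femtometers, expressed in nanometers

ratio = atom_diameter_nm / nucleus_diameter_nm
print(f"The atom is roughly {ratio:,.0f} times wider than its nucleus.")
# Prints: The atom is roughly 40,000 times wider than its nucleus.

With these figures the atom comes out roughly forty thousand times wider than its nucleus; for hydrogen, whose nucleus is a single proton, the commonly quoted factor is closer to 100,000.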
During the long Paleolithic period, bands of predatory hunter-gatherers lived in what is now China. Homo erectus, an extinct species closely related to modern humans, or Homo sapiens, appeared in China more than one million years ago. Anthropologists disagree about whether Homo erectus is the direct ancestor of Homo sapiens or merely related through a mutual ancestor. In either case, modern humans may have first appeared in China as far back as 200,000 years ago. Beginning in about 10,000 BCE, humans in China began developing agriculture, possibly influenced by developments in Southeast Asia. By 5000 BCE there were Neolithic village settlements in several regions of China. On the fine, wind-blown loess soils of the north and northwest, the primary crop was millet, while villages along the lower Yangtze River in Central China were centered on rice production in paddy fields, supplemented by fish and aquatic plants. Humans in both regions had domesticated pigs, dogs, and cattle, and by 3000 BCE sheep had become important in the north and water buffalo in the south.

Over the course of the 5th to 3rd millennia BCE, many distinct, regional Neolithic cultures emerged. In the northwest, for instance, people made red pottery vessels decorated in black pigment with designs such as spirals, sawtooth lines, and zoomorphic (animal-like) stick figures. During the same period, Neolithic cultures in the east produced pottery that was rarely painted but had distinctive shapes, such as three-legged, deep-bodied tripods. Archaeologists have uncovered numerous jade ornaments, blades, and ritual objects in several eastern sites, but jade is rare in western ones. In many areas, stamped-earth fortified walls came to be built around settlements, suggesting not only increased contact between settlements but also increased conflict. Later Chinese civilization probably evolved from the interaction of many distinct Neolithic cultures, which over time came to share more in the way of material culture and social and cultural practices. For example, many burial practices, including the use of coffins and ramped chambers, spread well beyond their place of origin.

Ancient Chinese historians knew nothing of their Neolithic forebears, whose existence was discovered by 20th-century archaeologists. Traditionally, the Chinese traced their history through many dynasties to a series of legendary rulers, like the Yellow Lord (Huang Di), who invented the key features of civilization: agriculture, the family, silk, boats, carts, bows and arrows, and the calendar. The last of these kings was Yu, and when he died the people chose his son to lead them, thus establishing the principle of hereditary, dynastic rule. Yu's descendants created the Xia dynasty (ca. 2205 BCE- 1570 BCE), which was said to have lasted for 14 generations before declining and being superseded by the Shang dynasty. The Xia dynasty may correspond to the first phases of the transition to the Bronze Age. Between 2000 BCE and 1600 BCE a more complex Bronze Age civilization emerged out of the diverse Neolithic cultures in northern China.
This civilization was marked by writing, metalwork, domestication of horses, a class system, and a stable political and religious hierarchy. Although Bronze Age civilizations developed earlier in Southwest Asia, China seems to have developed both its writing system and its bronze technology with relatively little stimulus from outside. However, other elements of early Chinese civilization, such as the spoke-wheeled horse chariot, apparently reached China indirectly from places to the west. No written documents survive to link the earliest Bronze Age sites unambiguously to Xia. With the Shang dynasty, however, the historical and archaeological records begin to coincide. Chinese accounts of the Shang rulers match inscriptions on animal bones and tortoise shells found in the 20th century at the city of Anyang in the valley of the Huang He (Yellow River).

Archaeological remains provide many details about Shang civilization. A king was the religious and political head of the society. He ruled through dynastic alliances; divination (his subjects believed that he alone could predict the future by interpreting cracks in animal bones); and royal journeys, hunts, and military campaigns that took him to outlying areas. The Shang were often at war with neighboring peoples and moved their capital several times. Shang kings could mobilize large armies for warfare and huge numbers of workers to construct defensive walls and elaborate tombs. The Shang directly controlled only the central part of China proper, extending over much of modern Henan, Hubei, Shandong, Anhui, Shanxi, and Hebei provinces. However, Shang influence extended beyond the state's borders, and Shang art motifs are often found in artifacts from more-distant regions.

The Shang king's rule was based equally on religious and military power. He played a priestly role in the worship of his ancestors and the high god Di. The king made animal sacrifices and communicated with his ancestors by interpreting the cracks on heated cattle bones or tortoise shells that had been prepared by professional diviners. Royal ancestors were viewed as able to intervene with Di, send curses, produce dreams, and assist the king in battle. Kings were buried with ritual vessels, weapons, jades, and numerous servants and sacrificial victims, suggesting that the Shang believed in some form of afterlife.

The Shang used bronze more for purposes of ritual than war. Although some weapons were made of bronze, the great bulk of the surviving Shang bronze objects are cups, goblets, steamers, and cauldrons, presumably made for use in sacrificial rituals. They were beautifully formed in a great variety of shapes and sizes and decorated with images of wild animals. As many as 200 of these bronze vessels might be buried in a single royal grave. The bronze industry required centralized coordination of a large labor force to mine, refine, and transport copper, tin, and lead ores, as well as to produce and transport charcoal. It also required technically skilled artisans to make clay models, construct ceramic molds, and assemble and finish vessels, the largest of which weighed as much as 800 kg (1,800 lb).

The writing system used by the Shang is the direct ancestor of the modern Chinese writing system, with symbols or characters for each word. This writing system would evolve over time, but it never became a purely phonetic system like the Roman alphabet, which uses symbols (letters) to represent specific sounds.
Thus mastering the written language required learning to recognize and write several thousand characters, making literacy a highly specialized skill requiring many years to master fully.

In the 11th century BCE a frontier state called Zhou rose against and defeated the Shang dynasty. The Zhou dynasty is traditionally divided into two periods: the Western Zhou (ca. 1045 BCE- 771 BCE), when the capital was near modern Xi'an in the west, and the Eastern Zhou (770 BCE- 256 BCE), when the capital was moved further east to modern Luoyang. The Eastern Zhou is divided into two sub-periods: the Spring and Autumn Period (770 BCE- 403 BCE) and the Warring States Period (403 BCE- 221 BCE), which are collectively referred to as 'China's Golden Age'.

Like the Shang kings, the Zhou kings sacrificed to their ancestors, but they also sacrificed to Heaven (Tian). The Shu jing (Book of History), one of the earliest transmitted texts, describes the Zhou's version of their history. It assumes a close relationship between Heaven and the king, called the Son of Heaven, explaining that Heaven gives the king a mandate to rule only as long as he does so in the interest of the people. Because the last Shang king had been decadent and cruel, Heaven withdrew the Mandate of Heaven (Tian Ming) from him and entrusted it to the virtuous Zhou kings. The Shu jing praises the first three Zhou rulers: King Wen (the Cultured King) expanded the Zhou domain; his son, King Wu (the Martial King), conquered the Shang; and King Wu's brother, Zhou Gong (often referred to as Duke of Zhou), consolidated the conquest and served as loyal regent for Wu's heir.

The Shi jing (Book of Poetry) offers another glimpse of life in early Zhou China. Its 305 poems include odes celebrating the exploits of the early Zhou rulers, hymns for sacrificial ceremonies, and folk songs. The folk songs are about ordinary people in everyday situations, such as working in fields, spinning and weaving, marching on campaigns, and longing for lovers. In these books, which became classics of the Confucian tradition, the Western Zhou dynasty is described as an age when people honored family relationships and stressed social status distinctions.

The early Zhou rulers did not attempt to exercise direct control over the entire region they conquered. Instead, they secured their position by selecting loyal supporters and relatives to rule walled towns and the surrounding territories. Each of these local rulers, or vassals, was generally able to pass his position on to a son, so that in time the domain became a hereditary vassal state. Within each state, there were noble houses holding hereditary titles. The rulers of the states and the members of the nobility were linked both to one another and to their ancestors by bonds of obligation based on kinship. Below the nobility were the officers (shi) and the peasants, both of which were also hereditary statuses. The relationship between each level and its superiors was conceived as a moral one. Peasants served their superiors, and their superiors looked after the peasants' welfare. Social interaction at the upper levels was governed by li, a set of complex rules of social etiquette and personal conduct. Those who practiced li were considered civilized; those who did not, such as those outside the Zhou realm, were considered barbarians. The Zhou kings maintained control over their vassals for more than two centuries, but as the generations passed, the ties of kinship and vassalage weakened.
In 770 BCE several of the states rebelled and joined with non-Chinese forces to drive the Zhou from their capital. The Zhou established a new capital to the east at Chengzhou (near present-day Luoyang), where they were safer from barbarian attack, but the Eastern Zhou kings no longer exercised much political or military authority over the vassal states. In the Eastern Zhou period, real power lay with the larger states, although the Zhou kings continued as nominal overlords, partly because they were recognized as custodians of the Mandate of Heaven, but also because no single feudal state was strong enough to dominate the others.

The Eastern Zhou period witnessed various social and economic advances. The use of iron-tipped, ox-drawn plows and improved irrigation techniques produced higher agricultural yields. This in turn supported a steady population increase. Other economic advances included the circulation of coins for money, the beginning of private ownership of land, and the growth of cities. Military technology also advanced. The Zhou developed the crossbow and methods of siege warfare, and adopted cavalry warfare from nomads to the north. Social changes were just as important, particularly the breakdown of old class barriers and the development of conscripted infantry armies. To maintain and increase power, state rulers sought the advice of teachers and strategists. This fueled intellectual activity and debate, and intense reappraisal of traditions. Though this time in Chinese history was marked by disunity and civil strife, an unprecedented era of cultural prosperity flourished: the "golden age" of China.

The atmosphere of reform and new ideas was attributed to the struggle for survival among warring regional lords who competed in building strong and loyal armies and in increasing economic production to ensure a broader base for tax collection. To effect these economic, military, and cultural developments, the regional lords needed ever-increasing numbers of skilled, literate officials and teachers, the recruitment of whom was based on merit. Also during this time, commerce was stimulated through the introduction of coinage and technological improvements. Iron came into general use, making possible not only the forging of weapons of war but also the manufacture of farm implements. Public works on a grand scale, such as flood control, irrigation projects, and canal digging, were executed. Enormous walls were built around cities and along the broad stretches of the northern frontier.

So many different philosophies developed during the late Spring and Autumn and early Warring States periods that the era is often known as the time when the "hundred schools of thought contended." From the Hundred Schools of Thought came many of the great classical writings on which Chinese practices were to be based for the next two and one-half millennia. Many of the thinkers were itinerant intellectuals who, besides teaching their disciples, were employed as advisers to one or another of the various state rulers on the methods of government, war, and diplomacy. There were thinkers fascinated by logical puzzles; utopians and hermits who argued for withdrawal from public life; agriculturists who argued that no one should eat who does not plough; military theorists who analyzed ways to deceive the enemy; and cosmologists who developed theories of the forces of nature, including the opposite and complementary forces of yin and yang.
The three most influential schools of thought that evolved during this period were Confucianism, Daoism, and Legalism. The body of thought that had the most enduring effect on subsequent Chinese life was that of the School of Literati (ru), often called the Confucian school in the West. The written legacy of the School of Literati is embodied in the Confucian Classics, which were to become the basis for the order of traditional society. Kongfuzi, or Confucius as he is known in the West, lived from 551 BCE to 479 BCE. Also called Kong Zi, or Master Kong, Confucius was a teacher from the state of Lu (in present-day Shandong Province) who revered tradition and looked to the early days of Zhou rule for an ideal social and political order. He believed that the only way such a system could be made to work properly was for each person to act according to prescribed relationships. "Let the ruler be a ruler and the subject a subject," he said, but he added that to rule properly a king must be virtuous. To Confucius, the functions of government and social stratification were facts of life to be sustained by ethical values.

Confucius exalted virtues such as filial piety (reverent respect and obedience toward parents and grandparents), humanity (an unselfish concern for the welfare of others), integrity, and a sense of duty. His ideal was the junzi (ruler's son), which he redefined to mean "gentleman": a superior man was a man of moral cultivation rather than a man of noble birth. He repeatedly urged his students to aspire to be gentlemen who pursue integrity and duty, rather than petty men who pursue personal gain. Confucius's teachings are known through the Lunyu (Analects), a collection of his conversations compiled by his followers after his death. He encouraged his disciples to master historical records, music, poetry, and ritual. He tried in vain to gain high office, traveling from state to state with his disciples in search of a ruler who would employ him. Confucius talked repeatedly of his vision of a more perfect society in which rulers and subjects, nobles and commoners, parents and children, and men and women would wholeheartedly accept the parts assigned to them, devoting themselves to their responsibilities to others.

There were to be accretions to the corpus of Confucian thought, both immediately and over the millennia, and from within and outside the Confucian school. Interpretations made to suit or influence contemporary society made Confucianism dynamic while preserving a fundamental system of model behavior based on ancient texts. The eventual success of Confucian ideas owes much to Confucius's followers in the two centuries after his death, particularly to Mencius and Xun Zi.

Mencius (372 BCE- 289 BCE), or Meng Zi, was a Confucian disciple who made major contributions to the humanism of Confucian thought. Mencius, like Confucius, traveled to various states, offering advice to their rulers. He expounded the idea that a ruler who governed benevolently would earn the respect of the people and would unify the realm, that a ruler could not govern without the people's tacit consent, and that the penalty for unpopular, despotic rule was the loss of the "mandate of heaven." Mencius proposed concrete political and financial measures for easing tax burdens and otherwise improving the people's lot. With his disciples and fellow philosophers, he discussed other issues in moral philosophy.
Mencius declared that man was by nature good, arguing strongly that everyone is born with the capacity to recognize what is right and act upon it. The effect of the combined work of Confucius, the codifier and interpreter of a system of relationships based on ethical behavior, and Mencius, the synthesizer and developer of applied Confucian thought, was to provide traditional Chinese society with a comprehensive framework on which to order virtually every aspect of life.

Diametrically opposed to Mencius, for example, was the interpretation of Xun Zi (ca. 300 BCE-237 BCE), another Confucian follower. Xun Zi preached that man is innately selfish and evil and that goodness is attainable only through education and conduct befitting one's status, through which people learn to put moral principle above their own interests. He also argued that the best government is one based on authoritarian control, not ethical or moral persuasion. Xun Zi stressed the importance of ritual to social and political life, but took a secular view of it. For instance, Xun Zi argued that the ruler should pray for rain during a drought because to do so is the traditional ritual, not because it moves Heaven to send rain.

Xun Zi's unsentimental and authoritarian inclinations were developed into the doctrine embodied in the School of Law (fa), or Legalism. Legalism differed from both Confucianism and Daoism in its narrow focus on statecraft. The doctrine was formulated by Han Fei Zi (ca. 280 BCE- 233 BCE) and Li Si (d. 208 BCE), who reasoned that the extreme disorders of their day called for new and drastic measures. They argued that social order depended on effective systems of rewards and punishments, rejecting the Confucian theory that strong government depended on the moral quality of the ruler and his officials and their success in winning over the people. To ensure his power, the ruler had to keep his officials in line with strict rules and regulations and his people obedient with predictably enforced laws. The Legalists exalted the state and sought its prosperity and martial prowess above the welfare of the common people. Legalism became the philosophic basis for the imperial form of government. When the most practical and useful aspects of Confucianism and Legalism were synthesized in the Han period (206 BCE- CE 220), a system of governance came into existence that was to survive largely intact until the late 19th century.

The doctrines of Taoism (Daoism), the second great school of philosophy to emerge during the Warring States Period of the Zhou dynasty, are set forth in the Daodejing (Classic of the Way and Its Power), which is attributed traditionally to the legendary sage Lao Zi (ca. 579 BCE- 490 BCE), or Old Master, and in the compiled writings of Zhuangzi (369 BCE- 286 BCE). Both works share a disapproval of the unnatural and artificial. Whereas plants and animals act spontaneously in the ways appropriate to them, humans have separated themselves from the Way (Dao) by plotting and planning, analyzing and organizing. Both texts reject social conventions and call for an ecstatic surrender to the spontaneity of cosmic processes. At the political level, Daoism advocated a return to primitive agricultural communities, in which life could follow the most natural course. Government policy should be one of extreme noninterference, permitting the people to respond to nature spontaneously. The Zhuangzi is much longer than the Daodejing.
A literary masterpiece, it is full of tall tales, parables, and fictional encounters between historical figures. Zhuangzi poked fun at people mired in everyday affairs and urged people to see death as part of the natural cosmic processes. The focus of Taoism is the individual in nature rather than the individual in society. It holds that the goal of life for each individual is to find one's own personal adjustment to the rhythm of the natural (and supernatural) world, to follow the Dao of the universe. In many ways the opposite of rigid Confucian moralism, Taoism served many of its adherents as a complement to their ordered daily lives. A scholar on duty as an official would usually follow Confucian teachings but at leisure or in retirement might seek harmony with nature as a Taoist recluse. Another strain of thought dating to the Warring States Period is the school of yin-yang and the five elements. The theories of this school attempted to explain the universe in terms of basic forces in nature, the complementary agents of yin (dark, cold, female, negative) and yang (light, hot, male, positive) and the five elements (water, fire, wood, metal, and earth). In later periods these theories came to have importance both in philosophy and in popular belief. Still another school of thought was based on the doctrine of Mo Zi (ca. 470–391 BCE), or Mo Di. Mo Zi believed that "all men are equal before God" and that mankind should follow heaven by practicing universal love. Advocating that all action must be utilitarian, Mo Zi condemned the Confucian emphasis on ritual and music. He regarded warfare as wasteful and advocated pacifism. Mo Zi also believed that unity of thought and action was necessary to achieve social goals. He maintained that the people should obey their leaders and that the leaders should follow the will of heaven. Although Mohism failed to establish itself as a major school of thought, its views are said to be "strongly echoed" in Legalist thought. In general, the teachings of Mo Zi left an indelible impression on the Chinese mind.
THE IMPERIAL ERA
Much of what came to constitute China Proper was unified for the first time in 221 BCE. In that year the western frontier state of Qin, the most aggressive of the Warring States, subjugated the last of its rival states. (Qin in Wade-Giles Romanization is Ch'in, from which the English China probably derived.) Once the king of Qin consolidated his power, he took the title Shi Huangdi (First Emperor), a formulation previously reserved for deities and the mythological sage-emperors, and imposed Qin's centralized, nonhereditary bureaucratic system on his new empire. In subjugating the six other major states of Eastern Zhou, the Qin kings had relied heavily on Legalist scholar-advisers. Centralization, achieved by ruthless methods, was focused on standardizing legal codes and bureaucratic procedures, the forms of writing and coinage, and the pattern of thought and scholarship. To silence criticism of imperial rule, the kings banished or put to death many dissenting Confucian scholars and confiscated and burned their books. Qin aggrandizement was aided by frequent military expeditions pushing forward the frontiers in the north and south. To fend off barbarian intrusion, the fortification walls built by the various warring states were connected to make a 5,000-kilometer-long great wall.
(What is commonly referred to as the Great Wall is actually four great walls rebuilt or extended during the Western Han, Sui, Jin, and Ming periods, rather than a single, continuous wall. At its extremities, the Great Wall reaches from northeastern Heilongjiang Province to northwestern Gansu.) A number of public works projects were also undertaken to consolidate and strengthen imperial rule. These activities required enormous levies of manpower and resources, not to mention repressive measures. Revolts broke out as soon as the first Qin emperor died in 210 BCE. His dynasty was extinguished less than twenty years after its triumph. The imperial system initiated during the Qin dynasty, however, set a pattern that was developed over the next two millennia. After a short civil war, a new dynasty, called Han (206 BCE–220 CE), emerged with its capital at Chang'an. The new empire retained much of the Qin administrative structure but retreated a bit from centralized rule by establishing vassal principalities in some areas for the sake of political convenience. The Han rulers modified some of the harsher aspects of the previous dynasty; Confucian ideals of government, out of favor during the Qin period, were adopted as the creed of the Han empire, and Confucian scholars gained prominent status as the core of the civil service. A civil service examination system also was initiated. Intellectual, literary, and artistic endeavors revived and flourished. The Han period produced China's most famous historian, Sima Qian (ca. 145–87 BCE), whose Shiji (Historical Records) provides a detailed chronicle from the time of a legendary Xia emperor to that of the Han emperor Wu Di (r. 141–87 BCE). Technological advances also marked this period. Two of the great Chinese inventions, paper and porcelain, date from Han times. The Han dynasty, after which the members of the ethnic majority in China, the "people of Han," are named, was notable also for its military prowess. The empire expanded westward as far as the rim of the Tarim Basin (in modern Xinjiang-Uyghur Autonomous Region), making possible relatively secure caravan traffic across Central Asia to Antioch, Baghdad, and Alexandria. The paths of caravan traffic are often called the "silk route" because the route was used to export Chinese silk to the Roman Empire. Chinese armies also invaded and annexed parts of northern Vietnam and northern Korea toward the end of the 2nd century BCE. Han control of peripheral regions was generally insecure, however. To ensure peace with non-Chinese local powers, the Han court developed a mutually beneficial "tributary system." Non-Chinese states were allowed to remain autonomous in exchange for symbolic acceptance of Han overlordship. Tributary ties were confirmed and strengthened through intermarriages at the ruling level and periodic exchanges of gifts and goods. After 200 years, Han rule was interrupted briefly (in 9–24 CE by Wang Mang, a reformer), and then restored for another 200 years. The Han rulers, however, were unable to adjust to what centralization had wrought: a growing population, increasing wealth and resultant financial difficulties and rivalries, and ever-more complex political institutions. Riddled with the corruption characteristic of the dynastic cycle, the Han empire collapsed by 220 CE. The collapse of the Han dynasty was followed by nearly four centuries of rule by warlords.
The age of civil wars and disunity began with the era of the Three Kingdoms (Wei, Shu, and Wu, which had overlapping reigns during the period 220–280 CE). In later times, fiction and drama greatly romanticized the reputed chivalry of this period. Unity was restored briefly in the early years of the Jin dynasty (265–420 CE), but the Jin could not long contain the invasions of the nomadic peoples. In 317 CE, the Jin court was forced to flee from Luoyang and reestablished itself at Nanjing to the south. The transfer of the capital coincided with China's political fragmentation into a succession of dynasties that was to last from 304 CE to 589 CE. During this period the process of sinicization accelerated among the non-Chinese arrivals in the north and among the aboriginal tribesmen in the south. This process was also accompanied by the increasing popularity of Buddhism (introduced into China in the 1st century CE) in both north and south China. Despite the political disunity of the times, there were notable technological advances. The invention of gunpowder (at that time for use only in fireworks) and the wheelbarrow is believed to date from the 6th or 7th century. Advances in medicine, astronomy, and cartography are also noted by historians. China was reunified in 589 CE by the short-lived Sui dynasty (581–617 CE), which has often been compared to the earlier Qin dynasty in tenure and the ruthlessness of its accomplishments. The Sui dynasty's early demise was attributed to the government's tyrannical demands on the people, who bore the crushing burden of taxes and compulsory labor. These resources were overstrained in the completion of the Grand Canal--a monumental engineering feat--and in the undertaking of other construction projects, including the reconstruction of the Great Wall. Weakened by costly and disastrous military campaigns against Korea in the early seventh century, the dynasty disintegrated through a combination of popular revolts, disloyalty, and assassination. The Tang dynasty (618–907 CE), with its capital at Chang'an, is regarded by historians as a high point in Chinese civilization--equal, or even superior, to the Han period. Its territory, acquired through the military exploits of its early rulers, was greater than that of the Han. Stimulated by contact with India and the Middle East, the empire saw a flowering of creativity in many fields. Buddhism, originating in India around the time of Confucius, flourished during the Tang period, becoming thoroughly sinicized and a permanent part of Chinese traditional culture. Block printing was invented, making the written word available to vastly greater audiences. The Tang period was the golden age of literature and art. A government system supported by a large class of Confucian literati selected through civil service examinations was perfected under Tang rule. This competitive procedure was designed to draw the best talents into government. But perhaps an even greater consideration for the Tang rulers, aware that imperial dependence on powerful aristocratic families and warlords would have destabilizing consequences, was to create a body of career officials having no autonomous territorial or functional power base. As it turned out, these scholar-officials acquired status in their local communities, family ties, and shared values that connected them to the imperial court.
From Tang times until the closing days of the Qing empire in 1911, scholar-officials functioned often as intermediaries between the grass-roots level and the government. By the middle of the 8th century CE, Tang power had ebbed. Domestic economic instability and military defeat in 751 by Arabs at Talas, in Central Asia, marked the beginning of five centuries of steady military decline for the Chinese empire. Misrule, court intrigues, economic exploitation, and popular rebellions weakened the empire, making it possible for northern invaders to terminate the dynasty in 907. The next half-century saw the fragmentation of China into five northern dynasties and ten southern kingdoms. But in 960 a new power, Song (960–1279), reunified most of China Proper. The Song period divides into two phases: Northern Song (960–1127) and Southern Song (1127–1279). The division was caused by the forced abandonment of north China in 1127 by the Song court, which could not push back the nomadic invaders. The founders of the Song dynasty built an effective centralized bureaucracy staffed with civilian scholar-officials. Regional military governors and their supporters were replaced by centrally appointed officials. This system of civilian rule led to a greater concentration of power in the emperor and his palace bureaucracy than had been achieved in the previous dynasties. The Song dynasty is notable for the development of cities not only for administrative purposes but also as centers of trade, industry, and maritime commerce. The landed scholar-officials, sometimes collectively referred to as the gentry, lived in the provincial centers alongside the shopkeepers, artisans, and merchants. A new group of wealthy commoners--the mercantile class--arose as printing and education spread, private trade grew, and a market economy began to link the coastal provinces and the interior. Landholding and government employment were no longer the only means of gaining wealth and prestige. Culturally, the Song refined many of the developments of the previous centuries. Included in these refinements were not only the Tang ideal of the universal man, who combined the qualities of scholar, poet, painter, and statesman, but also historical writings, painting, calligraphy, and hard-glazed porcelain. Song intellectuals sought answers to all philosophical and political questions in the Confucian Classics. This renewed interest in the Confucian ideals and society of ancient times coincided with the decline of Buddhism, which the Chinese regarded as foreign and offering few practical guidelines for the solution of political and other mundane problems. The Song Neo-Confucian philosophers, finding a certain purity in the originality of the ancient classical texts, wrote commentaries on them. The most influential of these philosophers was Zhu Xi (1130–1200), whose synthesis of Confucian thought and Buddhist, Taoist, and other ideas became the official imperial ideology from late Song times to the late 19th century. As incorporated into the examination system, Zhu Xi's philosophy evolved into a rigid official creed, which stressed the one-sided obligations of obedience and compliance of subject to ruler, child to father, wife to husband, and younger brother to elder brother. The effect was to inhibit the societal development of pre-modern China, resulting both in many generations of political, social, and spiritual stability and in a slowness of cultural and institutional change up to the nineteenth century.
Neo-Confucian doctrines also came to play the dominant role in the intellectual life of Korea, Vietnam, and Japan. By the mid-thirteenth century, the Mongols had subjugated north China, Korea, and the Muslim kingdoms of Central Asia and had twice penetrated Europe. With the resources of his vast empire, Kublai Khan (1215–1294), a grandson of Genghis Khan (ca. 1167–1227) and the supreme leader of all Mongol tribes, began his drive against the Southern Song. Even before the extinction of the Song dynasty, Kublai Khan had established the first alien dynasty to rule all China: the Yüan (1279–1368).
Area of Rectangle Formula Class 7
The area of a rectangle is the total space covered by it. The formula for the area of a rectangle can be easily memorized if the underlying logic is understood well. This article presents a precise summary of the area of rectangle formula class 7 that will ensure that students understand the concept of calculating the area of a rectangle and its application, along with some effective tips to remember the area of rectangle formula for class 7.
List of Area of Rectangle Formula Class 7
The following points present a summary of the concepts and formulas related to the area of a rectangle. - The opposite sides of a rectangle run parallel to each other, while the adjacent sides stand perpendicular to each other. - The area of a rectangle is the product of the measurements of its two adjacent sides. - Area of rectangle = Length × Breadth - Since the calculation of area involves the multiplication of two length measurements, a rectangle's area is always expressed in square units. - The perimeter of a rectangle is the total length of all its sides. - Perimeter of a rectangle = 2 × (Length + Breadth)
Applications of Area of Rectangle Formula Class 7
Rectangles are the most commonly used shapes around us. The two main dimensions of a rectangle are its area and its perimeter. The following examples show how these parameters have applications in the real world. - The ability to calculate a rectangle's area simplifies numerous domestic tasks in everyday life. For example, to paint a rectangular wall, we need to calculate the area of the wall to determine how much paint is required. - For rectangular farms, farmers use the farm area to figure out how many seeds and how much manure they need. - In order to build a house on a rectangular piece of land, the area of the land is required to estimate the quantity of raw material.
Tips to Memorize Area of Rectangle Formula Class 7
Sometimes memorizing formulas becomes difficult and students tend to forget them due to nervousness. The following points list a few tips to memorize these formulas easily. - Students should try to solve all the solved examples in their textbooks, as these examples are meant to cover all the important basic concepts. After completing the solved examples they can move on to the exercise questions. Consistent practice will help students keep the area of rectangle formula class 7 at their fingertips without having to mug it up. - Students can also download formula images from the internet and use them as a screensaver on their digital gadgets. As a result, they will be reminded to glance at the formulas throughout the day anytime they use their device, resulting in speedy revision. - Studying with friends can also help students clear up their doubts with each other. This will not only make learning easy and fun but will also result in the sharing of problem-solving ideas that will help them in examinations.
Area of Rectangle Formula Class 7 Examples
Example 1: What is the area of a rectangular park whose length and breadth are 40 m and 35 m respectively? Solution: The area of a rectangle formula is given as Length × Breadth. Substituting the given values we get, Area of the rectangular park = 40 × 35 = 1400 m². Example 2: If the area of a rectangle is 80 square units and its length is 10 units, what is its breadth? Also, find the perimeter of the rectangle.
Solution: We know the area of rectangle formula, Area of Rectangle = Length × Breadth. Given, area of rectangle = 80 square units and Length = 10 units. Substituting the given values, we can find the breadth: 80 = 10 × Breadth, so Breadth = 8 units. Now that we know the length and breadth of the rectangle, the perimeter can be easily calculated. Perimeter of Rectangle = 2 × (Length + Breadth) = 2 × (10 + 8) = 2 × 18 = 36 units. (Note that perimeter is a length, so it is measured in plain units, not square units.)
FAQs on Area of Rectangle Formula Class 7
What are the Important Formulas for Area of Rectangle Formula Class 7? The important formulas that are used to calculate the area of rectangle formula class 7 are given below. - Area of rectangle = Length × Breadth. This formula is used when the length and breadth are known. - Perimeter of the rectangle = 2 × (Length + Breadth). This formula is used in those cases when one of the dimensions is not given. For example, if the perimeter and length are known, we can find the breadth of the rectangle using this formula and then find the area.
What are the Basic Formulas in Area of Rectangle Formula Class 7? There are two basic formulas that are used to find the area of a rectangle. The first one is the direct formula, used when the length and breadth are known: Area of rectangle = Length × Breadth. The other one is the perimeter formula, which can be used indirectly to find an unknown dimension: Perimeter of rectangle = 2 × (Length + Breadth). After finding the unknown value, the area can be easily calculated.
What are the important formulas covered in area of rectangle formula class 7? The important formulas covered in area of rectangle formula class 7 are explained in this article. The area of a rectangle can be calculated by simply multiplying its length and breadth, while its perimeter is the sum of all its outer sides, which is equal to twice the sum of its length and breadth. The first, Area of rectangle = Length × Breadth, is a direct formula for the area when the dimensions are known; the other, Perimeter of rectangle = 2 × (Length + Breadth), can be used to find an unknown length or breadth.
How Many Formulas are there in area of rectangle formula class 7? There are mainly two important formulas that are used to find the area of a rectangle: the area of rectangle formula and the perimeter of rectangle formula. Students should go through this article to get a clear understanding of the area and perimeter of the rectangle. This article also gives some practical tips which, if followed, will help students excel in their examinations.
How can I Memorize area of rectangle formula class 7? The following tips can be used to memorize the area of rectangle formula class 7. - Students should attempt to solve all of the solved examples in their textbooks, as these examples are intended to cover all of the fundamental topics. - Students can also download formula images from the internet to use as a screensaver on their electronic devices. This will help them revise consistently every day. - Students might also benefit from studying with friends by clearing up their doubts with one another. This will not only make studying easier and more enjoyable, but it will also lead to the sharing of problem-solving ideas that will help them in their exams.
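Both formulas are simple enough to check with a few lines of code. The following is a minimal Python sketch (not part of the class 7 syllabus; the function names are our own, chosen only for illustration) that reproduces the two worked examples above:

```python
# A minimal sketch of the two class 7 rectangle formulas.
# Function names are illustrative, not from any textbook.

def rectangle_area(length, breadth):
    """Area of rectangle = Length x Breadth (in square units)."""
    return length * breadth

def rectangle_perimeter(length, breadth):
    """Perimeter of rectangle = 2 x (Length + Breadth) (in plain units)."""
    return 2 * (length + breadth)

# Example 1: a 40 m by 35 m park.
print(rectangle_area(40, 35))            # 1400 (square metres)

# Example 2: area 80 and length 10 -> breadth, then perimeter.
breadth = 80 / 10                        # rearranging Area = Length x Breadth
print(breadth)                           # 8.0 (units)
print(rectangle_perimeter(10, breadth))  # 36.0 (units, not square units)
```

Running it prints 1400, 8.0, and 36.0, matching Examples 1 and 2.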
A rectangular prism is a name for a 6-sided 3-dimensional figure that is very familiar to everybody—a box. Think of a brick, or a shoebox, and you know exactly what a rectangular prism is. The surface area is the amount of space on the outside of the object. "How much paper do I need to wrap this shoebox" sounds a lot less complicated, but it's exactly the same math problem.
Part 1 of 2: Finding the Surface Area
1. Label the length, width, and height of your rectangular prism. Each rectangular prism has a length, a width, and a height. Draw a picture of the prism, and write the symbols l, w, and h next to three different edges of the shape. - If you're not sure which sides to label, pick any corner. Label the three lines that meet at that corner. - For example: A box has a base that measures 3 inches by 4 inches, and it stands 5 inches tall. The long side of the base is 4 inches, so l = 4, w = 3, and h = 5.
2. Look at the six faces of the prism. To cover the whole surface area, you'd need to paint six different "faces." Think about each one — or find a box of cereal and look at them directly.
3. Find the area of the bottom face. To start out, let's find the surface area of just one face: the bottom. This is a rectangle, just like every face. One edge of the rectangle is labeled length and the other is labeled width. To find the area of the rectangle, just multiply the two edges together. Area (bottom face) = length times width = lw. - Going back to our example, the area of the bottom face is 4 inches x 3 inches = 12 square inches.
4. Find the area of the top face. Wait a second — we already noticed that the top and bottom faces are the same size. This must also have an area of lw. - In our example, the top area is also 12 square inches.
5. Find the area of the front and back faces. Go back to your diagram and look at the front face: the one with one edge labeled width and one labeled height. The area of the front face = width times height = wh. The area of the back is also wh. - In our example, w = 3 inches and h = 5 inches, so the area of the front is 3 inches x 5 inches = 15 square inches. The area of the back face is also 15 square inches.
6. Find the area of the left and right faces. We've just got two faces left, each the same size. One edge is the length of the prism, and one edge is the height of the prism. The area of the left face is lh and the area of the right face is also lh. - In our example, l = 4 inches and h = 5 inches, so the area of the left face = 4 inches x 5 inches = 20 square inches. The area of the right face is also 20 square inches.
7. Add the six areas together. Now you've found the area of each of the six faces. Add them all together to get the area of the whole shape: lw + lw + wh + wh + lh + lh. You can use this formula for any rectangular prism, and you will always get the surface area. - To finish our example, just add up all the numbers above: 12 + 12 + 15 + 15 + 20 + 20 = 94 square inches.
Part 2 of 2: Making the Formula Shorter
1. Simplify the formula. You now know enough to find the surface area of any rectangular prism. You can do it faster if you've learned some basic algebra.
Start with our equation above: Area of a rectangular prism = lw + lw + wh + wh + lh + lh. If we combine all the terms that are the same, we get: - Area of a rectangular prism = 2lw + 2wh + 2lh
2. Factor out the two. If you know how to factor in algebra, you can make it even shorter: - Area of a Rectangular Prism = 2lw + 2wh + 2lh = 2(lw + wh + lh).
3. Test it on an example. Let's go back to our example box, with length 4, width 3, and height 5. Plug these numbers into the formula: - Area = 2(lw + wh + lh) = 2 x (4x3 + 3x5 + 4x5) = 2 x (12 + 15 + 20) = 2 x 47 = 94 square inches. That's the same answer we got before. Once you've practiced doing these equations, this is a much faster way to find the surface area.
Question: How do I find the surface area of a prism whose length is unknown or represented by x? Top Answerer: Multiply x by the width and then by the height.
Question: How do I find the edge lengths for a rectangular prism with a surface area of 92 m²? Top Answerer: You can't find them without having additional information.
Question: How do I find the total surface area of a triangular prism? Community Answer: Start off with the formula for the area of a triangle: 1/2bh = a (one half of base times height equals area). Also, you'll need to know how to find the area of a rectangle: lw = a (length times width equals area). Make a net of the prism. If the length and width of the prism are, say, l = 4 and w = 6, the bottom rectangle in the center should be 4 x 6 (area = 24 sq. units). Next, do the other two rectangles. (Cheat: they're always the same area as the base!) Now, find the area of the triangle. Say the height = 4. We know w = 6, so we multiply 4 x 6. Now we multiply that by 1/2 (divide by 2). Do the same for the other one, then add them all up.
Question: How do I find the surface area of a triangular pyramid? Community Answer: Find the area of each triangle (1/2 base x height), then add them together. If all the triangles are congruent, you can just find the area of one and multiply it by 4.
Question: What is the measure of each edge of a rectangular prism if the surface area is 600? Community Answer: Without knowing more about the rectangular prism and the ratio of the side measurements, it would be difficult to calculate. However, if you are talking about a cube in particular, every side would measure 10 units if the surface area is 600 square units.
Question: If the surface area, length, and width are given, how do I determine the height of a rectangular prism? Top Answerer: Multiply length by width, and double it. That gives you the total area of the top and bottom surfaces of the prism. Subtract that from the total surface area you were given. That leaves you with the total area of the four sides of the prism. Set that number aside for a moment. Add the length and width together and double it. That gives you the perimeter around either the top or bottom surface. Divide that perimeter into the total area of the four sides you figured out a moment ago. That gives you the height.
Question: How do I find the length (base) of a rectangular prism if I am given the height, width, and total surface area? Top Answerer: Multiply the width by the height. Double that number. That gives the total surface area of the two smaller ends of the prism. Subtract that figure from the total surface area of the prism. That leaves you with the total area of the other four surfaces. Let's call them the four main surfaces. Add the height to the width and double that figure.
That gives you the perimeter around the four main surfaces. Divide that perimeter into the total area of the four main surfaces. That gives you the length of the prism.
Question: How do I write an equation representing the area of a rectangle? Top Answerer: Area equals length multiplied by width.
Question: How do I determine the area of the left side of a triangular prism when only given the area of the rectangle in front? Community Answer: You can't without more information.
Question: What is the surface area of a right rectangular prism with dimensions of 4 cm by 4 cm by 6 cm? Community Answer: Top and bottom: 16 cm² each. The four 4 cm x 6 cm sides: 24 cm² each. Total surface area = 2 x 16 + 4 x 24 = 128 cm².
- Areas always use "square units," like square inches or square centimeters. A square inch is just what it sounds like: a square that's one inch wide and one inch long. If a prism has a surface area of 50 square inches, that means it takes 50 of those squares to cover every surface on the prism. - Some teachers use "breadth" or "depth" instead of one of these names. That's fine, as long as you label each side clearly. - If you don't know which way up the prism is, you can call any side the height. The length is usually the longest side, but even that's not really important. As long as you stick with the same names for the whole problem, you'll be fine.
Sources: - https://www.mathsisfun.com/definitions/rectangular-prism.html - http://www.math.com/tables/geometry/surfareas.htm - http://education.seattlepi.com/surface-area-rectangular-prism-fifth-graders-5826.html - https://www.aaamath.com/geo79_x9.htm - https://www.khanacademy.org/math/pre-algebra/measurement/area-basics/v/introduction-to-area-and-unit-squares - http://thinkmath.edc.org/resource/measurement-length-width-height-depth
About This Article
To find the surface area of a rectangular prism, measure the length, width, and height of the prism. Find the area of the top and bottom faces by multiplying the length and width of the prism. Then, calculate the area of the left and right faces by multiplying the width and height. Finally, find the area of the front and back faces by multiplying the length and height of the prism. To find the surface area, simply add all 6 of these areas together and write your result in square units. If you want to learn how to simplify your formulas to make them easier to remember, keep reading the article!
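Since the shortened formula 2(lw + wh + lh) and the height-recovery method in the Q&A above are both pure arithmetic, here is a small Python sketch of them (illustrative only; the function names are ours, not from the article):

```python
# Illustrative sketch of the shortened formula 2(lw + wh + lh) and of
# the Q&A method for recovering the height from a known surface area.

def surface_area(l, w, h):
    """Total surface area of a rectangular prism."""
    return 2 * (l * w + w * h + l * h)

def height_from_area(area, l, w):
    """Subtract the top and bottom faces, then divide what remains by
    the perimeter of the base -- the subtract-then-divide steps
    described in the answer above."""
    top_and_bottom = 2 * l * w
    base_perimeter = 2 * (l + w)
    return (area - top_and_bottom) / base_perimeter

print(surface_area(4, 3, 5))       # 94, the worked shoebox example
print(height_from_area(94, 4, 3))  # 5.0, recovering the height
```

The first call reproduces the 4 x 3 x 5 box example (94 square inches); the second inverts the formula to recover the height of 5.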
Nervous System Anatomy
Nervous Tissue
The majority of the nervous system is tissue made up of two classes of cells: neurons and neuroglia. - Neurons. Neurons, also known as nerve cells, communicate within the body by transmitting electrochemical signals. Neurons look quite different from other cells in the body due to the many long cellular processes that extend from their central cell body. The cell body is the roughly round part of a neuron that contains the nucleus, mitochondria, and most of the cellular organelles. Small tree-like structures called dendrites extend from the cell body to pick up stimuli from the environment, other neurons, or sensory receptor cells. Long transmitting processes called axons extend from the cell body to send signals onward to other neurons or effector cells in the body. There are 3 basic classes of neurons: afferent neurons, efferent neurons, and interneurons. - Afferent neurons. Also known as sensory neurons, afferent neurons transmit sensory signals to the central nervous system from receptors in the body. - Efferent neurons. Also known as motor neurons, efferent neurons transmit signals from the central nervous system to effectors in the body such as muscles and glands. - Interneurons. Interneurons form complex networks within the central nervous system to integrate the information received from afferent neurons and to direct the function of the body through efferent neurons. - Neuroglia. Neuroglia, also known as glial cells, act as the "helper" cells of the nervous system. Each neuron in the body is surrounded by anywhere from 6 to 60 neuroglia that protect, feed, and insulate the neuron. Because neurons are extremely specialized cells that are essential to body function and almost never reproduce, neuroglia are vital to maintaining a functional nervous system. The brain, a soft, wrinkled organ that weighs about 3 pounds, is located inside the cranial cavity, where the bones of the skull surround and protect it. The approximately 100 billion neurons of the brain form the main control center of the body. The brain and spinal cord together form the central nervous system (CNS), where information is processed and responses originate. The brain, the seat of higher mental functions such as consciousness, memory, planning, and voluntary actions, also controls lower body functions such as the maintenance of respiration, heart rate, blood pressure, and digestion.
Spinal Cord
The spinal cord is a long, thin mass of bundled neurons that carries information through the vertebral cavity of the spine beginning at the medulla oblongata of the brain on its superior end and continuing inferiorly to the lumbar region of the spine. In the lumbar region, the spinal cord separates into a bundle of individual nerves called the cauda equina (due to its resemblance to a horse's tail) that continues inferiorly to the sacrum and coccyx. The white matter of the spinal cord functions as the main conduit of nerve signals to the body from the brain. The grey matter of the spinal cord integrates reflexes to stimuli. Nerves are bundles of axons in the peripheral nervous system (PNS) that act as information highways to carry signals between the brain and spinal cord and the rest of the body. Each axon is wrapped in a connective tissue sheath called the endoneurium. Individual axons of the nerve are bundled into groups of axons called fascicles, wrapped in a sheath of connective tissue called the perineurium.
Finally, many fascicles are wrapped together in another layer of connective tissue called the epineurium to form a whole nerve. The wrapping of nerves with connective tissue helps to protect the axons and to increase the speed of their communication within the body. - Afferent, Efferent, and Mixed Nerves. Some of the nerves in the body are specialized for carrying information in only one direction, similar to a one-way street. Nerves that carry information from sensory receptors to the central nervous system only are called afferent nerves. Other neurons, known as efferent nerves, carry signals only from the central nervous system to effectors such as muscles and glands. Finally, some nerves are mixed nerves that contain both afferent and efferent axons. Mixed nerves function like 2-way streets where afferent axons act as lanes heading toward the central nervous system and efferent axons act as lanes heading away from the central nervous system. - Cranial Nerves. Extending from the inferior side of the brain are 12 pairs of cranial nerves. Each cranial nerve pair is identified by a Roman numeral (I through XII) based upon its location along the anterior-posterior axis of the brain. Each nerve also has a descriptive name (e.g. olfactory, optic, etc.) that identifies its function or location. The cranial nerves provide a direct connection to the brain for the special sense organs, muscles of the head, neck, and shoulders, the heart, and the GI tract. - Spinal Nerves. Extending from the left and right sides of the spinal cord are 31 pairs of spinal nerves. The spinal nerves are mixed nerves that carry both sensory and motor signals between the spinal cord and specific regions of the body. The 31 spinal nerves are split into 5 groups named for the 5 regions of the vertebral column. Thus, there are 8 pairs of cervical nerves, 12 pairs of thoracic nerves, 5 pairs of lumbar nerves, 5 pairs of sacral nerves, and 1 pair of coccygeal nerves. Each spinal nerve exits from the spinal cord through the intervertebral foramen between a pair of vertebrae or between the C1 vertebra and the occipital bone of the skull. The meninges are the protective coverings of the central nervous system (CNS). They consist of three layers: the dura mater, arachnoid mater, and pia mater. - Dura mater. The dura mater, which means "tough mother," is the thickest, toughest, and most superficial layer of meninges. Made of dense irregular connective tissue, it contains many tough collagen fibers and blood vessels. Dura mater protects the CNS from external damage, contains the cerebrospinal fluid that surrounds the CNS, and provides blood to the nervous tissue of the CNS. - Arachnoid mater. The arachnoid mater, which means "spider-like mother," is much thinner and more delicate than the dura mater. It lines the inside of the dura mater and contains many thin fibers that connect it to the underlying pia mater. These fibers cross a fluid-filled space called the subarachnoid space between the arachnoid mater and the pia mater. - Pia mater. The pia mater, which means "tender mother," is a thin and delicate layer of tissue that rests on the outside of the brain and spinal cord. Containing many blood vessels that feed the nervous tissue of the CNS, the pia mater penetrates into the valleys of the sulci and fissures of the brain as it covers the entire surface of the CNS. The space surrounding the organs of the CNS is filled with a clear fluid known as cerebrospinal fluid (CSF).
CSF is formed from blood plasma by special structures called choroid plexuses. The choroid plexuses contain many capillaries lined with epithelial tissue that filters blood plasma and allows the filtered fluid to enter the space around the brain. Newly created CSF flows through the inside of the brain in hollow spaces called ventricles and through a small cavity in the middle of the spinal cord called the central canal. CSF also flows through the subarachnoid space around the outside of the brain and spinal cord. CSF is constantly produced at the choroid plexuses and is reabsorbed into the bloodstream at structures called arachnoid villi. Cerebrospinal fluid provides several vital functions to the central nervous system: - CSF absorbs shocks between the brain and skull and between the spinal cord and vertebrae. This shock absorption protects the CNS from blows or sudden changes in velocity, such as during a car accident. - The brain and spinal cord float within the CSF, reducing their apparent weight through buoyancy. The brain is a very large but soft organ that requires a high volume of blood to function effectively. The reduced weight in cerebrospinal fluid allows the blood vessels of the brain to remain open and helps protect the nervous tissue from becoming crushed under its own weight. - CSF helps to maintain chemical homeostasis within the central nervous system. It contains ions, nutrients, oxygen, and albumins that support the chemical and osmotic balance of nervous tissue. CSF also removes waste products that form as byproducts of cellular metabolism within nervous tissue. All of the body's many sense organs are components of the nervous system. What are known as the special senses—vision, taste, smell, hearing, and balance—are all detected by specialized organs such as the eyes, taste buds, and olfactory epithelium. Sensory receptors for the general senses like touch, temperature, and pain are found throughout most of the body. All of the sensory receptors of the body are connected to afferent neurons that carry their sensory information to the CNS to be processed and integrated.
Functions of the Nervous System
The nervous system has 3 main functions: sensory, integration, and motor. - Sensory. The sensory function of the nervous system involves collecting information from sensory receptors that monitor the body's internal and external conditions. These signals are then passed on to the central nervous system (CNS) for further processing by afferent neurons (and nerves). - Integration. The process of integration is the processing of the many sensory signals that are passed into the CNS at any given time. These signals are evaluated, compared, used for decision making, discarded or committed to memory as deemed appropriate. Integration takes place in the gray matter of the brain and spinal cord and is performed by interneurons. Many interneurons work together to form complex networks that provide this processing power. - Motor. Once the networks of interneurons in the CNS evaluate sensory information and decide on an action, they stimulate efferent neurons. Efferent neurons (also called motor neurons) carry signals from the gray matter of the CNS through the nerves of the peripheral nervous system to effector cells. The effector may be smooth, cardiac, or skeletal muscle tissue or glandular tissue. The effector then releases a hormone or moves a part of the body to respond to the stimulus.
Central Nervous System The brain and spinal cord together form the central nervous system, or CNS. The CNS acts as the control center of the body by providing its processing, memory, and regulation systems. The CNS takes in all of the conscious and subconscious sensory information from the body’s sensory receptors to stay aware of the body’s internal and external conditions. Using this sensory information, it makes decisions about both conscious and subconscious actions to take to maintain the body’s homeostasis and ensure its survival. The CNS is also responsible for the higher functions of the nervous system such as language, creativity, expression, emotions, and personality. The brain is the seat of consciousness and determines who we are as individuals. Peripheral Nervous System The peripheral nervous system (PNS) includes all of the parts of the nervous system outside of the brain and spinal cord. These parts include all of the cranial and spinal nerves, ganglia, and sensory receptors. Somatic Nervous System The somatic nervous system (SNS) is a division of the PNS that includes all of the voluntary efferent neurons. The SNS is the only consciously controlled part of the PNS and is responsible for stimulating skeletal muscles in the body. Autonomic Nervous System The autonomic nervous system (ANS) is a division of the PNS that includes all of the involuntary efferent neurons. The ANS controls subconscious effectors such as visceral muscle tissue, cardiac muscle tissue, and glandular tissue. There are 2 divisions of the autonomic nervous system in the body: the sympathetic and parasympathetic divisions. - Sympathetic. The sympathetic division forms the body’s “fight or flight” response to stress, danger, excitement, exercise, emotions, and embarrassment. The sympathetic division increases respiration and heart rate, releases adrenaline and other stress hormones, and decreases digestion to cope with these situations. - Parasympathetic. The parasympathetic division forms the body’s “rest and digest” response when the body is relaxed, resting, or feeding. The parasympathetic works to undo the work of the sympathetic division after a stressful situation. Among other functions, the parasympathetic division works to decrease respiration and heart rate, increase digestion, and permit the elimination of wastes. The enteric nervous system (ENS) is the division of the ANS that is responsible for regulating digestion and the function of the digestive organs. The ENS receives signals from the central nervous system through both the sympathetic and parasympathetic divisions of the autonomic nervous system to help regulate its functions. However, the ENS mostly works independently of the CNS and continues to function without any outside input. For this reason, the ENS is often called the “brain of the gut” or the body’s “second brain.” The ENS is an immense system—almost as many neurons exist in the ENS as in the spinal cord. Neurons function through the generation and propagation of electrochemical signals known as action potentials (APs). An AP is created by the movement of sodium and potassium ions through the membrane of neurons. - Resting Potential. At rest, neurons maintain a concentration of sodium ions outside of the cell and potassium ions inside of the cell. This concentration is maintained by the sodium-potassium pump of the cell membrane which pumps 3 sodium ions out of the cell for every 2 potassium ions that are pumped into the cell. 
The ion concentration results in a resting electrical potential of -70 millivolts (mV), which means that the inside of the cell has a negative charge compared to its surroundings. - Threshold Potential. If a stimulus permits enough positive ions to enter a region of the cell to cause it to reach -55 mV, that region of the cell will open its voltage-gated sodium channels and allow sodium ions to diffuse into the cell. -55 mV is the threshold potential for neurons as this is the "trigger" voltage that they must reach to cross the threshold into forming an action potential. - Depolarization. Sodium carries a positive charge that causes the cell to become depolarized (positively charged) compared to its normal negative charge. The peak depolarization voltage in neurons is about +30 mV. The depolarization of the cell is the AP that is transmitted by the neuron as a nerve signal. The positive ions spread into neighboring regions of the cell, initiating a new AP in those regions as they reach -55 mV. The AP continues to spread down the cell membrane of the neuron until it reaches the end of an axon. - Repolarization. After the depolarization voltage of +30 mV is reached, voltage-gated potassium ion channels open, allowing positive potassium ions to diffuse out of the cell. The loss of potassium along with the pumping of sodium ions back out of the cell through the sodium-potassium pump restores the cell to the -70 mV resting potential. At this point the neuron is ready to start a new action potential. A synapse is the junction between a neuron and another cell. Synapses may form between 2 neurons or between a neuron and an effector cell. There are two types of synapses found in the body: chemical synapses and electrical synapses. - Chemical synapses. At the end of a neuron's axon is an enlarged region of the axon known as the axon terminal. The axon terminal is separated from the next cell by a small gap known as the synaptic cleft. When an AP reaches the axon terminal, it opens voltage-gated calcium ion channels. Calcium ions cause vesicles containing chemicals known as neurotransmitters (NT) to release their contents by exocytosis into the synaptic cleft. The NT molecules cross the synaptic cleft and bind to receptor molecules on the receiving cell, forming a synapse with the neuron. These receptor molecules open ion channels that may either stimulate the receptor cell to form a new action potential or may inhibit the cell from forming an action potential when stimulated by another neuron. - Electrical synapses. Electrical synapses are formed when 2 neurons are connected by small holes called gap junctions. The gap junctions allow electric current to pass from one neuron to the other, so that an AP in one cell is passed directly on to the other cell through the synapse. The axons of many neurons are covered by a coating of insulation known as myelin to increase the speed of nerve conduction throughout the body. Myelin is formed by 2 types of glial cells: Schwann cells in the PNS and oligodendrocytes in the CNS. In both cases, the glial cells wrap their plasma membrane around the axon many times to form a thick covering of lipids. The development of these myelin sheaths is known as myelination. Myelination speeds up the movement of APs in the axon by reducing the number of APs that must form for a signal to reach the end of an axon. The myelination process begins speeding up nerve conduction in fetal development and continues into early adulthood.
Myelinated axons appear white due to the presence of lipids and form the white matter of the inner brain and outer spinal cord. White matter is specialized for carrying information quickly through the brain and spinal cord. The gray matter of the brain and spinal cord consists of the unmyelinated integration centers where information is processed. Reflexes are fast, involuntary responses to stimuli. The most well-known reflex is the patellar reflex, which is checked when a physician taps on a patient's knee during a physical examination. Reflexes are integrated in the gray matter of the spinal cord or in the brain stem. Reflexes allow the body to respond to stimuli very quickly by sending responses to effectors before the nerve signals reach the conscious parts of the brain. This explains why people will often pull their hands away from a hot object before they realize they are in pain.
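The voltage milestones described earlier (resting at -70 mV, threshold at -55 mV, peak near +30 mV) can be illustrated with a toy sketch. The following Python fragment is only a cartoon of the all-or-nothing threshold logic, under heavily simplified assumptions (no ion-channel kinetics, made-up stimulus sizes); it is not a biophysical model:

```python
# Cartoon of the threshold logic described above: a membrane sits at
# -70 mV, fires an action potential only if a stimulus pushes it past
# -55 mV, peaks near +30 mV, then repolarizes back to rest.

RESTING_MV = -70.0
THRESHOLD_MV = -55.0
PEAK_MV = 30.0

def respond_to_stimulus(stimulus_mv):
    """Return a simplified voltage trace for one stimulus (in mV)."""
    depolarized = RESTING_MV + stimulus_mv
    if depolarized < THRESHOLD_MV:
        # Subthreshold: the membrane drifts back to rest, no AP.
        return [RESTING_MV, depolarized, RESTING_MV]
    # Suprathreshold: all-or-nothing spike, then repolarization.
    return [RESTING_MV, THRESHOLD_MV, PEAK_MV, RESTING_MV]

print(respond_to_stimulus(10.0))  # reaches only -60 mV: no action potential
print(respond_to_stimulus(20.0))  # reaches -50 mV: a full spike fires
```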
When you clean your house, you are probably vacuuming up space dust. Not kidding. It is the same dust that was once part of comets and asteroids. You see that dust in the faint glow it helps create before sunrise and after sunset. As much as 40,000 tons of space dust arrives on Earth every year. While that fact may not be in doubt, there is a lot of debate about where this dust comes from. Most of it, we know, spirals down from the interplanetary dust cloud, a vast swathe of dust extending in a disk shape around the sun. But where exactly did this dust cloud originate? Recent studies suggest that less than 10% of the dust comes from asteroids, but that a much larger portion originates from Jupiter-family comets. These comets, which are made up of ice and dust, orbit around the sun close to Jupiter. They most likely enter the inner solar system because of collisions with other comets in the Kuiper belt, a major comet belt found beyond Neptune. When space dust falls to Earth, depending on its size and abundance, it can produce a meteor shower (shooting stars). In fact, the annual Perseids and Leonids meteor showers are produced by the Earth encountering the dusty debris left behind by comets Swift-Tuttle and Tempel-Tuttle. Comet dust travels at high speeds, sometimes more than 150,000 km/h. It is slowed by the Earth's atmosphere, but the pressure created on bigger pieces is enough to cause them to burn up in a flash of light. Smaller particles are the lucky ones. They can deal with the sudden change in pressure when entering Earth's atmosphere and make it all the way to the surface. NASA regularly uses special ER-2 aircraft, a research version of the U-2 spy plane, to fly at stratospheric heights (around 20 km, twice that of a commercial plane) to collect space dust. The collection technique itself is simple. When at cruising altitude in the stratosphere, the pilot opens up some pods below the wing containing "sticky pads", which collect pieces of space dust. Back on Earth, NASA uses an exceptionally clean laboratory to pick the space dust from the collectors for researchers, like myself, to study. My research is based around these dust particles because they offer our best opportunity to sample comets. The ER-2 is a much cheaper way of obtaining these samples. The other method involves launching a spacecraft to reach out to a comet, and ensuring it can come back after passing through a comet's icy and dusty tail, or even landing on its surface. There has been only one comet sample return mission to date – NASA's Stardust. Such missions, despite their expense, provide the most pristine solar system samples we will ever get. The spacecraft acts like a cocoon, protecting the samples on their travel through space, and from the extreme heating effects of entering the Earth's atmosphere that can otherwise cause irreversible changes to the sample.
16 November 2017 The digital topographical models derived from HRSC stereo image data make it possible to compute perspective views of the Martian landscape. These can often provide a much better visual understanding of geological phenomena and processes than a top view. This image is equivalent to a view from an aircraft that is tilted at an angle, looking down on a ridge some 2000 metres tall and intersected almost vertically by three graben structures: two very obvious graben to the right of the picture, and a smaller, less obvious graben to their left that does not run all the way through the image. Part of this smaller graben is also covered by an enormous fan-shaped area of rock debris emanating from a very obvious escarpment from which a massive landslip has occurred. The central graben also shows a striking furrow-like depression – presumably here the bottom of the graben has slumped into hollows that had formed beneath it. ESA/DLR/FU Berlin, CC BY-SA 3.0 IGO. The more or less parallel fault lines of the Sirenum Fossae clearly show the direction in which tensions in the Martian crust run – geophysicists call this ‘tectonic stress’. The right-hand side of the image is north, and the graben run from northeast to southwest. The forces that stretched the crust, ultimately resulting in its splitting, ran perpendicular to this, i.e. in a southeasterly and northwesterly direction. In this process, the existing three to four billion-year-old plateau was ‘dissected’ in a straight line. The image also shows the result of younger geological processes: for example, asteroid impacts have left craters with barely weathered rims; huge landslides have occurred along the 2000 metre high ridge in the upper third of the image, forming tongue-shaped deposits; and small grooves may possibly be the result of erosion by water running down the slopes. Sirenum Fossae is a graben system caused by tectonics, that is to say movements resulting from tension in the Martian crust. They are up to nine kilometres wide, several hundred metres to one and a half kilometres deep, and are found in the Tharsis region, which is a bulge in the Martian crust the size of Europe. The images in this article were acquired in the area of the small rectangle at the southern end of the filmstrip recorded by the HRSC camera system on board the ESA Mars Express spacecraft during orbit 16688. NASA/JPL/MOLA; FU Berlin. Digital topographical models of the surface of Mars, with an accuracy of up to 10 metres per picture element (pixel), are derived from the nadir channel (oriented perpendicularly to the surface of Mars) and the stereo channels of the HRSC camera operated by DLR. These colour-coded images clearly depict the absolute elevations above a reference level, the areoid (from Ares, the Greek name for Mars). The elevation values can be read based on the colour scale in the top right of the image. The Sirenum Fossae are located on the Tharsis plateau, a region strongly marked by volcanic activity and associated tectonic processes, rising to a height of up to 5000 metres. The image clearly shows how the younger graben, resulting from stretching of the Martian crust, cross older, pre-existing structures. In some places, the graben are up to 1500 metres deep. So-called anaglyph images can be produced from the nadir channel (oriented perpendicular to the surface of Mars) of the HRSC camera system operated by the DLR on board the Mars Express spacecraft and one of the four oblique-view stereo channels. 
When viewed with red-blue or red-green glasses, these images give a realistic, three-dimensional view of the landscape. North is to the right in the image. Using anaglyph glasses, the topographical proportions in the region can be seen very clearly. A ridge approximately 2000 metres high crosses the image in a north-south direction. It is intersected by several parallel graben. Between each pair of fault lines, the surface has subsided by several hundred metres. Other features that stand out in very marked relief when viewed with 3D glasses include a number of young craters with strikingly sharp rims (which indicate that they are young), some landslides, and some small grooves along the ridge, likewise caused by erosion, possibly even by water runoff. These images, acquired by the High Resolution Stereo Camera (HRSC) operated by the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) on board the ESA Mars Express spacecraft, show a region of Mars strongly shaped by volcanic activity. Traces of this activity include the striking parallel grabens that cross the region – the Sirenum Fossae. Sirenum Fossae is a graben system formed by tectonics – that is to say movements resulting from tension in the Martian crust. They are up to nine kilometres wide, with depths ranging from a few hundred metres to one and a half kilometres, and extend for thousands of kilometres across the surface of Mars. The expansion of the rigid Martian crust resulted in linear rifts and the subsidence of heavy blocks of crust between two fault lines in the ductile subsurface. It is generally assumed that the proliferation of such grabens on Mars is related to magmatic activity and to dikes. These are steep corridors within the rock along which magma from the interior of Mars can propagate upward. When such dikes reach the surface, volcanic eruptions occur along these fissures and cracks, with an outpouring of lava. However, if they remain ‘stuck’ underground, they exert pressure on the surface from below – the strength of this pressure being dependent on the volume of the accumulated mass – which in turn results in tension and causes the surface to fracture. Large concentrations of such dikes, known as dike swarms, are very common in volcanic rift zones (large regional stretch zones) on Earth, for example in Iceland, where they can be observed, together with surface fractures and graben sets, in the Krafla fissure swarm. The Tharsis volcanic province – cause of the grabens? It is not yet clear whether Sirenum Fossae was formed by local or regional magmatic events. In the case of the latter, they would be part of a huge radial system of dikes whose centre lies further east in the Tharsis region. Tharsis is the largest volcanic region on Mars. It spans an area the size of Europe, and rises like a shield more than four kilometres above the Martian surface. Some of the largest volcanoes in the Solar System can be found here, including Olympus Mons (rising 22 kilometres above the reference level of Mars), and the three Tharsis volcanoes Ascraeus (15 kilometres), Pavonis (eight kilometres) and Arsia (11 kilometres). The Sirenum Fossae are part of a radial fracture pattern around the Arsia Mons volcano in Tharsis, some 1800 kilometres northeast of the region that can be seen in these images. The deformations run right through the area in the images, across all types of terrain.
This shows that the deformations are younger than the mountain ridges in the bottom (to the east) half of the image, and also younger than the plateaus in the upper (to the west) half of the image. A small number of impact craters in the graben formed at a later date, however, and if such craters are found in statistically sufficient numbers along the rest of the graben system, they may be helpful in determining the age of the tectonic structure.

The images were acquired by the HRSC (High Resolution Stereo Camera) on 5 March 2017 during Mars Express orbit 16688. The image resolution is 14 metres per picture element (pixel). The centre of the image is located at approximately 215 degrees east and 28 degrees south. The colour aerial view (image 2) was created from the nadir channel, oriented perpendicular to the surface of Mars, and the colour channels of the HRSC. The perspective view (image 1) was computed from the HRSC stereo channels. The anaglyph image (image 5), which conveys a three-dimensional impression of the landscape when viewed with red-blue or red-green glasses, was derived from the nadir channel and one stereo channel. The aerial view (image 4), encoded in rainbow colours, is based on a digital terrain model (DTM) of the region, from which the topography of the landscape can be derived. The reference unit for the HRSC-DTM is an equipotential surface of Mars (areoid).

Systematic processing of the camera data was carried out at the DLR Institute of Planetary Research. From this data, staff specialising in planetology and remote sensing at the Free University of Berlin produced the views shown here. The High Resolution Stereo Camera was developed at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) and built in collaboration with partners in industry (EADS Astrium, Lewicki Microelectronic GmbH and Jena-Optronik GmbH). The science team, which is headed by principal investigator (PI) Ralf Jaumann, consists of over 40 co-investigators from 33 institutions and 10 countries. The camera is operated by the DLR Institute of Planetary Research in Berlin-Adlershof, and has been delivering images of the Red Planet since 2004.
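The rainbow-coded topography in image 4 is, in essence, a mapping from each pixel's elevation above the areoid to a colour. A minimal sketch of the idea follows; the elevation grid here is synthetic, since real HRSC DTMs are distributed as raster products:

```python
# Colour-code a digital terrain model (DTM) by elevation, in the spirit of
# image 4. The elevation array below is made up for illustration; a real
# HRSC DTM would be loaded from a raster file instead.
import numpy as np
import matplotlib.pyplot as plt

# Synthetic terrain: a ridge cut by a narrow graben-like trough, in metres
# relative to the areoid reference surface.
x, y = np.meshgrid(np.linspace(-5, 5, 400), np.linspace(-5, 5, 400))
elevation = 2000.0 * np.exp(-x**2 / 4) - 1200.0 * np.exp(-((x - 0.5)**2) / 0.05)

plt.imshow(elevation, cmap="jet", origin="lower")
plt.colorbar(label="Elevation above areoid (m)")
plt.title("Colour-coded DTM (synthetic example)")
plt.savefig("dtm_colour.png")
```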
Chapter 12: Money Growth and Inflation

By the end of this chapter, students should understand:
- why inflation results from rapid growth in the money supply.
- the meaning of the classical dichotomy and monetary neutrality.
- why some countries print so much money that they experience hyperinflation.
- how the nominal interest rate responds to the inflation rate.
- the various costs that inflation imposes on society.

Word of caution: This chapter is highly theoretical. You must understand its specialized terminology, such as “real” variables, “nominal” variables, the Classical Dichotomy and Monetary Neutrality.

Nominal variables are those measured in monetary units, in other words, in current dollars or market prices. These variables are observed. For example, your salary is paid in current dollars, and it is observed. When you buy soda or make a payment on your credit card balance, you are conducting these activities in nominal terms. Other examples of nominal variables include the prices of goods, wages, the current market value of GDP and the nominal (observed, quoted) interest rate on loans.

A real variable is denoted in two ways: (1) in physical units, e.g. a 2,500 sq. ft home, imports of 50,000 Honda Civics, the purchase of 3 tons of tobacco; (2) in real terms, using “real” or constant dollars, or using relative prices (the price of one good in terms of another).

What is the meaning of a relative price? Actually, when we studied the trade model, this is what we used. For example, the opportunity cost of 1 loaf of bread = 2 pounds of apples. Suppose the price of bread = $3.00 and the price of apples = $1.50 per pound. Then the relative price of apples to bread = price of apples / price of bread = 1.50/3 = 1/2: for 1 pound of apples, give up 1/2 loaf of bread. Similarly, the relative price of bread to apples = price of bread / price of apples = 3/1.50 = 2: for 1 loaf of bread, give up 2 pounds of apples.

According to the principle of monetary neutrality, only nominal variables are affected by changes in the quantity of money. Think of it like so: if the Fed doubles the money supply, does this mean the US economy can produce twice as many goods and services? No; the quantity of goods and services depends on real variables such as the quantity and quality of resources, technology and institutional factors that affect production and resource use. So, what must happen? Prices simply double. This is the principle of monetary neutrality. The principle holds true in the long run, but in the short run changes in the money supply can affect production. (This is studied in much greater detail in chapter 15.)

The term “money demand” refers to the amount of money that economic agents wish to hold at any given price level. Why do people want to hold money? In this model, money’s most important function is as a medium of exchange, that is, money is used to facilitate transactions. Economists
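The bread-and-apples arithmetic and the neutrality thought experiment above can be condensed into a few lines of Python; the money-supply figures are invented for illustration, and the quantity equation MV = PY is used in its standard textbook form:

```python
# Relative prices: the price of one good in terms of another.
price_bread = 3.00    # dollars per loaf (nominal)
price_apples = 1.50   # dollars per pound (nominal)

# 1.50 / 3.00 = 0.5: give up half a loaf of bread per pound of apples.
apples_in_bread = price_apples / price_bread
# 3.00 / 1.50 = 2.0: give up two pounds of apples per loaf of bread.
bread_in_apples = price_bread / price_apples
print(apples_in_bread, bread_in_apples)  # 0.5 2.0

# Monetary neutrality via the quantity equation M * V = P * Y:
# with velocity V and real output Y fixed, doubling M doubles P.
V, Y = 2.0, 10_000.0           # illustrative values, not data
M = 5_000.0
P = M * V / Y                  # price level = 1.0
P_doubled = (2 * M) * V / Y    # price level = 2.0; real output Y unchanged
print(P, P_doubled)

# Relative prices (real variables) are unaffected: if all nominal prices
# double, (2 * 1.50) / (2 * 3.00) is still 0.5.
```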
A wildfire or wildland fire is a fire in an area of combustible vegetation that occurs in rural areas. Depending on the type of vegetation where it occurs, a wildfire can also be classified more specifically as a brush fire, bush fire, desert fire, forest fire, grass fire, hill fire, peat fire, vegetation fire, and veld fire. Fossil charcoal indicates that wildfires began soon after the appearance of terrestrial plants 420 million years ago. Wildfire’s occurrence throughout the history of terrestrial life invites conjecture that fire must have had pronounced evolutionary effects on most ecosystems' flora and fauna. Earth is an intrinsically flammable planet owing to its cover of carbon-rich vegetation, seasonally dry climates, atmospheric oxygen, and widespread lightning and volcanic ignitions.

Wildfires can be characterized in terms of the cause of ignition, their physical properties, the combustible material present, and the effect of weather on the fire. Wildfires can cause damage to property and human life, but they have many beneficial effects on native vegetation, animals, and ecosystems that have evolved with fire. High-severity wildfire creates complex early seral forest habitat (also called “snag forest habitat”), which often has higher species richness and diversity than unburned old forest. Many plant species depend on the effects of fire for growth and reproduction. However, wildfire in ecosystems where wildfire is uncommon or where non-native vegetation has encroached may have negative ecological effects. Wildfire behaviour and severity result from the combination of factors such as available fuels, physical setting, and weather. Analyses of historical meteorological data and national fire records in western North America show the primacy of climate in driving large regional fires, via wet periods that create substantial fuels, or drought and warming that extend conducive fire weather.

Strategies of wildfire prevention, detection, and suppression have varied over the years. One common and inexpensive technique is controlled burning: permitting or even igniting smaller fires to minimize the amount of flammable material available for a potential wildfire. Vegetation may be burned periodically to maintain high species diversity, and frequent burning of surface fuels limits fuel accumulation. Wildland fire use is the cheapest and most ecologically appropriate policy for many forests. Fuels may also be removed by logging, but fuels treatments and thinning have no effect on severe fire behavior under extreme weather conditions. Wildfire itself is reportedly "the most effective treatment for reducing a fire's rate of spread, fireline intensity, flame length, and heat per unit of area" according to Jan van Wagtendonk, a biologist at the Yellowstone Field Station. Building codes in fire-prone areas typically require that structures be built of flame-resistant materials and that a defensible space be maintained by clearing flammable materials within a prescribed distance from the structure.

The most common direct human causes of wildfire ignition include arson, discarded cigarettes, power-line arcs (as detected by arc mapping), and sparks from equipment.
Ignition of wildland fires via contact with hot rifle-bullet fragments is also possible under the right conditions. Wildfires can also be started in communities experiencing shifting cultivation, where land is cleared quickly and farmed until the soil loses fertility, and slash and burn clearing. Forested areas cleared by logging encourage the dominance of flammable grasses, and abandoned logging roads overgrown by vegetation may act as fire corridors. Annual grassland fires in southern Vietnam stem in part from the destruction of forested areas by US military herbicides, explosives, and mechanical land-clearing and -burning operations during the Vietnam War.

The most common cause of wildfires varies throughout the world. In Canada and northwest China, lightning operates as the major source of ignition. In other parts of the world, human involvement is a major contributor. In Africa, Central America, Fiji, Mexico, New Zealand, South America, and Southeast Asia, wildfires can be attributed to human activities such as agriculture, animal husbandry, and land-conversion burning. In China and in the Mediterranean Basin, human carelessness is a major cause of wildfires. In the United States and Australia, the source of wildfires can be traced both to lightning strikes and to human activities (such as machinery sparks, cast-away cigarette butts, or arson). Coal seam fires burn in the thousands around the world, such as those in Burning Mountain, New South Wales; Centralia, Pennsylvania; and several coal-sustained fires in China. They can also flare up unexpectedly and ignite nearby flammable material.

The spread of wildfires varies based on the flammable material present, its vertical arrangement and moisture content, and weather conditions. Fuel arrangement and density are governed in part by topography, as land shape determines factors such as available sunlight and water for plant growth. Overall, fire types can be generally characterized by their fuels as follows:
- Ground fires are fed by subterranean roots, duff and other buried organic matter. This fuel type is especially susceptible to ignition due to spotting. Ground fires typically burn by smoldering, and can burn slowly for days to months, such as peat fires in Kalimantan and Eastern Sumatra, Indonesia, which resulted from a riceland creation project that unintentionally drained and dried the peat.
- Crawling or surface fires are fueled by low-lying vegetation on the forest floor such as leaf and timber litter, debris, grass, and low-lying shrubbery. This kind of fire often burns at a relatively lower temperature than crown fires (less than 400 °C (752 °F)) and may spread at a slow rate, though steep slopes and wind can accelerate the rate of spread.
- Ladder fires consume material between low-level vegetation and tree canopies, such as small trees, downed logs, and vines. Kudzu, Old World climbing fern, and other invasive plants that scale trees may also encourage ladder fires.
- Crown, canopy, or aerial fires burn suspended material at the canopy level, such as tall trees, vines, and mosses. The ignition of a crown fire, termed crowning, is dependent on the density of the suspended material, canopy height, canopy continuity, sufficient surface and ladder fires, vegetation moisture content, and weather conditions during the blaze. Stand-replacing fires lit by humans can spread into the Amazon rain forest, damaging ecosystems not particularly suited for heat or arid conditions.
Wildfires occur when all of the necessary elements of a fire triangle come together in a susceptible area: an ignition source is brought into contact with a combustible material such as vegetation that is subjected to sufficient heat and has an adequate supply of oxygen from the ambient air. A high moisture content usually prevents ignition and slows propagation, because higher temperatures are required to evaporate any water within the material and heat the material to its fire point. Dense forests usually provide more shade, resulting in lower ambient temperatures and greater humidity, and are therefore less susceptible to wildfires. Less dense materials such as grasses and leaves are easier to ignite because they contain less water than denser materials such as branches and trunks. Plants continuously lose water by evapotranspiration, but water loss is usually balanced by water absorbed from the soil, humidity, or rain. When this balance is not maintained, plants dry out and are therefore more flammable, often a consequence of droughts.

A wildfire front is the portion sustaining continuous flaming combustion, where unburned material meets active flames, or the smoldering transition between unburned and burned material. As the front approaches, the fire heats both the surrounding air and woody material through convection and thermal radiation. First, wood is dried as water is vaporized at a temperature of 100 °C (212 °F). Next, the pyrolysis of wood at 230 °C (450 °F) releases flammable gases. Finally, wood can smoulder at 380 °C (720 °F) or, when heated sufficiently, ignite at 590 °C (1,000 °F). Even before the flames of a wildfire arrive at a particular location, heat transfer from the wildfire front warms the air to 800 °C (1,470 °F), which pre-heats and dries flammable materials, causing materials to ignite faster and allowing the fire to spread faster. High-temperature and long-duration surface wildfires may encourage flashover or torching: the drying of tree canopies and their subsequent ignition from below.

Wildfires have a rapid forward rate of spread (FROS) when burning through dense, uninterrupted fuels. They can move as fast as 10.8 kilometres per hour (6.7 mph) in forests and 22 kilometres per hour (14 mph) in grasslands. Wildfires can advance tangential to the main front to form a flanking front, or burn in the opposite direction of the main front by backing. They may also spread by jumping or spotting as winds and vertical convection columns carry firebrands (hot wood embers) and other burning materials through the air over roads, rivers, and other barriers that may otherwise act as firebreaks. Torching and fires in tree canopies encourage spotting, and dry ground fuels that surround a wildfire are especially vulnerable to ignition from firebrands. Spotting can create spot fires as hot embers and firebrands ignite fuels downwind from the fire. In Australian bushfires, spot fires are known to occur as far as 20 kilometres (12 mi) from the fire front. Especially large wildfires may affect air currents in their immediate vicinities by the stack effect: air rises as it is heated, and large wildfires create powerful updrafts that will draw in new, cooler air from surrounding areas in thermal columns. Great vertical differences in temperature and humidity encourage pyrocumulus clouds, strong winds, and fire whirls with the force of tornadoes at speeds of more than 80 kilometres per hour (50 mph). Rapid rates of spread, prolific crowning or spotting, the presence of fire whirls, and strong convection columns signify extreme conditions.
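The temperature thresholds quoted earlier in this section (drying around 100 °C, pyrolysis around 230 °C, smouldering around 380 °C, ignition around 590 °C) describe a simple ladder of combustion stages. A toy classifier, purely for illustration:

```python
# Toy classification of what happens to woody fuel as a fire front
# approaches, using the approximate thresholds quoted above.
def fuel_stage(temp_c: float) -> str:
    """Return the combustion stage for a given fuel temperature in °C."""
    if temp_c < 100:
        return "heating (water still bound in the fuel)"
    if temp_c < 230:
        return "drying (water vaporizes at 100 °C)"
    if temp_c < 380:
        return "pyrolysis (flammable gases released from ~230 °C)"
    if temp_c < 590:
        return "smouldering (from ~380 °C)"
    return "ignition (~590 °C and above)"

# Air ahead of a front can reach ~800 °C, pre-heating and drying fuels
# through convection and radiation before the flames arrive.
for t in (20, 150, 300, 450, 800):
    print(t, "->", fuel_stage(t))
```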
Heat waves, droughts, cyclical climate changes such as El Niño, and regional weather patterns such as high-pressure ridges can increase the risk and alter the behavior of wildfires dramatically. Years of precipitation followed by warm periods can encourage more widespread fires and longer fire seasons. Since the mid-1980s, earlier snowmelt and associated warming have also been associated with an increase in the length and severity of the wildfire season in the Western United States. Global warming may increase the intensity and frequency of droughts in many areas, creating more intense and frequent wildfires. A 2015 study indicates that the increase in fire risk in California may be attributable to human-induced climate change. A study of alluvial sediment deposits going back over 8,000 years found that warmer climate periods experienced severe droughts and stand-replacing fires, and concluded that climate was such a powerful influence on wildfire that trying to recreate presettlement forest structure is likely impossible in a warmer future.

Intensity also increases during daytime hours. Burn rates of smoldering logs are up to five times greater during the day due to lower humidity, increased temperatures, and increased wind speeds. Sunlight warms the ground during the day, which creates air currents that travel uphill. At night the land cools, creating air currents that travel downhill. Wildfires are fanned by these winds and often follow the air currents over hills and through valleys. Fires in Europe occur frequently between the hours of 12:00 p.m. and 2:00 p.m. Wildfire suppression operations in the United States revolve around a 24-hour fire day that begins at 10:00 a.m. due to the predictable increase in intensity resulting from the daytime warmth.

Wildfires are common in climates that are sufficiently moist to allow the growth of vegetation but feature extended dry, hot periods. Such places include the vegetated areas of Australia and Southeast Asia, the veld in southern Africa, the fynbos in the Western Cape of South Africa, the forested areas of the United States and Canada, and the Mediterranean Basin. Plant and animal species in most types of North American forests evolved with fire, and many of these species depend on wildfires, and particularly high-severity fires, to reproduce and grow. Fire helps to return nutrients from plant matter back to soil, the heat from fire is necessary to the germination of certain types of seeds, and the snags (dead trees) and early successional forests created by high-severity fire create habitat conditions that are beneficial to wildlife. Early successional forests created by high-severity fire support some of the highest levels of native biodiversity found in temperate conifer forests. Post-fire logging has no ecological benefits and many negative impacts; the same is often true for post-fire seeding.
Although some ecosystems rely on naturally occurring fires to regulate growth, some ecosystems suffer from too much fire, such as the chaparral in southern California and lower-elevation deserts in the American Southwest. The increased fire frequency in these ordinarily fire-dependent areas has upset natural cycles, damaged native plant communities, and encouraged the growth of non-native weeds. Invasive species, such as Lygodium microphyllum and Bromus tectorum, can grow rapidly in areas that were damaged by fires. Because they are highly flammable, they can increase the future risk of fire, creating a positive feedback loop that increases fire frequency and further alters native vegetation communities. In the Amazon Rainforest, drought, logging, cattle ranching practices, and slash-and-burn agriculture damage fire-resistant forests and promote the growth of flammable brush, creating a cycle that encourages more burning. Fires in the rainforest threaten its collection of diverse species and produce large amounts of CO2. Also, fires in the rainforest, along with drought and human involvement, could damage or destroy more than half of the Amazon rainforest by the year 2030.

Wildfires generate ash, reduce the availability of organic nutrients, and cause an increase in water runoff, eroding away other nutrients and creating flash flood conditions. A 2003 wildfire in the North Yorkshire Moors burned off 2.5 square kilometers (600 acres) of heather and the underlying peat layers. Afterwards, wind erosion stripped the ash and the exposed soil, revealing archaeological remains dating back to 10,000 BC. Wildfires can also have an effect on climate change, increasing the amount of carbon released into the atmosphere and inhibiting vegetation growth, which affects overall carbon uptake by plants. In tundra there is a natural pattern of accumulation of fuel and wildfire which varies depending on the nature of vegetation and terrain. Research in Alaska has shown fire-event return intervals (FRIs) that typically vary from 150 to 200 years, with drier lowland areas burning more frequently than wetter upland areas.

Plants in wildfire-prone ecosystems often survive through adaptations to their local fire regime. Such adaptations include physical protection against heat, increased growth after a fire event, and flammable materials that encourage fire and may eliminate competition. For example, plants of the genus Eucalyptus contain flammable oils that encourage fire and hard sclerophyll leaves to resist heat and drought, ensuring their dominance over less fire-tolerant species. Dense bark, shedding lower branches, and high water content in external structures may also protect trees from rising temperatures. Fire-resistant seeds and reserve shoots that sprout after a fire encourage species preservation, as embodied by pioneer species. Smoke, charred wood, and heat can stimulate the germination of seeds in a process called serotiny. Exposure to smoke from burning plants promotes germination in other types of plants by inducing the production of the orange butenolide. Grasslands in Western Sabah, Malaysian pine forests, and Indonesian Casuarina forests are believed to have resulted from previous periods of fire. Chamise deadwood litter is low in water content and flammable, and the shrub quickly sprouts after a fire. Cape lilies lie dormant until flames brush away the covering and then blossom almost overnight.
Sequoias rely on periodic fires to reduce competition, release seeds from their cones, and clear the soil and canopy for new growth. Caribbean pine in Bahamian pineyards has adapted to and relies on low-intensity surface fires for survival and growth. An optimum fire frequency for growth is every 3 to 10 years. Too frequent fires favor herbaceous plants, and infrequent fires favor species typical of Bahamian dry forests.

Most of the Earth's weather and air pollution resides in the troposphere, the part of the atmosphere that extends from the surface of the planet to a height of about 10 kilometers (6 mi). The vertical lift of a severe thunderstorm or pyrocumulonimbus can be enhanced in the area of a large wildfire, which can propel smoke, soot, and other particulate matter as high as the lower stratosphere. Previously, prevailing scientific theory held that most particles in the stratosphere came from volcanoes, but smoke and other wildfire emissions have been detected from the lower stratosphere. Pyrocumulus clouds can reach 6,100 meters (20,000 ft) over wildfires. Satellite observation of smoke plumes from wildfires revealed that the plumes could be traced intact for distances exceeding 1,600 kilometers (1,000 mi). Computer-aided models such as CALPUFF may help predict the size and direction of wildfire-generated smoke plumes by using atmospheric dispersion modeling. Wildfires can affect local atmospheric pollution, and release carbon in the form of carbon dioxide. Wildfire emissions contain fine particulate matter which can cause cardiovascular and respiratory problems. Increased fire byproducts in the troposphere can increase ozone concentration beyond safe levels. Forest fires in Indonesia in 1997 were estimated to have released between 0.81 and 2.57 gigatonnes (0.89 and 2.83 billion short tons) of CO2 into the atmosphere, which is between 13% and 40% of the annual global carbon dioxide emissions from burning fossil fuels. Atmospheric models suggest that these concentrations of sooty particles could increase absorption of incoming solar radiation during winter months by as much as 15%.

In the Welsh Borders, the first evidence of wildfire is rhyniophytoid plant fossils preserved as charcoal, dating to the Silurian period (about 420 million years ago). Smoldering surface fires started to occur sometime before the Early Devonian period. Low atmospheric oxygen during the Middle and Late Devonian was accompanied by a decrease in charcoal abundance. Additional charcoal evidence suggests that fires continued through the Carboniferous period. Later, the overall increase of atmospheric oxygen from 13% in the Late Devonian to 30–31% by the Late Permian was accompanied by a more widespread distribution of wildfires. Later, a decrease in wildfire-related charcoal deposits from the late Permian to the Triassic periods is explained by a decrease in oxygen levels.

Wildfires during the Paleozoic and Mesozoic periods followed patterns similar to fires that occur in modern times. Surface fires driven by dry seasons are evident in Devonian and Carboniferous progymnosperm forests. Lepidodendron forests dating to the Carboniferous period have charred peaks, evidence of crown fires. In Jurassic gymnosperm forests, there is evidence of high frequency, light surface fires. The increase of fire activity in the late Tertiary is possibly due to the increase of C4-type grasses. As these grasses shifted to more mesic habitats, their high flammability increased fire frequency, promoting grasslands over woodlands.
However, fire-prone habitats may have contributed to the prominence of trees such as those of the genera Eucalyptus, Pinus and Sequoia, which have thick bark to withstand fires and employ serotiny. The human use of fire for agricultural and hunting purposes during the Paleolithic and Mesolithic ages altered the preexisting landscapes and fire regimes. Woodlands were gradually replaced by smaller vegetation that facilitated travel, hunting, seed-gathering and planting. In recorded human history, minor allusions to wildfires were mentioned in the Bible and by classical writers such as Homer. However, while ancient Hebrew, Greek, and Roman writers were aware of fires, they were not very interested in the uncultivated lands where wildfires occurred. Wildfires were used in battles throughout human history as early thermal weapons.

From the Middle Ages, accounts were written of occupational burning as well as customs and laws that governed the use of fire. In Germany, regular burning was documented in 1290 in the Odenwald and in 1344 in the Black Forest. In 14th-century Sardinia, firebreaks were used for wildfire protection. In Spain during the 1550s, sheep husbandry was discouraged in certain provinces by Philip II due to the harmful effects of fires used in transhumance. As early as the 17th century, Native Americans were observed using fire for many purposes including cultivation, signaling, and warfare. Scottish botanist David Douglas noted the native use of fire for tobacco cultivation, to encourage deer into smaller areas for hunting purposes, and to improve foraging for honey and grasshoppers. Charcoal found in sedimentary deposits off the Pacific coast of Central America suggests that more burning occurred in the 50 years before the Spanish colonization of the Americas than after the colonization. In the post-World War II Baltic region, socio-economic changes led to more stringent air quality standards and bans on fires that eliminated traditional burning practices. In the mid-19th century, explorers from HMS Beagle observed Australian Aborigines using fire for ground clearing, hunting, and regeneration of plant food in a method later named fire-stick farming. Such careful use of fire has been employed for centuries in the lands protected by Kakadu National Park to encourage biodiversity.

Wildfires typically occurred during periods of increased temperature and drought. An increase in fire-related debris flow in alluvial fans of northeastern Yellowstone National Park was linked to the period between AD 1050 and 1200, coinciding with the Medieval Warm Period. However, human influence caused an increase in fire frequency. Dendrochronological fire scar data and charcoal layer data in Finland suggest that, while many fires occurred during severe drought conditions, an increase in the number of fires between 850 BC and 1660 AD can be attributed to human influence. Charcoal evidence from the Americas suggested a general decrease in wildfires between 1 AD and 1750 compared to previous years. However, a period of increased fire frequency between 1750 and 1870 was suggested by charcoal data from North America and Asia, attributed to human population growth and influences such as land clearing practices. This period was followed by an overall decrease in burning in the 20th century, linked to the expansion of agriculture, increased livestock grazing, and fire prevention efforts.
A meta-analysis found that 17 times more land burned annually in California before 1800 compared to recent decades (1,800,000 hectares/year compared to 102,000 hectares/year). According to a paper published in Science, the number of natural and human-caused fires decreased by 24.3% between 1998 and 2015. Researchers explain this as a transition from nomadism to settled lifestyles and an intensification of agriculture that led to a drop in the use of fire for land clearing. Some invasive species, introduced by humans (e.g., for the pulp and paper industry), have in some cases also increased the intensity of wildfires. Examples include species such as Eucalyptus in California and gamba grass in Australia.

Wildfire prevention refers to the preemptive methods aimed at reducing the risk of fires as well as lessening their severity and spread. Prevention techniques aim to manage air quality, maintain ecological balances, protect resources, and affect future fires. North American firefighting policies permit naturally caused fires to burn to maintain their ecological role, so long as the risks of escape into high-value areas are mitigated. However, prevention policies must consider the role that humans play in wildfires, since, for example, 95% of forest fires in Europe are related to human involvement. Sources of human-caused fire may include arson, accidental ignition, or the uncontrolled use of fire in land-clearing and agriculture such as the slash-and-burn farming in Southeast Asia. In 1937, U.S. President Franklin D. Roosevelt initiated a nationwide fire prevention campaign, highlighting the role of human carelessness in forest fires. Later posters of the program featured Uncle Sam, characters from the Disney movie Bambi, and the official mascot of the U.S. Forest Service, Smokey Bear. Reducing human-caused ignitions may be the most effective means of reducing unwanted wildfire.

Alteration of fuels is commonly undertaken when attempting to affect future fire risk and behavior. Wildfire prevention programs around the world may employ techniques such as wildland fire use and prescribed or controlled burns. Wildland fire use refers to any fire of natural causes that is monitored but allowed to burn. Controlled burns are fires ignited by government agencies under less dangerous weather conditions. Vegetation may be burned periodically to maintain high species diversity, and frequent burning of surface fuels limits fuel accumulation. Wildland fire use is the cheapest and most ecologically appropriate policy for many forests. Fuels may also be removed by logging, but fuels treatments and thinning have no effect on severe fire behavior under extreme weather conditions. Wildfire models are often used to predict and compare the benefits of different fuel treatments on future wildfire spread, but their accuracy is low. Wildfire itself is reportedly "the most effective treatment for reducing a fire's rate of spread, fireline intensity, flame length, and heat per unit of area" according to Jan van Wagtendonk, a biologist at the Yellowstone Field Station. Building codes in fire-prone areas typically require that structures be built of flame-resistant materials and that a defensible space be maintained by clearing flammable materials within a prescribed distance from the structure. Communities in the Philippines also maintain fire lines 5 to 10 meters (16 to 33 ft) wide between the forest and their village, and patrol these lines during summer months or seasons of dry weather.
Continued residential development in fire-prone areas and rebuilding structures destroyed by fires has been met with criticism. The ecological benefits of fire are often overridden by the economic and safety benefits of protecting structures and human life.

Fast and effective detection is a key factor in wildfire fighting. Early detection efforts focused on early response, accurate results in both daytime and nighttime, and the ability to prioritize fire danger. Fire lookout towers were used in the United States in the early 20th century, and fires were reported using telephones, carrier pigeons, and heliographs. Aerial and land photography using instant cameras was used in the 1950s until infrared scanning was developed for fire detection in the 1960s. However, information analysis and delivery was often delayed by limitations in communication technology. Early satellite-derived fire analyses were hand-drawn on maps at a remote site and sent via overnight mail to the fire manager. During the Yellowstone fires of 1988, a data station was established in West Yellowstone, permitting the delivery of satellite-based fire information in approximately four hours.

Currently, public hotlines, fire lookouts in towers, and ground and aerial patrols can be used as a means of early detection of forest fires. However, accurate human observation may be limited by operator fatigue, time of day, time of year, and geographic location. Electronic systems have gained popularity in recent years as a possible resolution to human operator error. A government report on a recent trial of three automated camera fire detection systems in Australia did, however, conclude "...detection by the camera systems was slower and less reliable than by a trained human observer". These systems may be semi- or fully automated and employ systems based on the risk area and degree of human presence, as suggested by GIS data analyses. An integrated approach of multiple systems can be used to merge satellite data, aerial imagery, and personnel position via Global Positioning System (GPS) into a collective whole for near-real-time use by wireless Incident Command Centers.

A small, high-risk area that features thick vegetation, a strong human presence, or is close to a critical urban area can be monitored using a local sensor network. Detection systems may include wireless sensor networks that act as automated weather systems: detecting temperature, humidity, and smoke. These may be battery-powered, solar-powered, or tree-rechargeable: able to recharge their battery systems using the small electrical currents in plant material. Larger, medium-risk areas can be monitored by scanning towers that incorporate fixed cameras and sensors to detect smoke or additional factors such as the infrared signature of carbon dioxide produced by fires. Additional capabilities such as night vision, brightness detection, and color change detection may also be incorporated into sensor arrays. Satellite and aerial monitoring through the use of planes, helicopters, or UAVs can provide a wider view and may be sufficient to monitor very large, low-risk areas. These more sophisticated systems employ GPS and aircraft-mounted infrared or high-resolution visible cameras to identify and target wildfires. Satellite-mounted sensors such as Envisat's Advanced Along Track Scanning Radiometer and European Remote-Sensing Satellite's Along-Track Scanning Radiometer can measure infrared radiation emitted by fires, identifying hot spots greater than 39 °C (102 °F).
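At its simplest, this kind of thermal detection amounts to flagging pixels whose temperature exceeds a threshold well above the background, as in the 39 °C figure just quoted. The sketch below uses a synthetic scene and ignores the background statistics and multi-band tests that operational algorithms rely on:

```python
# Toy satellite hotspot detector: flag pixels whose brightness temperature
# exceeds a fixed threshold. Purely illustrative; not the actual ATSR
# algorithm, which uses multiple infrared bands and background statistics.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic brightness-temperature scene in °C: cool background, two fires.
scene = rng.normal(loc=20.0, scale=3.0, size=(100, 100))
scene[40:43, 60:64] = 85.0   # active fire front
scene[70:71, 10:12] = 55.0   # smaller smouldering fire

THRESHOLD_C = 39.0           # the detection threshold quoted above
hot = scene > THRESHOLD_C

rows, cols = np.nonzero(hot)
print(f"{hot.sum()} hotspot pixels detected")
for r, c in zip(rows, cols):
    print(f"pixel ({r}, {c}): {scene[r, c]:.1f} °C")
```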
The National Oceanic and Atmospheric Administration's Hazard Mapping System combines remote-sensing data from satellite sources such as Geostationary Operational Environmental Satellite (GOES), Moderate-Resolution Imaging Spectroradiometer (MODIS), and Advanced Very High Resolution Radiometer (AVHRR) for detection of fire and smoke plume locations. However, satellite detection is prone to offset errors, anywhere from 2 to 3 kilometers (1 to 2 mi) for MODIS and AVHRR data and up to 12 kilometers (7.5 mi) for GOES data. Satellites in geostationary orbits may become disabled, and satellites in polar orbits are often limited by their short window of observation time. Cloud cover and image resolution may also limit the effectiveness of satellite imagery.

In 2015, a new fire detection tool came into operation at the U.S. Department of Agriculture (USDA) Forest Service (USFS), which uses data from the Suomi National Polar-orbiting Partnership (NPP) satellite to detect smaller fires in more detail than previous space-based products. The high-resolution data is used with a computer model to predict how a fire will change direction based on weather and land conditions. The active fire detection product using data from Suomi NPP's Visible Infrared Imaging Radiometer Suite (VIIRS) increases the resolution of fire observations to 1,230 feet (375 meters). Previous NASA satellite data products available since the early 2000s observed fires at 3,280-foot (1-kilometer) resolution. The data is one of the intelligence tools used by the USFS and Department of the Interior agencies across the United States to guide resource allocation and strategic fire management decisions. The enhanced VIIRS fire product enables detection of much smaller fires every 12 hours or less, and provides more detail and consistent tracking of fire lines during long-duration wildfires – capabilities critical for early warning systems and support of routine mapping of fire progression. Active fire locations are available to users within minutes from the satellite overpass through data processing facilities at the USFS Remote Sensing Applications Center, which uses technologies developed by the NASA Goddard Space Flight Center Direct Readout Laboratory in Greenbelt, Maryland. The model uses data on weather conditions and the land surrounding an active fire to predict 12–18 hours in advance whether a blaze will shift direction. The state of Colorado decided to incorporate the weather-fire model in its firefighting efforts beginning with the 2016 fire season.

In 2014, an international campaign was organized in South Africa's Kruger National Park to validate fire detection products including the new VIIRS active fire data. In advance of that campaign, the Meraka Institute of the Council for Scientific and Industrial Research in Pretoria, South Africa, an early adopter of the VIIRS 375 m fire product, put it to use during several large wildfires in Kruger.

The demand for timely, high-quality fire information has increased in recent years. Wildfires in the United States burn an average of 7 million acres of land each year. For the last 10 years, the USFS and Department of the Interior have spent a combined average of about $2–4 billion annually on wildfire suppression.

Wildfire suppression depends on the technologies available in the area in which the wildfire occurs. In less developed nations the techniques used can be as simple as throwing sand or beating the fire with sticks or palm fronds.
In more advanced nations, the suppression methods vary due to increased technological capacity. Silver iodide can be used to encourage snowfall, while fire retardants and water can be dropped onto fires by unmanned aerial vehicles, planes, and helicopters. Complete fire suppression is no longer an expectation, but most wildfires are extinguished before they grow out of control. While more than 99% of the 10,000 new wildfires each year are contained, escaped wildfires under extreme weather conditions are difficult to suppress without a change in the weather. Wildfires in Canada and the US burn an average of 54,500 square kilometers (13,000,000 acres) per year.

Above all, fighting wildfires can become deadly. A wildfire's burning front may change direction unexpectedly and jump across fire breaks. Intense heat and smoke can lead to disorientation and loss of appreciation of the direction of the fire, which can make fires particularly dangerous. For example, during the 1949 Mann Gulch fire in Montana, USA, thirteen smokejumpers died when they lost their communication links, became disoriented, and were overtaken by the fire. In the February 2009 Victorian bushfires in Australia, at least 173 people died and 2,029 homes and 3,500 structures were lost when they became engulfed by wildfire. In California, the U.S. Forest Service spends about $200 million per year to suppress 98% of wildfires and up to $1 billion to suppress the other 2% of fires that escape initial attack and become large.

Wildland firefighters face several life-threatening hazards including heat stress, fatigue, smoke and dust, as well as the risk of other injuries such as burns, cuts and scrapes, animal bites, and even rhabdomyolysis. Especially in hot weather conditions, fires present the risk of heat stress, which can entail a feeling of heat, fatigue, weakness, vertigo, headache, or nausea. Heat stress can progress into heat strain, which entails physiological changes such as increased heart rate and core body temperature. This can lead to heat-related illnesses, such as heat rash, cramps, exhaustion or heat stroke. Various factors can contribute to the risks posed by heat stress, including strenuous work, personal risk factors such as age and fitness, dehydration, sleep deprivation, and burdensome personal protective equipment. Rest, cool water, and occasional breaks are crucial to mitigating the effects of heat stress.

Smoke, ash, and debris can also pose serious respiratory hazards to wildland firefighters. The smoke and dust from wildfires can contain gases such as carbon monoxide, sulfur dioxide and formaldehyde, as well as particulates such as ash and silica. To reduce smoke exposure, wildfire fighting crews should, whenever possible, rotate firefighters through areas of heavy smoke, avoid downwind firefighting, use equipment rather than people in holding areas, and minimize mop-up. Camps and command posts should also be located upwind of wildfires. Protective clothing and equipment can also help minimize exposure to smoke and ash. Firefighters are also at risk of cardiac events including strokes and heart attacks, and should maintain good physical fitness. Fitness programs, and medical screening and examination programs that include stress tests, can minimize the risk of cardiac problems among firefighters.
Other injury hazards wildland firefighters face include slips, trips and falls, burns, scrapes and cuts from tools and equipment, being struck by trees, vehicles, or other objects, plant hazards such as thorns and poison ivy, snake and animal bites, vehicle crashes, electrocution from power lines or lightning storms, and unstable building structures.

Fire retardants are used to slow wildfires by inhibiting combustion. They are aqueous solutions of ammonium phosphates and ammonium sulfates, as well as thickening agents. The decision to apply retardant depends on the magnitude, location and intensity of the wildfire. In certain instances, fire retardant may also be applied as a precautionary fire defense measure. Typical fire retardants contain the same agents as fertilizers. Fire retardant may also affect water quality through leaching, eutrophication, or misapplication. Fire retardant's effects on drinking water remain inconclusive. Dilution factors, including water body size, rainfall, and water flow rates, lessen the concentration and potency of fire retardant. Wildfire debris (ash and sediment) clogs rivers and reservoirs, increasing the risk of floods and erosion that ultimately slow and/or damage water treatment systems. There is continued concern about the effects of fire retardant on land, water, wildlife habitats, and watershed quality, and additional research is needed. However, on the positive side, fire retardant (specifically its nitrogen and phosphorus components) has been shown to have a fertilizing effect on nutrient-deprived soils and thus creates a temporary increase in vegetation.

Current USDA procedure maintains that the aerial application of fire retardant in the United States must clear waterways by a minimum of 300 feet in order to safeguard against the effects of retardant runoff. Aerial applications of fire retardant are required to avoid waterways and endangered species (plant and animal habitats). After any incident of fire retardant misapplication, the U.S. Forest Service requires that reports be made and impacts assessed in order to determine mitigation, remediation, and/or restrictions on future retardant uses in that area.

Wildfire modeling is concerned with the numerical simulation of wildfires in order to comprehend and predict fire behavior. Wildfire modeling aims to aid wildfire suppression, increase the safety of firefighters and the public, and minimize damage. Using computational science, wildfire modeling involves the statistical analysis of past fire events to predict spotting risks and front behavior. Various wildfire propagation models have been proposed in the past, including simple ellipses and egg- and fan-shaped models. Early attempts to determine wildfire behavior assumed terrain and vegetation uniformity. However, the exact behavior of a wildfire's front is dependent on a variety of factors, including wind speed and slope steepness. Modern growth models utilize a combination of past ellipsoidal descriptions and Huygens' Principle to simulate fire growth as a continuously expanding polygon, as sketched below. Extreme value theory may also be used to predict the size of large wildfires. However, large fires that exceed suppression capabilities are often regarded as statistical outliers in standard analyses, even though fire policies are more influenced by large wildfires than by small fires.
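A much-simplified sketch of one Huygens-style growth step follows: every vertex of the fire polygon advances along its outward normal at a rate biased toward the wind direction, a crude stand-in for the envelope of full elliptical wavelets used by real models. All rates here are invented for illustration, not calibrated values:

```python
# Simplified Huygens-style fire-perimeter growth. A real implementation
# would build the envelope of full elliptical wavelets; this toy version
# only captures the idea of a continuously expanding polygon.
import math

def grow_perimeter(vertices, wind_dir_rad, base_rate, wind_gain, dt):
    """Advance a closed counter-clockwise polygon (list of (x, y)) one step."""
    n = len(vertices)
    new_vertices = []
    for i, (x, y) in enumerate(vertices):
        # Outward normal estimated from the two neighbouring vertices.
        (x_prev, y_prev), (x_next, y_next) = vertices[i - 1], vertices[(i + 1) % n]
        tx, ty = x_next - x_prev, y_next - y_prev
        norm = math.hypot(tx, ty) or 1.0
        nx, ny = ty / norm, -tx / norm  # outward for counter-clockwise order

        # Spread is fastest downwind, slowest upwind (crude ellipse stand-in).
        align = nx * math.cos(wind_dir_rad) + ny * math.sin(wind_dir_rad)
        rate = base_rate * (1.0 + wind_gain * align)
        new_vertices.append((x + rate * dt * nx, y + rate * dt * ny))
    return new_vertices

# Start from a small circular ignition and grow for a few steps.
fire = [(10 * math.cos(2 * math.pi * k / 32), 10 * math.sin(2 * math.pi * k / 32))
        for k in range(32)]
for _ in range(5):
    fire = grow_perimeter(fire, wind_dir_rad=0.0, base_rate=2.0,
                          wind_gain=0.8, dt=1.0)
print(f"Perimeter now reaches x = {max(x for x, _ in fire):.1f} m downwind")
```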
Wildfire risk is the chance that a wildfire will start in or reach a particular area, and the potential loss of human values if it does. Risk is dependent on variable factors such as human activities, weather patterns, availability of wildfire fuels, and the availability or lack of resources to suppress a fire. Wildfires have continually been a threat to human populations. However, human-induced geographical and climatic changes are exposing populations more frequently to wildfires and increasing wildfire risk. It is speculated that the increase in wildfires arises from a century of wildfire suppression coupled with the rapid expansion of human developments into fire-prone wildlands. Wildfires are naturally occurring events that aid in promoting forest health. Global warming and climate change are causing higher temperatures and more droughts nationwide, which contributes to an increase in wildfire risk.

The most noticeable adverse effect of wildfires is the destruction of property. However, the release of hazardous chemicals from the burning of wildland fuels also significantly impacts health in humans. Wildfire smoke is composed primarily of carbon dioxide and water vapor. Other common smoke components present in lower concentrations are carbon monoxide, formaldehyde, acrolein, polyaromatic hydrocarbons, and benzene. Small particulates suspended in air, which come in solid form or in liquid droplets, are also present in smoke. 80–90% of wildfire smoke, by mass, is within the fine particle size class of 2.5 micrometers in diameter or smaller. Despite carbon dioxide's high concentration in smoke, it poses a low health risk due to its low toxicity. Rather, carbon monoxide and fine particulate matter, particularly 2.5 µm in diameter and smaller, have been identified as the major health threats. Other chemicals are considered to be significant hazards but are found in concentrations that are too low to cause detectable health effects.

The degree of wildfire smoke exposure to an individual depends on the severity and duration of the fire and the individual's proximity to it. People are exposed directly to smoke via the respiratory tract through inhalation of air pollutants. Indirectly, communities are exposed to wildfire debris that can contaminate soil and water supplies. The U.S. Environmental Protection Agency (EPA) developed the Air Quality Index (AQI), a public resource that provides national air quality standard concentrations for common air pollutants. The public can use this index as a tool to determine their exposure to hazardous air pollutants based on visibility range.
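As a rough illustration of how an index like the AQI maps a pollutant concentration onto a health category, the sketch below uses approximate EPA breakpoints for 24-hour PM2.5; the authoritative table is published (and periodically revised) by the EPA, so treat these values as indicative only:

```python
# Rough mapping from a 24-hour PM2.5 concentration (µg/m³) to an AQI-style
# health category. Breakpoints are approximate and for illustration only.
PM25_CATEGORIES = [
    (12.0, "Good"),
    (35.4, "Moderate"),
    (55.4, "Unhealthy for Sensitive Groups"),
    (150.4, "Unhealthy"),
    (250.4, "Very Unhealthy"),
    (500.4, "Hazardous"),
]

def pm25_category(concentration: float) -> str:
    """Return the health category whose upper bound first covers the value."""
    for upper_bound, label in PM25_CATEGORIES:
        if concentration <= upper_bound:
            return label
    return "Beyond the AQI scale"

# Typical background air vs. a heavy wildfire smoke day.
for c in (8.0, 40.0, 180.0):
    print(f"{c:6.1f} µg/m³ -> {pm25_category(c)}")
```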
After a wildfire, hazards remain. Residents returning to their homes may be at risk from falling fire-weakened trees. Humans and pets may also be harmed by falling into ash pits.

Firefighters are at the greatest risk for acute and chronic health effects resulting from wildfire smoke exposure. Due to firefighters' occupational duties, they are frequently exposed to hazardous chemicals at close proximity for long periods of time. A case study on wildfire smoke exposure among wildland firefighters shows that firefighters are exposed to significant levels of carbon monoxide and respiratory irritants above OSHA permissible exposure limits (PEL) and ACGIH threshold limit values (TLV), with 5–10% of firefighters overexposed. The study obtained exposure concentrations for one wildland firefighter over a 10-hour shift spent holding down a fireline. The firefighter was exposed to a wide range of carbon monoxide and respiratory irritant (a combination of particulate matter 3.5 µm and smaller, acrolein, and formaldehyde) levels. Carbon monoxide levels reached up to 160 ppm and the TLV irritant index value reached a high of 10. In contrast, the OSHA PEL for carbon monoxide is 30 ppm, and for the TLV respiratory irritant index the calculated threshold limit value is 1; any value above 1 exceeds exposure limits. Between 2001 and 2012, over 200 fatalities occurred among wildland firefighters. In addition to heat and chemical hazards, firefighters are also at risk for electrocution from power lines; injuries from equipment; slips, trips, and falls; injuries from vehicle rollovers; heat-related illness; insect bites and stings; stress; and rhabdomyolysis.

Residents in communities surrounding wildfires are exposed to lower concentrations of chemicals, but they are at a greater risk for indirect exposure through water or soil contamination. Exposure to residents is greatly dependent on individual susceptibility. Vulnerable persons such as children (ages 0–4), the elderly (ages 65 and older), smokers, and pregnant women are at an increased risk due to their already compromised body systems, even when the exposures are present at low chemical concentrations and for relatively short exposure periods. Additionally, there is evidence of an increase in maternal stress, as documented by researchers M.H. O'Donnell and A.M. Behie, thus affecting birth outcomes. In Australia, studies show that male infants with drastically higher average birth weights were born in mostly severely fire-affected areas. This is attributed to the fact that maternal signals directly affect fetal growth patterns.

Wildfire smoke contains particulate matter that may have adverse effects upon the human respiratory system. Evidence of the health effects of wildfire smoke should be relayed to the public so that exposure may be limited. Evidence of health effects can also be used to influence policy to promote positive health outcomes. Inhalation of smoke from a wildfire can be a health hazard. Wildfire smoke is composed of combustion products, i.e. carbon dioxide, carbon monoxide, water vapor, particulate matter, organic chemicals, nitrogen oxides and other compounds. The principal health concern is the inhalation of particulate matter and carbon monoxide.

Particulate matter (PM) is a type of air pollution made up of particles of dust and liquid droplets. Particles are characterized into three categories based on their diameter: coarse PM, fine PM, and ultrafine PM. Coarse particles are between 2.5 micrometers and 10 micrometers, fine particles measure 0.1 to 2.5 micrometers, and ultrafine particles are less than 0.1 micrometer. Each size can enter the body through inhalation, but the PM impact on the body varies by size. Coarse particles are filtered by the upper airways, where they can accumulate and cause pulmonary inflammation. This can result in eye and sinus irritation as well as sore throat and coughing. Coarse PM is often composed of materials that are heavier and more toxic, leading to short-term effects with a stronger impact. Smaller particulates move further into the respiratory system, creating issues deep in the lungs and in the bloodstream. In asthma patients, PM2.5 causes inflammation but also increases oxidative stress in the epithelial cells. These particulates also cause apoptosis and autophagy in lung epithelial cells. Both processes damage the cells and impair their function. This damage affects those with respiratory conditions such as asthma, in which lung tissue and function are already compromised.
The third PM type is ultrafine PM (UFP). UFP can enter the bloodstream like PM2.5; however, studies show that it works into the blood much more quickly. The inflammation and epithelial damage done by UFP have also been shown to be much more severe. PM2.5 is the greatest concern with regard to wildfires. It is particularly hazardous to the very young, the elderly, and those with chronic conditions such as asthma, chronic obstructive pulmonary disease (COPD), cystic fibrosis and cardiovascular conditions. The illnesses most commonly associated with exposure to fine particles from wildfire smoke are bronchitis, exacerbation of asthma or COPD, and pneumonia. Respiratory symptoms of these complications include wheezing and shortness of breath; cardiovascular symptoms include chest pain, rapid heart rate and fatigue.

Smoke from wildfires can cause health problems, especially for children and those who already have respiratory problems. Several epidemiological studies have demonstrated a close association between air pollution and respiratory allergic diseases such as bronchial asthma. An observational study of smoke exposure related to the 2007 San Diego wildfires revealed an increase both in healthcare utilization and respiratory diagnoses, especially asthma, among the group sampled. Projected climate scenarios of wildfire occurrences predict significant increases in respiratory conditions among young children. Particulate matter (PM) triggers a series of biological processes, including inflammatory immune response and oxidative stress, which are associated with harmful changes in allergic respiratory diseases. Although some studies demonstrated no significant acute changes in lung function among people with asthma related to PM from wildfires, a possible explanation for these counterintuitive findings is increased use of quick-relief medications in response to elevated levels of smoke among those already diagnosed with asthma. In investigating the association between medication use for obstructive lung disease and wildfire exposure, researchers found increases both in the usage of reliever medication and in the initiation of long-term control medication. More specifically, some people with asthma reported higher use of quick-relief medications. After two major wildfires in California, researchers found an increase in physician prescriptions for quick-relief medications in the years following the wildfires compared to the year before each occurrence. There is consistent evidence of an association between wildfire smoke and the exacerbation of asthma.

Carbon monoxide (CO) is a colorless, odorless gas that can be found at the highest concentration in close proximity to a smoldering fire. For this reason, carbon monoxide inhalation is a serious threat to the health of wildfire firefighters. CO in smoke can be inhaled into the lungs where it is absorbed into the bloodstream and reduces oxygen delivery to the body's vital organs. At high concentrations, it can cause headache, weakness, dizziness, confusion, nausea, disorientation, visual impairment, coma and even death. However, even at lower concentrations, such as those found at wildfires, individuals with cardiovascular disease may experience chest pain and cardiac arrhythmia. A recent study tracking the number and cause of wildfire firefighter deaths from 1990–2006 found that 21.9% of the deaths occurred from heart attacks.

Another important and somewhat less obvious category of wildfire health effects is psychiatric diseases and disorders.
Both adults and children, in countries ranging from the United States and Canada to Greece and Australia, who were directly or indirectly affected by wildfires have been found by researchers to demonstrate several mental conditions linked to their experience of the fires. These include post-traumatic stress disorder (PTSD), depression, anxiety, and phobias.

In a new twist to wildfire health effects, former uranium mining sites were burned over in the summer of 2012 near North Fork, Idaho. This prompted concern among area residents and Idaho State Department of Environmental Quality officials over the potential spread of radiation in the resultant smoke, since those sites had never been completely cleaned of radioactive remains.

The western US has seen an increase in both the frequency and intensity of wildfires over the last several decades, an increase that has been attributed to the arid climate of the western US and the effects of global warming. An estimated 46 million people were exposed to wildfire smoke from 2004 to 2009 in the western United States. Evidence has demonstrated that wildfire smoke can increase levels of particulate matter in the atmosphere. The EPA has defined acceptable concentrations of particulate matter in the air: the National Ambient Air Quality Standards, part of the Clean Air Act, provide mandated guidelines for pollutant levels and require the monitoring of ambient air quality. Because of these monitoring programs and the incidence of several large wildfires near populated areas, epidemiological studies have been conducted, and they demonstrate an association between negative human health effects and an increase in fine particulate matter due to wildfire smoke.

The size of the particulate matter is significant, as smaller (fine) particulate matter is easily inhaled into the human respiratory tract and can reach deep lung tissue, causing respiratory distress, illness, or disease. An increase in PM from smoke emitted by the Hayman fire in Colorado in June 2002 was associated with an increase in respiratory symptoms in patients with COPD. Examining the wildfires in Southern California in October 2003 in a similar manner, investigators showed an increase in hospital admissions due to asthma symptoms during exposure to peak concentrations of PM in smoke. Another epidemiological study found a 7.2% (95% confidence interval: 0.25%, 15%) increase in the risk of respiratory-related hospital admissions during smoke-wave days with high wildfire-specific PM2.5 compared with matched non-smoke-wave days. Children participating in the Children's Health Study were also found to have increases in eye and respiratory symptoms, medication use, and physician visits. Recently, it was demonstrated that mothers who were pregnant during the fires gave birth to babies with a slightly reduced average birth weight compared to those who were not exposed to wildfire during pregnancy, suggesting that pregnant women may also be at greater risk of adverse effects from wildfire.
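To make the reported effect size concrete, the short Python sketch below applies the 7.2% central estimate and its confidence bounds to a hypothetical baseline count of admissions; the baseline is an assumption for illustration, not data from the study.

    # Illustration of a 7.2% (95% CI: 0.25%, 15%) increase in risk
    # applied to an assumed baseline of 1,000 admissions.
    baseline = 1000
    for label, pct in [("central estimate", 7.2), ("lower CI", 0.25), ("upper CI", 15.0)]:
        print(f"{label}: {baseline * (1 + pct / 100):.0f} admissions")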
Worldwide, it is estimated that 339,000 people die due to the effects of wildfire smoke each year. While the size of particulate matter is an important consideration for health effects, the chemical composition of particulate matter (PM2.5) from wildfire smoke should also be considered. Previous studies have demonstrated that the chemical composition of PM2.5 from wildfire smoke can yield different estimates of human health outcomes compared with other sources of smoke. Health outcomes for people exposed to wildfire smoke may differ from those for people exposed to smoke from alternative sources such as solid fuels. - List of wildfires - Dry thunderstorm - Fire-adapted communities - Fire ecology - Floods and landslides after wildfires - Forest fire weather index - Remote Automated Weather Station - Smoke inhalation - Weather forecasting - Women in firefighting - Cambridge Advanced Learner's Dictionary (Third ed.). Cambridge University Press. 2008. ISBN 978-0-521-85804-5. Archived from the original on 13 August 2009. - "BBC Earth – Forest fire videos – See how fire started on Earth". Archived from the original on 16 October 2015. Retrieved 13 February 2016. - Scott, Andrew C.; Glasspool, Ian J. (18 July 2006). "The diversification of Paleozoic fire systems and fluctuations in atmospheric oxygen concentration". Proceedings of the National Academy of Sciences. 103 (29): 10861–10865. Bibcode: 2006PNAS..10310861S. doi: 10.1073/pnas.0604090103. ISSN 0027-8424. PMC 1544139. PMID 16832054. Archived from the original on 20 April 2015. - Bowman, David M. J. S.; Balch, Jennifer K.; Artaxo, Paulo; Bond, William J.; Carlson, Jean M.; Cochrane, Mark A.; D'Antonio, Carla M.; DeFries, Ruth S.; Doyle, John C. (24 April 2009). "Fire in the Earth System". Science. 324 (5926): 481–484. Bibcode: 2009Sci...324..481B. doi: 10.1126/science.1163886. ISSN 0036-8075. PMID 19390038. Archived from the original on 6 July 2016. - Flannigan, M.D.; B.D. Amiro; K.A. Logan; B.J. Stocks & B.M. Wotton (2005). "Forest Fires and Climate Change in the 21st century" (PDF). Mitigation and Adaptation Strategies for Global Change. 11 (4): 847–859. doi: 10.1007/s11027-005-9020-7. Archived from the original (PDF) on 25 March 2009. Retrieved 26 June 2009. - "The Ecological Importance of Mixed-Severity Fires – ScienceDirect". www.sciencedirect.com. Archived from the original on 1 January 2017. Retrieved 22 August 2016. - Hutto, Richard L. (1 December 2008). "The Ecological Importance of Severe Wildfires: Some Like It Hot". Ecological Applications. 18 (8): 1827–1834. doi: 10.1890/08-0895.1. ISSN 1939-5582. - Stephen J. Pyne. "How Plants Use Fire (And Are Used By It)". NOVA online. Archived from the original on 8 August 2009. Retrieved 30 June 2009. - Graham, et al., 12, 36 - National Wildfire Coordinating Group Communicator's Guide For Wildland Fire Management, 4–6. - "National Wildfire Coordinating Group Fireline Handbook, Appendix B: Fire Behavior" (PDF). National Wildfire Coordinating Group. April 2006. Archived (PDF) from the original on 17 December 2008. Retrieved 11 December 2008. - Westerling, A. L.; Hidalgo, H. G.; Cayan, D. R.; Swetnam, T. W. (18 August 2006). "Warming and Earlier Spring Increase Western U.S. Forest Wildfire Activity". Science. 313 (5789): 940–943. Bibcode: 2006Sci...313..940W. doi: 10.1126/science.1128834. ISSN 0036-8075. PMID 16825536. Archived from the original on 30 August 2016.
- "International Experts Study Ways to Fight Wildfires". Voice of America (VOA) News. 24 June 2009. Archived from the original on 7 January 2010. Retrieved 9 July 2009. - Interagency Strategy for the Implementation of the Federal Wildland Fire Policy, entire text - National Wildfire Coordinating Group Communicator's Guide For Wildland Fire Management, entire text - Fire. The Australian Experience, 5–6. - Graham, et al., 15. - Noss, Reed F.; Franklin, Jerry F.; Baker, William L.; Schoennagel, Tania; Moyle, Peter B. (2006-11-01). "Managing fire-prone forests in the western United States". Frontiers in Ecology and the Environment. 4 (9): 481–487. doi: 10.1890/1540-9295(2006)4[481:MFFITW]2.0.CO;2. ISSN 1540-9309. - Lydersen, Jamie M.; North, Malcolm P.; Collins, Brandon M. (2014-09-15). "Severity of an uncharacteristically large wildfire, the Rim Fire, in forests with relatively restored frequent fire regimes". Forest Ecology and Management. 328: 326–334. doi: 10.1016/j.foreco.2014.06.005. - van Wagtendonk (1996), 1164 - "California's Fire Hazard Severity Zone Update and Building Standards Revision" (PDF). CAL FIRE. May 2007. Archived (PDF) from the original on 26 February 2009. Retrieved 18 December 2008. - "California Senate Bill No. 1595, Chapter 366" (PDF). State of California. 27 September 2008. Archived (PDF) from the original on 30 March 2012. Retrieved 18 December 2008. - "Wildfire Prevention Strategies" (PDF). National Wildfire Coordinating Group. March 1998. p. 17. Archived (PDF) from the original on 9 December 2008. Retrieved 3 December 2008. - Scott, A (2000). "The Pre-Quaternary history of fire". Palaeogeography, Palaeoclimatology, Palaeoecology. 164 (1–4): 281–329. Bibcode: 2000PPP...164..281S. doi: 10.1016/S0031-0182(00)00192-9. - Pyne, Stephen J.; Andrews, Patricia L.; Laven, Richard D. (1996). Introduction to wildland fire (2nd ed.). John Wiley and Sons. p. 65. ISBN 978-0-471-54913-0. Retrieved 26 January 2010. - "News 8 Investigation: SDG&E Could Be Liable For Power Line Wildfires". UCAN News. 5 November 2007. Archived from the original on 13 August 2009. Retrieved 20 July 2009. - Finney, Mark A.; Maynard, Trevor B.; McAllister, Sara S.; Grob, Ian J. (2013). A Study of Ignition by Rifle Bullets. Fort Collins, CO: United States Forest Service. Retrieved 15 June 2014. - The Associated Press (16 November 2006). "Orangutans in losing battle with slash-and-burn Indonesian farmers". TheStar online. Archived from the original on 13 August 2009. Retrieved 1 December 2008. - Karki, 4. - Liu, Zhihua; Yang, Jian; Chang, Yu; Weisberg, Peter J.; He, Hong S. (June 2012). "Spatial patterns and drivers of fire occurrence and its future trend under climate change in a boreal forest of Northeast China". Global Change Biology. 18 (6): 2041–2056. Bibcode: 2012GCBio..18.2041L. doi: 10.1111/j.1365-2486.2012.02649.x. ISSN 1354-1013. - de Rigo, Daniele; Libertà, Giorgio; Houston Durrant, Tracy; Artés Vivancos, Tomàs; San-Miguel-Ayanz, Jesús (2017). Forest fire danger extremes in Europe under climate change: variability and uncertainty. Luxembourg: Publication Office of the European Union. p. 71. doi: 10.2760/13180. ISBN 978-92-79-77046-3. - Krock, Lexi (June 2002). "The World on Fire". NOVA online – Public Broadcasting System (PBS). Archived from the original on 27 October 2009. Retrieved 13 July 2009. - Balch, Jennifer K.; Bradley, Bethany A.; Abatzoglou, John T.; Nagy, R. Chelsea; Fusco, Emily J.; Mahood, Adam L. (2017). "Human-started wildfires expand the fire niche across the United States". 
Proceedings of the National Academy of Sciences. 114 (11): 2946–2951. Bibcode: 2017PNAS..114.2946B. doi: 10.1073/pnas.1617394114. ISSN 1091-6490. PMC 5358354. PMID 28242690. - Krajick, Kevin (May 2005). "Fire in the hole". Smithsonian Magazine. Retrieved 30 July 2009. - Graham, et al., iv. - Graham, et al., 9, 13 - Rincon, Paul (9 March 2005). "Asian peat fires add to warming". British Broadcasting Corporation (BBC) News. Archived from the original on 19 December 2008. Retrieved 9 December 2008. - Graham, et al., iv, 10, 14 - Scott, Andrew C. (28 January 2014). Fire on Earth: An Introduction. With Bowman, D. M. J. S.; Bond, William J.; Pyne, Stephen J.; Alexander, Martin E. Chichester, West Sussex. ISBN 9781119953579. OCLC 854761793. - "Global Fire Initiative: Fire and Invasives". The Nature Conservancy. Archived from the original on 12 April 2009. Retrieved 3 December 2008. - Graham, et al., iv, 8, 11, 15. - Butler, Rhett (19 June 2008). "Global Commodities Boom Fuels New Assault on Amazon". Yale School of Forestry & Environmental Studies. Archived from the original on 11 April 2009. Retrieved 9 July 2009. - "The Science of Wildland fire". National Interagency Fire Center. Archived from the original on 5 November 2008. Retrieved 21 November 2008. - Graham, et al., 12. - National Wildfire Coordinating Group Communicator's Guide For Wildland Fire Management, 3. - "Ashes cover areas hit by Southern Calif. fires". MSNBC. Associated Press. 15 November 2008. Archived from the original on 9 December 2008. Retrieved 4 December 2008. - "Influence of Forest Structure on Wildfire Behavior and the Severity of Its Effects" (PDF). US Forest Service. November 2003. Archived (PDF) from the original on 17 December 2008. Retrieved 19 November 2008. - "Prepare for a Wildfire". Federal Emergency Management Agency (FEMA). Archived from the original on 29 October 2008. Retrieved 1 December 2008. - Glossary of Wildland Fire Terminology, 74. - de Souza Costa and Sandberg, 229–230. - "Archimedes Death Ray: Idea Feasibility Testing". Massachusetts Institute of Technology (MIT). October 2005. Archived from the original on 7 February 2009. Retrieved 1 February 2009. - "Satellites are tracing Europe's forest fire scars". European Space Agency. 27 July 2004. Archived from the original on 10 November 2008. Retrieved 12 January 2009. - Graham, et al., 10–11. - "Protecting Your Home From Wildfire Damage" (PDF). Florida Alliance for Safe Homes (FLASH). p. 5. Archived (PDF) from the original on 19 July 2011. Retrieved 3 March 2010. - Billing, 5–6 - Graham, et al., 12 - Shea, Neil (July 2008). "Under Fire". National Geographic. Archived from the original on 15 February 2009. Retrieved 8 December 2008. - Graham, et al., 16. - Graham, et al., 9, 16. - Volume 1: The Kilmore East Fire. 2009 Victorian Bushfires Royal Commission. Victorian Bushfires Royal Commission, Australia. July 2010. ISBN 978-0-9807408-2-0. Archived from the original on 29 October 2013. Retrieved 26 October 2013. - National Wildfire Coordinating Group Communicator's Guide For Wildland Fire Management, 4. - Graham, et al., 16–17. - Olson, et al., 2 - "The New Generation Fire Shelter" (PDF). National Wildfire Coordinating Group. March 2003. p. 19. Archived (PDF) from the original on 16 January 2009. Retrieved 16 January 2009. - Glossary of Wildland Fire Terminology, 69. - "Chronological List of U.S. Billion Dollar Events". National Oceanic and Atmospheric Administration (NOAA) Satellite and Information Service.
Archived from the original on 15 September 2001. Retrieved 4 February 2009. - McKenzie, et al., 893 - Graham, et al., 2 - Westerling, A. L.; Hidalgo, H. G.; Cayan, D. R.; Swetnam, T. W. (August 2006). "Warming and earlier spring increase western U.S. forest wildfire activity". Science. 313 (5789): 940–3. Bibcode: 2006Sci...313..940W. doi: 10.1126/science.1128834. ISSN 0036-8075. PMID 16825536. Archived from the original on 30 July 2009. - Bill Gabbert (9 November 2015). "Was the 2014 wildfire season in California affected by climate change?". Wildfire Today. Archived from the original on 14 May 2016. Retrieved 17 May 2016. - Yoon; et al. (2015). "Extreme Fire Season in California: A Glimpse Into the Future?". Bulletin of the American Meteorological Society. 96 (11): S5–S9. Bibcode: 2015BAMS...96S...5Y. doi: 10.1175/BAMS-D-15-00114.1. Archived from the original on 1 February 2016. - Pierce, Jennifer L.; Meyer, Grant A.; Timothy Jull, A. J. (4 November 2004). "Fire-induced erosion and millennial-scale climate change in northern ponderosa pine forests". Nature. 432 (7013): 87–90. Bibcode: 2004Natur.432...87P. doi: 10.1038/nature03058. ISSN 0028-0836. PMID 15525985. Archived from the original on 1 February 2017. - de Souza Costa and Sandberg, 228 - National Wildfire Coordinating Group Communicator's Guide For Wildland Fire Management, 5. - San-Miguel-Ayanz, et al., 364. - Glossary of Wildland Fire Terminology, 73. - Donato, Daniel C.; Fontaine, Joseph B.; Robinson, W. Douglas; Kauffman, J. Boone; Law, Beverly E. (1 January 2009). "Vegetation response to a short interval between high-severity wildfires in a mixed-evergreen forest". Journal of Ecology. 97 (1): 142–154. doi: 10.1111/j.1365-2745.2008.01456.x. ISSN 1365-2745. - Interagency Strategy for the Implementation of the Federal Wildland Fire Policy, 3, 37. - Graham, et al., 3. - Keeley, J.E. (1995). "Future of California floristics and systematics: wildfire threats to the California flora" (PDF). Madrono. 42: 175–179. Archived (PDF) from the original on 7 May 2009. Retrieved 26 June 2009. - Zedler, P.H. (1995). "Fire frequency in southern California shrublands: biological effects and management options". In Keeley, J.E.; Scott, T. Brushfires in California wildlands: ecology and resource management. Fairfield, WA: International Association of Wildland Fire. pp. 101–112. - van Wagtendonk (2007), 14. - Nepstad, 4, 8–11 - Lindsey, Rebecca (5 March 2008). "Amazon fires on the rise". Earth Observatory (NASA). Archived from the original on 13 August 2009. Retrieved 9 July 2009. - Nepstad, 4 - "Bushfire and Catchments: Effects of Fire on Soils and Erosion". eWater Cooperative Research Centre. Retrieved 8 January 2009. - Refern, Neil; Vyner, Blaise. "Fylingdales Moor a lost landscape rises from the ashes". Current Archaeology. XIX (226): 20–27. ISSN 0011-3212. - Running, S.W. (2008). "Ecosystem Disturbance, Carbon and Climate". Science. 321 (5889): 652–653. doi: 10.1126/science.1159607. PMID 18669853. - Higuera, Philip E.; Chipman, Melissa L.; Barnes, Jennifer L.; Urban, Michael A.; Hu, Feng Sheng (2011). "Variability of tundra fire regimes in Arctic Alaska: Millennial-scale patterns and ecological implications". Ecological Applications. 21 (8): 3211–3226. doi: 10.1890/11-0387.1. - Santos, Robert L. (1997). "Section Three: Problems, Cares, Economics, and Species". The Eucalyptus of California. California State University. Archived from the original on 2 June 2010. Retrieved 26 June 2009. - Fire. The Australian Experience, 5. - Keeley, J.E. & C.J.
Fotheringham (1997). "Trace gas emission in smoke-induced germination" (PDF). Science. 276 (5316): 1248–1250. CiteSeerX 10.1.1.3.2708. doi: 10.1126/science.276.5316.1248. Archived (PDF) from the original on 6 May 2009. Retrieved 26 June 2009. - Flematti GR; Ghisalberti EL; Dixon KW; Trengove RD (2004). "A compound from smoke that promotes seed germination". Science. 305 (5686): 977. doi: 10.1126/science.1099944. PMID 15247439. - Karki, 3. - Pyne, Stephen. "How Plants Use Fire (And How They Are Used By It)". Nova. Archived from the original on 12 September 2013. Retrieved 26 September 2013. - "Giant Sequoias and Fire". US National Park Service. Archived from the original on 28 April 2007. Retrieved 30 June 2009. - "Fire Management Assessment of the Caribbean Pine (Pinus caribea) Forest Ecosystems on Andros and Abaco Islands, Bahamas" (PDF). TNC Global Fire Initiative. The Nature Conservancy. September 2004. Archived (PDF) from the original on 1 December 2008. Retrieved 27 August 2009. - Wang, P.K. (2003). The physical mechanism of injecting biomass burning materials into the stratosphere during fire-induced thunderstorms. San Francisco, California: American Geophysical Union fall meeting. - Fromm, M.; Stocks, B.; Servranckx, R.; Lindsey, D. Smoke in the Stratosphere: What Wildfires have Taught Us About Nuclear Winter; abstract #U14A-04. American Geophysical Union, Fall Meeting 2006. Bibcode: 2006AGUFM.U14A..04F. - Graham, et al., 17 - John R. Scala; et al. "Meteorological Conditions Associated with the Rapid Transport of Canadian Wildfire Products into the Northeast during 5–8 July 2002" (PDF). American Meteorological Society. Archived (PDF) from the original on 26 February 2009. Retrieved 4 February 2009. - Breyfogle, Steve; Ferguson, Sue A. (December 1996). "User Assessment of Smoke-Dispersion Models for Wildland Biomass Burning" (PDF). US Forest Service. Archived (PDF) from the original on 26 February 2009. Retrieved 6 February 2009. - Bravo, A.H.; E. R. Sosa; A. P. Sánchez; P. M. Jaimes & R. M. I. Saavedra (2002). "Impact of wildfires on the air quality of Mexico City, 1992–1999". Environmental Pollution. 117 (2): 243–253. doi: 10.1016/S0269-7491(01)00277-9. PMID 11924549. - Dore, S.; Kolb, T. E.; Montes-Helu, M.; Eckert, S. E.; Sullivan, B. W.; Hungate, B. A.; Kaye, J. P.; Hart, S. C.; Koch, G. W. (1 April 2010). "Carbon and water fluxes from ponderosa pine forests disturbed by wildfire and thinning". Ecological Applications. 20 (3): 663–683. doi: 10.1890/09-0934.1. ISSN 1939-5582. - Douglass, R. (2008). "Quantification of the health impacts associated with fine particulate matter due to wildfires. MS Thesis" (PDF). Nicholas School of the Environment and Earth Sciences of Duke University. Archived (PDF) from the original on 10 June 2010. - National Center for Atmospheric Research (13 October 2008). "Wildfires Cause Ozone Pollution to Violate Health Standards". Geophysical Research Letters. Archived from the original on 27 September 2011. Retrieved 4 February 2009. - Page, Susan E.; Florian Siegert; John O. Rieley; Hans-Dieter V. Boehm; Adi Jaya & Suwido Limin (11 July 2002). "The amount of carbon released from peat and forest fires in Indonesia during 1997". Nature. 420 (6911): 61–65. Bibcode: 2002Natur.420...61P. doi: 10.1038/nature01131. PMID 12422213. - Tacconi, Luca (February 2003). "Fires in Indonesia: Causes, Costs, and Policy Implications (CIFOR Occasional Paper No. 38)" (PDF). Bogor, Indonesia: Center for International Forestry Research. ISSN 0854-9818.
Archived (PDF) from the original on 26 February 2009. Retrieved 6 February 2009. - Baumgardner, D.; et al. (2003). "Warming of the Arctic lower stratosphere by light absorbing particles". American Geophysical Union fall meeting. San Francisco, California. - Glasspool, IJ; Edwards, D; Axe, L (2004). "Charcoal in the Silurian as evidence for the earliest wildfire". Geology. 32 (5): 381–383. Bibcode: 2004Geo....32..381G. doi: 10.1130/G20363.1. - Edwards, D.; Axe, L. (April 2004). "Anatomical Evidence in the Detection of the Earliest Wildfires". PALAIOS. 19 (2): 113–128. Bibcode: 2004Palai..19..113E. doi: 10.1669/0883-1351(2004)019<0113:AEITDO>2.0.CO;2. ISSN 0883-1351. - Scott, C.; Glasspool, J. (Jul 2006). "The diversification of Paleozoic fire systems and fluctuations in atmospheric oxygen concentration" (Free full text). Proceedings of the National Academy of Sciences of the United States of America. 103 (29): 10861–10865. Bibcode: 2006PNAS..10310861S. doi: 10.1073/pnas.0604090103. ISSN 0027-8424. PMC 1544139. PMID 16832054. - Pausas and Keeley, 594 - Historically, the Cenozoic has been divided up into the Quaternary and Tertiary sub-eras, as well as the Neogene and Paleogene periods. The 2009 version of the ICS time chart Archived 29 December 2009 at the Wayback Machine. recognizes a slightly extended Quaternary as well as the Paleogene and a truncated Neogene, the Tertiary having been demoted to informal status. - Pausas and Keeley, 595 - Pausas and Keeley, 596 - "Redwood Trees" Archived 1 September 2015 at the Wayback Machine. - Pausas and Keeley, 597 - Rackham, Oliver (November–December 2003). "Fire in the European Mediterranean: History". AridLands Newsletter. 54. Archived from the original on 11 October 2008. Retrieved 17 July 2009. - Rackham, 229–230 - Goldammer, Johann G. (5–9 May 1998). "History of Fire in Land-Use Systems of the Baltic Region: Implications on the Use of Prescribed Fire in Forestry, Nature Conservation and Landscape Management". First Baltic Conference on Forest Fires. Radom-Katowice, Poland: Global Fire Monitoring Center (GFMC). Archived from the original on 16 August 2009. - "Wildland fire – An American legacy" (PDF). Fire Management Today. 60 (3): 4, 5, 9, 11. Summer 2000. Archived (PDF) from the original on 1 April 2010. Retrieved 31 July 2009. - Fire. The Australian Experience, 7. - Karki, 27. - Meyer, G.A.; Wells, S.G.; Jull, A.J.T. (1995). "Fire and alluvial chronology in Yellowstone National Park: Climatic and intrinsic controls on Holocene geomorphic processes". GSA Bulletin. 107 (10): 1211–1230. Bibcode: 1995GSAB..107.1211M. doi: 10.1130/0016-7606(1995)107<1211:FAACIY>2.3.CO;2. - Pitkänen, et al., 15–16 and 27–30 - J. R. Marlon; P. J. Bartlein; C. Carcaillet; D. G. Gavin; S. P. Harrison; P. E. Higuera; F. Joos; M. J. Power; I. C. Prentice (2008). "Climate and human influences on global biomass burning over the past two millennia". Nature Geoscience. 1 (10): 697–702. Bibcode: 2008NatGe...1..697M. doi: 10.1038/ngeo313. Archived from the original on 22 October 2008. University of Oregon Summary, accessed 2 February 2010 Archived 27 September 2008 at the Wayback Machine. - Stephens, Scott L.; Martin, Robert E.; Clinton, Nicholas E. (2007). "Prehistoric fire area and emissions from California's forests, woodlands, shrublands, and grasslands". Forest Ecology and Management. 251 (3): 205–216. doi: 10.1016/j.foreco.2007.06.005. Retrieved 4 May 2015. - "Researchers Detect a Global Drop in Fires". NASA Earth Observatory. 30 June 2017.
Archived from the original on 8 December 2017. Retrieved 4 July 2017. - Andela, N.; Morton, D.C.; et al. (30 June 2017). "A human-driven decline in global burned area". Science. 356 (6345): 1356–1362. Bibcode: 2017Sci...356.1356A. doi: 10.1126/science.aal4108. PMC 6047075. PMID 28663495. Archived from the original on 2 July 2017. Retrieved 4 July 2017. - Fires spark biodiversity criticism of Sweden's forest industry - The Great Lie: Monoculture Trees as Forests - Plant flammability list - Fire-prone plant list - Karki, 6. - van Wagtendonk (1996), 1156. - Interagency Strategy for the Implementation of the Federal Wildland Fire Policy, 42. - San-Miguel-Ayanz, et al., 361. - Karki, 7, 11–19. - "Smokey's Journey". Smokeybear.com. Archived from the original on 6 March 2010. Retrieved 26 January 2010. - "Backburn". MSN Encarta. Archived from the original on 10 July 2009. Retrieved 9 July 2009. - "UK: The Role of Fire in the Ecology of Heathland in Southern Britain". International Forest Fire News. 18: 80–81. January 1998. Archived from the original on 16 July 2011. Retrieved 9 July 2009. - "Prescribed Fires". SmokeyBear.com. Archived from the original on 20 October 2008. Retrieved 21 November 2008. - Karki, 14. - Manning, Richard (1 December 2007). "Our Trial by Fire". onearth.org. Archived from the original on 30 June 2008. Retrieved 7 January 2009. - "Extreme Events: Wild & Forest Fire". National Oceanic and Atmospheric Administration (NOAA). Archived from the original on 14 January 2009. Retrieved 7 January 2009. - San-Miguel-Ayanz, et al., 362. - "An Integration of Remote Sensing, GIS, and Information Distribution for Wildfire Detection and Management" (PDF). Photogrammetric Engineering and Remote Sensing. 64 (10): 977–985. October 1998. Archived from the original (PDF) on 16 August 2009. Retrieved 26 June 2009. - "Radio communication keeps rangers in touch". Canadian Broadcasting Corporation (CBC) Digital Archives. 21 August 1957. Archived from the original on 13 August 2009. Retrieved 6 February 2009. - "Wildfire Detection and Control". Alabama Forestry Commission. Archived from the original on 20 November 2008. Retrieved 12 January 2009. - "Evaluation of three wildfire smoke detection systems", 4 - Fok, Chien-Liang; Roman, Gruia-Catalin & Lu, Chenyang (29 November 2004). "Mobile Agent Middleware for Sensor Networks: An Application Case Study". Washington University in St. Louis. Archived from the original (PDF) on 3 January 2007. Retrieved 15 January 2009. - Chaczko, Z.; Ahmad, F. (July 2005). Wireless Sensor Network Based System for Fire Endangered Areas. Third International Conference on Information Technology and Applications. 2. pp. 203–207. doi: 10.1109/ICITA.2005.313. ISBN 978-0-7695-2316-3. Retrieved 15 January 2009. - "Wireless Weather Sensor Networks for Fire Management". University of Montana – Missoula. Archived from the original on 4 April 2009. Retrieved 19 January 2009. - Solobera, Javier (9 April 2010). "Detecting Forest Fires using Wireless Sensor Networks with Waspmote". Libelium Comunicaciones Distribuidas S.L. Archived from the original on 17 April 2010. - Thomson, Elizabeth A. (23 September 2008). "Preventing forest fires with tree power". Massachusetts Institute of Technology (MIT) News. Archived from the original on 29 December 2008. Retrieved 15 January 2009. - "Evaluation of three wildfire smoke detection systems", 6 - "SDSU Tests New Wildfire-Detection Technology". San Diego, CA: San Diego State University. 23 June 2005. 
Archived from the original on 1 September 2006. Retrieved 12 January 2009. - San-Miguel-Ayanz, et al., 366–369, 373–375. - Rochester Institute of Technology (4 October 2003). "New Wildfire-detection Research Will Pinpoint Small Fires From 10,000 feet". ScienceDaily. Archived from the original on 5 June 2008. Retrieved 12 January 2009. - "Airborne campaign tests new instrumentation for wildfire detection". European Space Agency. 11 October 2006. Archived from the original on 13 August 2009. Retrieved 12 January 2009. - "World fire maps now available online in near-real time". European Space Agency. 24 May 2006. Archived from the original on 13 August 2009. Retrieved 12 January 2009. - "Earth from Space: California's 'Esperanza' fire". European Space Agency. 11 March 2006. Archived from the original on 10 November 2008. Retrieved 12 January 2009. - "Hazard Mapping System Fire and Smoke Product". National Oceanic and Atmospheric Administration (NOAA) Satellite and Information Service. Archived from the original on 14 January 2009. Retrieved 15 January 2009. - Ramachandran, Chandrasekar; Misra, Sudip & Obaidat, Mohammad S. (9 June 2008). "A probabilistic zonal approach for swarm-inspired wildfire detection using sensor networks". Int. J. Commun. Syst. 21 (10): 1047–1073. doi: 10.1002/dac.937. - Miller, Jerry; Borne, Kirk; Thomas, Brian; Huang Zhenping & Chi, Yuechen. "Automated Wildfire Detection Through Artificial Neural Networks" (PDF). NASA. Archived (PDF) from the original on 22 May 2010. Retrieved 15 January 2009. - Zhang, Junguo; Li, Wenbin; Han, Ning & Kan, Jiangming (September 2008). "Forest fire detection system based on a ZigBee wireless sensor network" (PDF). Frontiers of Forestry in China. 3 (3): 369–374. doi: 10.1007/s11461-008-0054-3. Retrieved 26 June 2009. - Karki, 16 - "China Makes Snow to Extinguish Forest Fire". FOXNews.com. 18 May 2006. Archived from the original on 13 August 2009. Retrieved 10 July 2009. - Ambrosia, Vincent G. (2003). "Disaster Management Applications – Fire" (PDF). NASA-Ames Research Center. Archived (PDF) from the original on 24 July 2009. Retrieved 21 July 2009. - Plucinski, et al., 6 - "Fighting fire in the forest". CBS News. 17 June 2009. Archived from the original on 19 June 2009. Retrieved 26 June 2009. - "Climate of 2008 Wildfire Season Summary". National Climatic Data Center. 11 December 2008. Archived from the original on 23 October 2015. Retrieved 7 January 2009. - Rothermel, Richard C. (May 1993). "General Technical Report INT-GTR-299 – Mann Gulch Fire: A Race That Couldn't Be Won". United States Department of Agriculture, Forest Service, Intermountain Research Station. Archived from the original on 13 August 2009. Retrieved 26 June 2009. - "Victorian Bushfires". Parliament of New South Wales. New South Wales Government. 13 March 2009. Archived from the original on 27 February 2010. Retrieved 26 January 2010. - "Region 5 – Land & Resource Management". www.fs.usda.gov. Archived from the original on 23 August 2016. Retrieved 22 August 2016. - Campbell, Corey; Liz Dalsey. "Wildland Fire Fighting Safety and Health". NIOSH Science Blog. National Institute of Occupational Safety and Health. Archived from the original on 9 August 2012. Retrieved 6 August 2012. - "Wildland Fire Fighting: Hot Tips to Stay Safe and Healthy" (PDF). National Institute for Occupational Safety and Health. Archived (PDF) from the original on 22 March 2014. Retrieved 21 March 2014. - A. Agueda; E. Pastor; E. Planas (2008). 
"Different scales for studying the effectiveness of long-term forest fire retardants". Progress in Energy and Combustion Science. 24 (6): 782–796. doi: 10.1016/j.pecs.2008.06.001. - Magill, B. "Officials: Fire slurry poses little threat". Coloradoan.com. - Boerner, C.; Coday B.; Noble, J.; Roa, P.; Roux V.; Rucker K.; Wing, A. (2012). "Impact of wildfire in Clear Creek Watershed of the city of Golden's drinking water supply" (PDF). Colorado School of Mines. Archived (PDF) from the original on 12 November 2012. - Eichenseher, T. (2012). "Colorado Wildfires Threaten Water Supplies". National Geographic Daily News. Archived from the original on 10 July 2012. - "Prometheus". Tymstra, C.; Bryce, R.W.; Wotton, B.M.; Armitage, O.B. 2009. Development and structure of Prometheus: the Canadian wildland fire growth simulation Model. Inf. Rep. NOR-X-417. Nat. Resour. Can., Can. For. Serv., North. For. Cent., Edmonton, AB. Archived from the original on 3 February 2011. Retrieved 1 January 2009. - "FARSITE". FireModels.org – Fire Behavior and Danger Software, Missoula Fire Sciences Laboratory. Archived from the original on 15 February 2008. Retrieved 1 July 2009. - G.D. Richards, "An Elliptical Growth Model of Forest Fire Fronts and Its Numerical Solution", Int. J. Numer. Meth. Eng.. 30:1163–1179, 1990. - Finney, 1–3. - Alvarado, et al., 66–68 - "About Oregon wildfire risk". Oregon State University. Archived from the original on 18 February 2013. Retrieved 9 July 2012. - "The National Wildfire Mitigation Programs Database: State, County, and Local Efforts to Reduce Wildfire Risk" (PDF). US Forest Service. Archived (PDF) from the original on 7 September 2012. Retrieved 19 January 2014. - "Extreme wildfires may be fueled by climate change". Michigan State University. 1 August 2013. Archived from the original on 3 August 2013. Retrieved 1 August 2013. - Rajamanickam Antonimuthu (5 August 2014). White House explains the link between Climate Change and Wild Fires. YouTube. Archived from the original on 11 August 2014. - Office of Environmental Health Hazard Assessment (2008). "Wildfire smoke: A guide for public health officials" (PDF). Archived (PDF) from the original on 16 May 2012. Retrieved 9 July 2012. - National Wildlife Coordination Group (2001). "Smoke management guide for prescribed and wildland fire" (PDF). Boise, ID: National Interagency Fire Center. Archived (PDF) from the original on 11 October 2016. - U.S. Environmental Protection Agency (2009). "Air quality index: A guide to air quality and health" (PDF). Archived (PDF) from the original on 7 May 2012. Retrieved 9 July 2012. - Booze, T.F.; Reinhardt, T.E.; Quiring, S.J.; Ottmar, R.D. (2004). "A screening-level assessment of the health risks of chronic smoke exposure for wildland firefighters" (PDF). Journal Occupational and Environmental Hygiene. 1 (5): 296–305. CiteSeerX 10.1.1.541.5076. doi: 10.1080/15459620490442500. PMID 15238338. Archived (PDF) from the original on 30 May 2017. - "CDC – NIOSH Publications and Products – Wildland Fire Fighting: Hot Tips to Stay Safe and Healthy (2013–158)". www.cdc.gov. 2013. doi: 10.26616/NIOSHPUB2013158. Archived from the original on 22 November 2016. Retrieved 22 November 2016. - http://apps.webofknowledge.com/full_record.do?product=UA&search_mode=GeneralSearch&qid=1&SID=3A7lyhAIveCBgjBAcZa&page=2&doc=16[ permanent dead link] - O'Donnell, M H; Behie, A M (November 15, 2015). "Effects of wildfire disaster exposure on male birth weight in an Australian population". Evolution, Medicine, and Public Health. 
2015 (1): 344–354. doi: 10.1093/emph/eov027. ISSN 2050-6201. PMC 4697771. PMID 26574560. Retrieved January 27, 2016. - Liu, Jia Coco; Wilson, Ander; Mickley, Loretta J.; Dominici, Francesca; Ebisu, Keita; Wang, Yun; Sulprizio, Melissa P.; Peng, Roger D.; Yue, Xu (January 2017). "Wildfire-specific Fine Particulate Matter and Risk of Hospital Admissions in Urban and Rural Counties". Epidemiology. 28 (1): 77–85. PMID 27648592. - "Wildfire Smoke: A Guide for Public Health Officials" (PDF). US Environmental Protection Agency. Archived (PDF) from the original on 9 May 2013. Retrieved 19 January 2014. - Forsberg, Nicole T.; Longo, Bernadette M.; Baxter, Kimberly; Boutté, Marie (2012). "Wildfire Smoke Exposure: A Guide for the Nurse Practitioner". The Journal for Nurse Practitioners. 8 (2): 98–106. doi: 10.1016/j.nurpra.2011.07.001. - Wu, Jin-Zhun; Ge, Dan-Dan; Zhou, Lin-Fu; Hou, Ling-Yun; Zhou, Ying; Li, Qi-Yuan (8 June 2018). "Effects of particulate matter on allergic respiratory diseases". Chronic Diseases and Translational Medicine. 4 (2): 95–102. doi: 10.1016/j.cdtm.2018.04.001. ISSN 2095-882X. PMC 6034084. PMID 29988900. - Hutchinson, Justine A.; Vargo, Jason; Milet, Meredith; French, Nancy H. F.; Billmire, Michael; Johnson, Jeffrey; Hoshiko, Sumi (10 July 2018). "The San Diego 2007 wildfires and Medi-Cal emergency department presentations, inpatient hospitalizations, and outpatient visits: An observational study of smoke exposure periods and a bidirectional case-crossover analysis". PLOS Medicine. 15 (7): e1002601. doi: 10.1371/journal.pmed.1002601. ISSN 1549-1676. PMC 6038982. PMID 29990362. - Reid, Colleen E.; Brauer, Michael; Johnston, Fay H.; Jerrett, Michael; Balmes, John R.; Elliott, Catherine T. (15 April 2016). "Critical Review of Health Impacts of Wildfire Smoke Exposure". Environmental Health Perspectives. 124 (9): 1334–43. doi: 10.1289/ehp.1409277. ISSN 0091-6765. PMC 5010409. PMID 27082891. - National Wildfire Coordinating Group (June 2007). "Wildland firefighter fatalities in the United States 1990–2006" (PDF). NWCG Safety and Health Working Team. Archived (PDF) from the original on 15 March 2012. - Papanikolaou, V; Adamis, D; Mellon, RC; Prodromitis, G (2011). "Psychological distress following wildfires disaster in a rural part of Greece: A case-control population-based study". International Journal of Emergency Mental Health. 13 (1): 11–26. PMID 21957753. - Mellon, Robert C.; Papanikolau, Vasiliki; Prodromitis, Gerasimos (2009). "Locus of control and psychopathology in relation to levels of trauma and loss: Self-reports of Peloponnesian wildfire survivors". Journal of Traumatic Stress. 22 (3): 189–96. doi: 10.1002/jts.20411. PMID 19452533. - Marshall, G. N.; Schell, T. L.; Elliott, M. N.; Rayburn, N. R.; Jaycox, L. H. (2007). "Psychiatric Disorders Among Adults Seeking Emergency Disaster Assistance After a Wildland-Urban Interface Fire". Psychiatric Services. 58 (4): 509–14. doi: 10.1176/appi.ps.58.4.509.
PMID 17412853. - McDermott, BM; Lee, EM; Judd, M; Gibbon, P (2005). "Posttraumatic stress disorder and general psychopathology in children and adolescents following a wildfire disaster". Canadian Journal of Psychiatry. 50 (3): 137–43. doi: 10.1177/070674370505000302. PMID 15830823. - Jones, RT; Ribbe, DP; Cunningham, PB; Weddle, JD; Langley, AK (2002). "Psychological impact of fire disaster on children and their parents". Behavior Modification. 26 (2): 163–86. doi: 10.1177/0145445502026002003. PMID 11961911. - Leader, Jessica (21 September 2012). "Idaho Wildfire: Radiation Raises Slight Concern As Blaze Hits Former Uranium, Gold Mines". Huffington Post. Archived from the original on 26 September 2012. - "Particulate Matter (PM) Standards". EPA. 24 April 2016. Archived from the original on 15 August 2012. - Sutherland, E. Rand; Make, Barry J.; Vedal, Sverre; Zhang, Lening; Dutton, Steven J.; Murphy, James R.; Silkoff, Philip E. (2005). "Wildfire smoke and respiratory symptoms in patients with chronic obstructive pulmonary disease". Journal of Allergy and Clinical Immunology. 115 (2): 420–2. doi: 10.1016/j.jaci.2004.11.030. PMID 15696107. - Delfino, R J; Brummel, S; Wu, J; Stern, H; Ostro, B; Lipsett, M; Winer, A; Street, D H; Zhang, L; Tjoa, T; Gillen, D L (2009). "The relationship of respiratory and cardiovascular hospital admissions to the southern California wildfires of 2003". Occupational and Environmental Medicine. 66 (3): 189–97. doi: 10.1136/oem.2008.041376. PMC 4176821. PMID 19017694. - Kunzli, N.; Avol, E.; Wu, J.; Gauderman, W. J.; Rappaport, E.; Millstein, J.; Bennion, J.; McConnell, R.; Gilliland, F. D.; Berhane, Kiros; Lurmann, Fred; Winer, Arthur; Peters, John M. (2006). "Health Effects of the 2003 Southern California Wildfires on Children". American Journal of Respiratory and Critical Care Medicine. 174 (11): 1221–8. doi: 10.1164/rccm.200604-519OC. PMC 2648104. PMID 16946126. - Holstius, David M.; Reid, Colleen E.; Jesdale, Bill M.; Morello-Frosch, Rachel (2012). "Birth Weight Following Pregnancy During the 2003 Southern California Wildfires". Environmental Health Perspectives. 120 (9): 1340–5. doi: 10.1289/ehp.1104515. PMC 3440113. PMID 22645279. - Johnston, Fay H.; et al. (May 2012). "Estimated global mortality attributable to smoke from landscape fires" (PDF). Environmental Health Perspectives. 120 (5): 695–701. doi: 10.1289/ehp.1104422. PMC 3346787. PMID 22456494. Archived (PDF) from the original on 22 May 2016. - Alvarado, Ernesto; Sandberg, David V; Pickford, Stewart G (Special Issue 1998). "Modeling Large Forest Fires as Extreme Events" (PDF). Northwest Science. 72: 66–75. Archived from the original (PDF) on 26 February 2009. Retrieved 6 February 2009. - "Are Big Fires Inevitable? A Report on the National Bushfire Forum" (PDF). Parliament House, Canberra: Bushfire CRC. 27 February 2007. Archived from the original (PDF) on 26 February 2009. Retrieved 9 January 2009. - "Automatic remote surveillance system for the prevention of forest fires" (PDF). Council of Australian Governments (COAG) Inquiry on Bushfire Mitigation and Management. Archived from the original (PDF) on 15 May 2009. Retrieved 10 July 2009. - Billing, P (June 1983). "Otways Fire No. 22 – 1982/83 Aspects of fire behaviour. Research Report No.20" (PDF). Victoria Department of Sustainability and Environment. Retrieved 26 June 2009. - de Souza Costa, Fernando; Sandberg, David (2004). "Mathematical model of a smoldering log" (PDF).
Combustion and Flame (139): 227–238. Retrieved 6 February 2009. - "Evaluation of three wildfire smoke detection systems" (PDF). Advantage. 5 (4). June 2004. Archived from the original (PDF) on 26 February 2009. Retrieved 13 January 2009. - "Federal Fire and Aviation Operations Action Plan" (PDF). National Interagency Fire Center. 18 April 2005. Retrieved 26 June 2009. - Finney, Mark A (March 1998). "FARSITE: Fire Area Simulator—Model Development and Evaluation" (PDF). US Forest Service. Archived from the original (PDF) on 26 February 2009. Retrieved 5 February 2009. - "Fire. The Australian Experience" (PDF). NSW Rural Fire Service. Archived from the original (PDF) on 22 July 2008. Retrieved 4 February 2009. - "Glossary of Wildland Fire Terminology" (PDF). National Wildfire Coordinating Group. November 2008. Retrieved 18 December 2008. ( HTML version) - Graham, Russell; McCaffrey, Sarah; Jain, Theresa B (April 2004). "Science Basis for Changing Forest Structure to Modify Wildfire Behavior and Severity" (2.79 MB PDF). General Technical Report RMRS-GTR-120. Fort Collins, CO: United States Department of Agriculture, Forest Service, Rocky Mountain Research Station. Retrieved 6 February 2009. - Grove, A T; Rackham, Oliver (2001). The Nature of Mediterranean Europe: An Ecological History. New Haven, CT: Yale University Press. ISBN 978-0300100556. Retrieved 17 July 2009. - Karki, Sameer (2002). "Community Involvement in and Management of Forest Fires in South East Asia" (PDF). Project FireFight South East Asia. Archived from the original (PDF) on 30 July 2007. Retrieved 13 February 2009. - Fire intensity, fire severity and burn severity: a brief review and suggested usage [PDF]. International Journal of Wildland Fire. 2009;18(1):116–26. doi: 10.1071/WF07049. - "Interagency Strategy for the Implementation of Federal Wildland Fire Management Policy" (PDF). National Interagency Fire Council. 20 June 2003. Archived from the original (PDF) on 14 May 2009. Retrieved 21 December 2008. - Lyons, John W (6 January 1971). The Chemistry and Uses of Fire Retardants. United States of America: John Wiley & Sons, Inc. ISBN 978-0-471-55740-1. - Martell, David L; Sun, Hua (2008). "The impact of fire suppression, vegetation, and weather on the area burned by lightning-caused forest fires in Ontario" (PDF). Canadian Journal of Forest Research. 38 (6): 1547–1563. doi: 10.1139/X07-210. Archived from the original (PDF) on 25 March 2009. Retrieved 26 June 2009. - Climatic change, wildfire, and conservation [PDF]. Conservation Biology. 2004;18(4):890–902. doi: 10.1111/j.1523-1739.2004.00492.x. - "National Wildfire Coordinating Group Communicator's Guide for Wildland Fire Management: Fire Education, Prevention, and Mitigation Practices, Wildland Fire Overview" (PDF). National Wildfire Coordinating Group. Archived from the original (PDF) on 17 September 2008. Retrieved 11 December 2008. - Nepstad, Daniel C (2007). "The Amazon's Vicious Cycles: Drought and Fire in the Greenhouse" (PDF). World Wide Fund for Nature (WWF International). Retrieved 9 July 2009. - Olson, Richard Stuart; Gawronski, Vincent T (2005). "The 2003 Southern California Wildfires: Constructing Their Cause(s)" (PDF). Quick Response Research Report. 173. Retrieved 15 July 2009. ( HTML version) - Pausas, Juli G; Keeley, Jon E (July–August 2009). "A Burning Story: The Role of Fire in the History of Life" (PDF). BioScience. 59 (7): 593–601. doi: 10.1525/bio.2009.59.7.10. ISSN 0006-3568. - Peuch, Eric (26–28 April 2005). "Firefighting Safety in France" (PDF). 
In Butler, B W; Alexander, M E. Eighth International Wildland Firefighter Safety Summit – Human Factors – 10 Years Later (PDF). Missoula, Montana: The International Association of Wildland Fire, Hot Springs, South Dakota. Archived from the original (PDF) on 28 September 2007. Retrieved 27 September 2007. - Pitkänen, Aki; Huttunen, Pertti; Jungner, Högne; Meriläinen, Jouko; Tolonen, Kimmo (28 February 2003). "Holocene fire history of middle boreal pine forest sites in eastern Finland" (PDF). Annales Botanici Fennici. 40: 15–33. ISSN 0003-3847. - Plucinski, M; Gould, J; McCarthy, G; Hollis, J (June 2007). The Effectiveness and Efficiency of Aerial Firefighting in Australia: Part 1 (PDF) (Report). Bushfire Cooperative Research Centre. ISBN 0-643-06534-2. Retrieved 4 March 2009. - San-Miguel-Ayanz, Jesus; Ravail, Nicolas; Kelha, Vaino; Ollero, Anibal (2005). "Active Fire Detection for Fire Emergency Management: Potential and Limitations for the Operational Use of Remote Sensing" (PDF). Natural Hazards. 35 (3): 361–376. CiteSeerX 10.1.1.475.880. doi: 10.1007/s11069-004-1797-2. Archived from the original (PDF) on 20 March 2009. Retrieved 5 March 2009. - van Wagtendonk, Jan W (1996). "Use of a Deterministic Fire Growth Model to Test Fuel Treatments" (PDF). Sierra Nevada Ecosystem Project: Final Report to Congress, Vol. II, Assessments and Scientific Basis for Management Options: 1155–1166. Retrieved 5 February 2009. - van Wagtendonk, Jan W (2007). "The History and Evolution of Wildland Fire Use" (PDF). Fire Ecology. 3 (2): 3–17. doi: 10.4996/fireecology.0302003. Retrieved 24 August 2008. (U.S. Government public domain material published in Association journal. See WERC Highlights – April 2008)
Scientists funded by the European Space Agency believe they may have measured the gravitational equivalent of a magnetic field for the first time in a laboratory. Under certain special conditions the effect is much larger than expected from general relativity and could help physicists to make a significant step towards the long-sought-after quantum theory of gravity.

Just as a moving electrical charge creates a magnetic field, so a moving mass generates a gravitomagnetic field. According to Einstein's Theory of General Relativity, the effect is virtually negligible. However, Martin Tajmar, ARC Seibersdorf Research GmbH, Austria, and colleagues believe they have measured the effect in a laboratory. Their experiment involves a ring of superconducting material rotating up to 6,500 times a minute. Superconductors are special materials that lose all electrical resistance at a certain temperature. Spinning superconductors produce a weak magnetic field, the so-called London moment. The new experiment tests a conjecture that explains the difference between high-precision mass measurements of Cooper pairs (the current carriers in superconductors) and their prediction via quantum theory. The researchers discovered that this anomaly could be explained by the appearance of a gravitomagnetic field in the spinning superconductor; this effect has been named the Gravitomagnetic London Moment, by analogy with its magnetic counterpart. Small acceleration sensors placed at different locations close to the spinning superconductor, which has to be accelerated for the effect to be noticeable, recorded an acceleration field outside the superconductor that appears to be produced by gravitomagnetism.

"This experiment is the gravitational analogue of Faraday's electromagnetic induction experiment in 1831. It demonstrates that a superconductive gyroscope is capable of generating a powerful gravitomagnetic field, and is therefore the gravitational counterpart of the magnetic coil. Depending on further confirmation, this effect could form the basis for a new technological domain, which would have numerous applications in space and other high-tech sectors," says ESA study manager Clovis de Matos.

Although just 100 millionths of the acceleration due to the Earth's gravitational field, the measured field is a surprising one hundred million trillion times larger than Einstein's General Relativity predicts. Initially, the researchers were reluctant to believe their own results. "We ran more than 250 experiments, improved the facility over 3 years and discussed the validity of the results for 8 months before making this announcement. Now we are confident about the measurement," says Tajmar, who performed the experiments and hopes that other physicists will conduct their own versions of the experiment in order to verify the findings and rule out a facility-induced effect.

In parallel to the experimental evaluation of their conjecture, Tajmar and team also looked for a more refined theoretical model of the Gravitomagnetic London Moment. They took their inspiration from superconductivity. The electromagnetic properties of superconductors are explained in quantum theory by assuming that force-carrying particles, known as photons, gain mass. By allowing force-carrying gravitational particles, known as gravitons, to become heavier, they found that the unexpectedly large gravitomagnetic force could be modelled.
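A quick back-of-the-envelope check in Python helps put these two quoted magnitudes in perspective; the only inputs are the figures stated above.

    # Rough magnitude comparison using the figures quoted above.
    g = 9.81                       # Earth's surface gravity, m/s^2
    measured = 100e-6 * g          # "100 millionths" of Earth's gravitational acceleration
    ratio = 1e20                   # "one hundred million trillion"
    predicted_by_gr = measured / ratio
    print(f"measured field ~ {measured:.1e} m/s^2")         # ~1.0e-03 m/s^2
    print(f"GR prediction  ~ {predicted_by_gr:.1e} m/s^2")  # ~1.0e-23 m/s^2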
"If confirmed, this would be a major breakthrough," says Tajmar, "it opens up a new means of investigating general relativity and it consequences in the quantum world." The results, obtained in the framework of an ESA contract, were presented at a one-day conference at ESA's European Space and Technology Research Centre (ESTEC), in the Netherlands, 21 March 2006. Two papers detailing the work are now being considered for publication. The papers can be accessed on-line at the Los Alamos pre-print server using the references: gr-qc/0603033 and gr-qc/0603032. For more detailed information, please contact: Dipl-Ing Dr Martin Tajmar Head of Business Field Space Propulsion ARC Seibersdorf research GmbH Phone: +43 (0)5 05 50 31 42 Fax: +43 (0)5 05 50 33 66 Email: martin.tajmar @ arcs.ac.at Mr Clovis J. de Matos General Studies Officer European Space Agency ESA-HQ Advanced Concepts and Studies Office - EUI-AC 8-10 Rue Mario Nikis 75738 Paris Cedex 15 Tel: +33 (0)1 53 69 74 98 Fax: +33 (0)1 53 69 76 51 Email: clovis.de.matos @ esa.int
Ladder logic is the main programming method used for PLCs. As mentioned before, ladder logic has been developed to mimic relay logic. The decision to use relay logic diagrams was a strategic one: by selecting ladder logic as the main programming method, the amount of retraining needed for engineers and tradespeople was greatly reduced.

Modern control systems still include relays, but these are rarely used for logic. A relay is a simple device that uses a magnetic field to control a switch, as pictured in Figure 1.1 Simple Relay Layouts and Schematics. When a voltage is applied to the input coil, the resulting current creates a magnetic field. The magnetic field pulls a metal switch (or reed) towards it and the contacts touch, closing the switch. The contact that closes when the coil is energized is called normally open. The normally closed contacts touch when the input coil is not energized. Relays are normally drawn in schematic form using a circle to represent the input coil. The output contacts are shown with two parallel lines. Normally open contacts are shown as two lines, and will be open (non-conducting) when the input is not energized. Normally closed contacts are shown with two lines with a diagonal line through them; when the input coil is not energized the normally closed contacts will be closed (conducting).

Relays are used to let one power source close a switch for another (often high-current) power source, while keeping them isolated. An example of a relay in a simple control application is shown in Figure 1.1 A Simple Relay Controller. In this system the first relay on the left is used as normally closed, and will allow current to flow until a voltage is applied to the input A. The second relay is normally open and will not allow current to flow until a voltage is applied to the input B. If current is flowing through the first two relays, then current will flow through the coil in the third relay and close the switch for output C. This circuit would normally be drawn in the ladder logic form. It can be read logically as: C will be on if A is off and B is on.

The example in Figure 1.1 A Simple Relay Controller does not show the entire control system, but only the logic. When we consider a PLC there are inputs, outputs, and the logic. Figure 1.1 A PLC Illustrated With Relays shows a more complete representation of the PLC. Here there are two inputs from push buttons. We can imagine the inputs as activating 24V DC relay coils in the PLC. This in turn drives an output relay that switches 115V AC, which will turn on a light. Note that in actual PLCs inputs are never relays, but outputs are often relays. The ladder logic in the PLC is actually a computer program that the user can enter and change. Notice that both of the input push buttons are normally open, but the ladder logic inside the PLC has one normally open contact and one normally closed contact. Do not think that the ladder logic in the PLC needs to match the inputs or outputs; many beginners get caught trying to make the ladder logic match the input types.

Many relays also have multiple outputs (throws), and this allows an output relay to also be an input simultaneously. The circuit shown in Figure 1.1 A Seal-in Circuit is an example of this; it is called a seal-in circuit. In this circuit the current can flow through either branch, through the contacts labelled A or B. The input B will only be on when the output B is on. If B is off and A is energized, then B will turn on.
If B turns on, then the input contact B will turn on and keep output B on, even if input A goes off. After B is turned on, the output B will not turn off.

The first PLCs were programmed with a technique that was based on relay logic wiring schematics. This eliminated the need to teach electricians, technicians, and engineers how to program a computer, but the method has stuck and it is the most common technique for programming PLCs today. An example of ladder logic can be seen in Figure 1.1 A Simple Ladder Logic Diagram. To interpret this diagram, imagine that the power is on the vertical line on the left-hand side, which we call the hot rail. On the right-hand side is the neutral rail. In the figure there are two rungs, and on each rung there are combinations of inputs (two vertical lines) and outputs (circles). If the inputs are opened or closed in the right combination, power can flow from the hot rail, through the inputs, to power the outputs, and finally to the neutral rail. An input can come from a sensor, a switch, or any other type of input device. An output will be some device outside the PLC that is switched on or off, such as a light or a motor. In the top rung the contacts are normally open and normally closed, which means that if input A is on and input B is off, then power will flow through the output and activate it. Any other combination of input values will result in the output X being off.

The second rung of Figure 1.1 A Simple Ladder Logic Diagram is more complex; there are actually multiple combinations of inputs that will result in the output Y turning on. On the leftmost part of the rung, power could flow through the top branch if C is off and D is on. Power could also (and simultaneously) flow through the bottom branch if both E and F are true. Either would get power halfway across the rung, and then if G or H is true the power will be delivered to output Y. In later chapters we will examine how to interpret and construct these diagrams.

There are other methods for programming PLCs. One of the earliest techniques involved mnemonic instructions. These instructions can be derived directly from the ladder logic diagrams and entered into the PLC through a simple programming terminal. An example of mnemonics is shown in Figure 1.1 An Example of a Mnemonic Program and Equivalent Ladder Logic. In this example the instructions are read one line at a time from top to bottom. The first line 00000 has the instruction LDN (input load and not) for input A. This will examine the input to the PLC: if it is off it will remember a 1 (or true), and if it is on it will remember a 0 (or false). The next line uses an LD (input load) statement to look at the input. If the input is off it remembers a 0, and if the input is on it remembers a 1 (note: this is the reverse of the LDN). The AND statement recalls the last two numbers remembered, and if they are both true the result is a 1, otherwise the result is a 0. This result now replaces the two numbers that were recalled, so there is only one number remembered. The process is repeated for lines 00003 and 00004, but when these are done there are three numbers remembered: the oldest number is from the AND, and the newer numbers are from the two LD instructions. The AND in line 00005 combines the results from the last two LD instructions, and now there are two numbers remembered. The OR instruction takes the two remaining numbers, and if either one is a 1 the result is a 1, otherwise the result is a 0. This result replaces the two numbers, and there is now a single number remembered. The last instruction is the ST (store output): it looks at the last value stored, and if it is 1 the output is turned on; if it is 0 the output is turned off. A small simulation of this stack-based evaluation is sketched below.
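To make the stack-based evaluation concrete, here is a minimal Python sketch of an interpreter behaving as described; it is an illustration, not any vendor's instruction set, and the input names C and D are assumed for the two unnamed LD instructions in lines 00003 and 00004.

    # A toy interpreter for the mnemonic program walked through above.
    def run_mnemonic(program, inputs):
        stack = []          # the "remembered" values
        outputs = {}
        for op, operand in program:
            if op == "LD":                  # load an input value
                stack.append(inputs[operand])
            elif op == "LDN":               # load an inverted input value
                stack.append(not inputs[operand])
            elif op == "AND":               # combine the last two values
                b, a = stack.pop(), stack.pop()
                stack.append(a and b)
            elif op == "OR":
                b, a = stack.pop(), stack.pop()
                stack.append(a or b)
            elif op == "ST":                # store the final value to an output
                outputs[operand] = stack.pop()
        return outputs

    program = [("LDN", "A"), ("LD", "B"), ("AND", None),
               ("LD", "C"), ("LD", "D"), ("AND", None),
               ("OR", None), ("ST", "Y")]
    print(run_mnemonic(program, {"A": False, "B": True, "C": False, "D": False}))
    # {'Y': True} - the first branch (A off, B on) energizes output Y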
Sequential Function Charts (SFCs) have been developed to accommodate the programming of more advanced systems. These are similar to flowcharts, but much more powerful. The example seen in Figure 1.1 An Example of a Sequential Function Chart is doing two different things. To read the chart, start at the top where it says start. Below this there is a double horizontal line that says follow both paths. As a result the PLC will start to follow the branches on the left- and right-hand sides separately and simultaneously. On the left there are two functions: the first one is the power up function. This function will run until it decides it is done, and the power down function will come after. On the right-hand side is the flash function; this will run until it is done. The functions are not explained in the chart itself; each function, such as power up, will be a small ladder logic program. This method is much different from flowcharts because it does not have to follow a single path through the chart.

Structured Text programming has been developed as a more modern programming language. It is quite similar to languages such as BASIC. A simple example is shown in Figure 1.1 An Example of a Structured Text Program. This example uses a PLC memory location i. This memory location is for an integer, as will be explained later in the book. The first line of the program sets the value to 0. The next line begins a loop, and is where the loop returns to. The next line recalls the value in location i, adds 1 to it, and returns it to the same location. The next line checks to see if the loop should quit: if i is greater than or equal to 10, then the loop will quit; otherwise the computer will go back up to the REPEAT statement and continue from there. Each time the program goes through this loop, i will increase by 1 until the value reaches 10.

1.1.2 PLC Connections

When a process is controlled by a PLC it uses inputs from sensors to make decisions and updates outputs to drive actuators, as shown in Figure 1.1 The Separation of Controller and Process. The process is a real process that will change over time. Actuators will drive the system to new states (or modes of operation). This means that the controller is limited by the sensors available: if an input is not available, the controller will have no way to detect a condition.

The control loop is a continuous cycle of the PLC reading inputs, solving the ladder logic, and then changing the outputs. Like any computer, this does not happen instantly. Figure 1.1 The Scan Cycle of a PLC shows the basic operation cycle of a PLC. When power is turned on initially, the PLC does a quick sanity check to ensure that the hardware is working properly. If there is a problem, the PLC will halt and indicate that there is an error. For example, if the PLC power is dropping and about to go off, this will result in one type of fault. If the PLC passes the sanity check, it will then scan (read) all the inputs. After the input values are stored in memory, the ladder logic will be scanned (solved) using the stored values, not the current values. This is done to prevent logic problems when inputs change during the ladder logic scan. When the ladder logic scan is complete, the outputs will be scanned (the output values will be changed). After this the system goes back to do a sanity check, and the loop continues indefinitely. Unlike normal computers, the entire program is run every scan. Typical times for each of the stages are on the order of milliseconds.
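The scan cycle can be summarized as a simple loop. Here is a minimal Python sketch of the read-solve-write sequence; the input source, the single rung of logic, and the timing are assumptions for illustration, since a real PLC does all of this in firmware:

    import time, random

    def read_inputs():
        # Stand-in for the input scan; a real PLC reads its input modules.
        return {"A": random.choice((True, False)), "B": False}

    def solve_logic(image):
        # One assumed rung, X = A AND NOT B, solved on the stored image,
        # never on live inputs, exactly as described above.
        return {"X": image["A"] and not image["B"]}

    for _ in range(3):                 # a real PLC loops indefinitely
        # (the hardware sanity check would happen here)
        image = read_inputs()          # 1. scan all inputs into memory
        outputs = solve_logic(image)   # 2. solve the logic on stored values
        print(outputs)                 # 3. update the physical outputs
        time.sleep(0.001)              # each stage takes on the order of ms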
1.1.3 Ladder Logic Inputs

PLC inputs are easily represented in ladder logic. In Figure 1.1 Ladder Logic Inputs there are three types of inputs shown. The first two are the normally open and normally closed inputs discussed previously. The IIT (Immediate InpuT) function allows inputs to be read after the input scan, while the ladder logic is being scanned. This allows ladder logic to examine input values more often than once every cycle. (Note: this instruction is not available on the ControlLogix processors, but is still available on older models.)

1.1.4 Ladder Logic Outputs

In ladder logic there are multiple types of outputs, but these are not consistently available on all PLCs. Some of the outputs will be externally connected to devices outside the PLC, but it is also possible to use internal memory locations in the PLC. Six types of outputs are shown in Figure 1.1 Ladder Logic Outputs. The first is a normal output: when energized, it will turn on and energize the connected device. The circle with a diagonal line through it is a normally on output: when energized, the output will turn off. This type of output is not available on all PLC types. When initially energized, the OSR (One Shot Relay) instruction will turn on for one scan and then stay off for all following scans, until it has been de-energized and is energized again. The L (latch) and U (unlatch) instructions can be used to lock outputs on. When an L output is energized, the output will turn on indefinitely, even when the output coil is de-energized. The output can only be turned off using a U output. The last instruction is the IOT (Immediate OutpuT), which allows outputs to be updated without waiting for the ladder logic scan to be completed.
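The latch, unlatch, and one-shot behaviours can also be mimicked in a few lines. The sketch below is a Python illustration under assumed names; on a real PLC these are ladder instructions (L, U, OSR), not code:

    latched = False     # state held by the L/U pair
    osr_prev = False    # previous rung state for the one-shot

    def scan(l_rung, u_rung, osr_rung):
        global latched, osr_prev
        if l_rung:
            latched = True              # L: output locks on
        if u_rung:
            latched = False             # U: the only way to unlock it
        osr_out = osr_rung and not osr_prev   # OSR: true for one scan only
        osr_prev = osr_rung
        return latched, osr_out

    print(scan(True,  False, True))    # (True, True): latch set, OSR fires
    print(scan(False, False, True))    # (True, False): latch holds, OSR off
    print(scan(False, True,  False))   # (False, False): unlatch clears it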
Download 2nd Grade Math Worksheets These pages are useful if you want your student to practice conversions between measuring units in second grade. Free second grade worksheets and games include phonics, grammar, counting games, counting worksheets, addition and subtraction online practice, multiplication online practice, hundreds charts, a math worksheets generator, and free 2nd grade reading and writing worksheets. Scholastic has nearly 2,000 math worksheets, practice pages, activities, and more for grade 2. This is a collection of math worksheets for grade 2, organized by topics such as comparing, rounding, place value, addition, subtraction, adding and subtracting in columns, mental math, multiplication, division, measuring, and geometry. Learn addition, subtraction, time, measurement, and more! Each topic is a link to loads of worksheets under the same category. These worksheets take the form of printable math tests which students can use both for homework and classroom activities. According to the Common Core standards, in grade 2 instructional time should focus on four critical areas. Topics include addition, subtraction, place value, numbers, comparing numbers, bar graphs, pictographs, and multiplication worksheets. Introduce second graders to these worksheets and watch the fireworks for yourself! Our free math worksheets for grade 2 kids definitely need to be added to your collection. Each math worksheet has an answer sheet attached; remember, these are in printable PDF format. Worksheets cover skip counting, addition, subtraction, place value, multiplication, division, and fractions; our grade 2 math worksheets emphasize numeracy as well as a conceptual understanding of math concepts. Timed worksheets are also a great way to encourage kids to solve problems faster. In second grade, children learn many new concepts: number and place value, along with the basics of multiplication and division, are just some of the concepts solidified at this stage. The worksheets support any second grade math program, but go especially well with IXL's 2nd grade math curriculum. Club these printable grade 2 worksheets with math board games to get more than 20x the practice. JumpStart's 2nd grade math worksheets excel in getting kids to befriend numbers. Engage them with worksheets on different math topics and watch their math grades go up in no time. Read each word problem to get an idea of its general nature. The Mad Minutes for 2nd grade get progressively more difficult to help kids review all math learned in 2nd grade. When you add, you combine two or more numbers together to get one sum. Other sets include addition worksheets, subtraction worksheets, place value worksheets, money worksheets, and time worksheets. Math Chimp has great math worksheets for 2nd grade students, along with fun math activities and games for grade 2. Clicking a topic will take you to the individual page of the worksheet. Teachers can supplement their course and send out several at a time. Here you will find our selection of mental math worksheets, which will help your child practice their number, place value, and problem solving skills.
Chemistry Notes: Titrations (Chemistry 2014-2015)

A titration is a lab procedure which uses a solution of known concentration to determine the concentration of an unknown solution. This is accomplished by putting one of the solutions in a flask and filling a buret with the other solution. The stopcock on the buret allows you to slowly add one solution to the other until the reaction reaches an endpoint, the conclusion of the reaction.

Acid-Base Titrations

The endpoint is often observed by a change in color. This is usually determined using an indicator during an acid-base titration. For example, if you fill the flask with HCl of unknown concentration and a few drops of phenolphthalein, and fill the buret with 1.0 M NaOH, you can let just enough NaOH react with the HCl to give a pink color (phenolphthalein is pink in base). The equation for this reaction is HCl + NaOH → NaCl + H2O.

Let's say it takes 25.0 mL of the 1.0 M NaOH to neutralize 20.0 mL of the unknown HCl. Then 0.025 L x 1.0 M = 0.025 moles NaOH. You can use the data from a titration to determine the concentration of the unknown solution. If you look at the reaction equation, there is a 1:1 ratio of HCl:NaOH (the coefficient for both reactants is 1). If it took 0.025 moles NaOH to neutralize the HCl, there must have been 0.025 moles HCl in the flask. Remember that the flask contained 20.0 mL of HCl, so 0.025 moles / 0.020 L = 1.25 M HCl. In other words, the concentration of the HCl is 1.25 M.

A simpler way to carry out this kind of calculation is to use the equation M1V1 = M2V2, where M = molarity and V = volume. For this problem, we would have 1.0 M NaOH x 25.0 mL NaOH = (molarity of HCl) x 20.0 mL HCl, or (1.0)(25.0) = 20.0x. If we divide both sides by 20.0 mL HCl, we get molarity of HCl = 1.25 M. It can be easier! Note that this only works if we use a monoprotic acid—an acid that has one proton, H+—and a monobasic base—a base that has one hydroxide ion, OH-. We'll only give you problems like that so you can use M1V1 = M2V2 to keep things from getting too complicated.

Titration Practice

You have 50.0 mL of an unknown HCl solution. It takes 80.0 mL of 0.50 M NaOH to neutralize this acid. What is the concentration of the hydrochloric acid solution?
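A worked answer for the practice problem, added here for completeness (check the arithmetic against your own solution):

M1V1 = M2V2
(0.50 M)(80.0 mL) = M2 (50.0 mL)
M2 = (0.50 x 80.0) / 50.0 = 0.80 M

The unknown hydrochloric acid solution is 0.80 M.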
Two masses m and 2m are suspended together by a massless spring

Newton's Second Law of Motion: an object of a given mass m subjected to forces F1, F2, F3, … will undergo an acceleration a given by a = Fnet/m, where Fnet = F1 + F2 + F3 + … The mass m is positive; force and acceleration are in the same direction.

Center of Mass for Particles. The center of mass is the point at which all the mass can be considered to be "concentrated" for the purpose of calculating the "first moment", i.e., mass times distance. For two masses this distance is calculated from xcm = (m1x1 + m2x2)/(m1 + m2). For the more general collection of N particles this becomes xcm = (1/M) Σ mixi, where M is the total mass, and when extended to three dimensions, rcm = (1/M) Σ miri.

26. The time period of a mass suspended from a spring is T. If the spring is cut into four equal parts and the same mass is suspended from one of the parts, then the new time period will be (1) T/4 (2) T (3) T/2 (4) 2T. Sol. Answer (3): T = 2π√(m/k), and the k of a quarter of the spring becomes 4k, so T′ = 2π√(m/4k) = T/2.

The Bode plot is more complex, showing the phase and magnitude of the motion of each mass, for the two cases, relative to F1. In the plots at right, the black line shows the baseline response (m2 = 0). Now considering m2 = m1/10, the blue line shows the motion of the damping mass and the red line shows the motion of the primary mass. …

Two balls, each with mass 2 kg, and velocities of 2 m/s and 3 m/s, collide head on. Their final velocities are 2 m/s and 1 m/s, respectively. Is this collision elastic or inelastic? To check for elasticity, we need to calculate the kinetic energy both before and after the collision. Before the collision, the kinetic energy is (1/2)(2)(2)² + (1/2)(2)(3)² …

This problem seems quite complicated, and the key to simplifying it is: 1. list the givens; 2. draw a picture; 3. understand the behavior of the spring; 4. write the relevant relationships; 5. write the specific equations for the question; 6. solve; 7. e…

A particle of mass 4M is at the origin while a particle of mass M is located at x = 1 m. A third particle of mass m is located somewhere in the vicinity of the other two particles. Assume that gravitational forces between particles are the only interactions involved in this problem (in other words, assume that these three particles are located …

In the arrangement of Fig. 1.9 the masses m0, m1, and m2 of the bodies are equal, the masses of the pulley and the threads are negligible, and there is no friction in the pulley. Find the acceleration w with which the body m0 comes down, and the tension of the thread binding together the bodies m1 and m2, if the coefficient of friction …

Two monkeys P and Q of masses M and m (m > M) hold a light rope passing over a smooth fixed pulley. P and Q climb up the rope so that the acceleration of Q upward is double that of P downward. The tension in the rope is …

A block of mass m is placed over a block B of mass 2m. …

22. A 2.0-kg mass is attached to one end of a spring with a spring constant of 100 N/m and a 4.0-kg mass is attached to the other end. The masses are placed on a horizontal frictionless surface and the spring is compressed 10 cm. The spring is then released with the masses at rest and …

To measure T, a mass m = 0.182 kg was suspended at the free end of the spring, and it was elongated by a length Δx = 1.5×10⁻² m from the equilibrium position, and then it was allowed to oscillate freely.

… the spring is exerting an upward force that is greater than the downward force due to the weight of the box. Suppose the spring has a spring constant of 450 N/m and the box has a mass of 1.5 kg. The speed of the box just before it makes contact with the spring is 0.49 m/s.

PROBLEM 03-0402: Two bodies A and B, each of mass m, are fixed together by a massless spring. A force F acts on the mass B as shown in …

The location of the mass-spring system doesn't have any effect on its frequency of oscillation, but the mass 2m attached to the spring does: fmass-spring = (1/2π)√(k/m), so f′ = (1/2π)√(k/2m) = f/√2. Note that changing the location of a pendulum does affect the frequency of its oscillation, according to the equation fpendulum = (1/2π)√(g/L).

Two masses m1 and m2 are suspended together by a massless spring of constant k. When the masses are in equilibrium, m1 is removed without disturbing the system. The amplitude of oscillations is …

A block of mass m = 2.0 kg is dropped from height h = 40 cm onto a spring of spring constant k = 1960 N/m. Find the maximum distance the spring is compressed.

14. The drawing shows two 4.5-kg balls located on the y-axis at 1.0 and 9.0 m, respectively, and a third ball with a mass of 2.3 kg which is located at 6.0 m. What is the location of the center of mass of this system? A) 4.8 m B) 5.2 m C) 5.6 m D) 6.0 m E) 6.4 m F) 4.4 m G) 4.0 m H) 0.0 m

15. A mother is holding her 4.5-kg …

A bullet with mass m hits a ballistic pendulum with length L and mass M and lodges in it. When the bullet hits the pendulum it swings up from the equilibrium position and reaches an angle α at its maximum.

As a simple example, consider a system of only two particles, of masses m and 2m, separated by a distance …. Choose the coordinate system so that the less massive particle is at the origin and the other is at x = …, as shown in the drawing. Then we have m1 = m, m2 = 2m, x1 = 0, x2 = …. (The y and z coordinates are zero, of course.) We find from the …

A projectile of mass m moving horizontally with speed v strikes a stationary block M held in place by two stiff rods of length L. During the collision, what quantities of the mass/block system are conserved? A. Its momentum B. Its mechanical energy C. Both its momentum and its mechanical energy D. Neither its momentum nor its mechanical …

Let us take the mass m2 to be initially at rest. Of course, mass m2 could be moving in the same direction as mass m1, or mass m2 could be moving directly at mass m1. We'll worry about that … (Figure 3: Two masses on a horizontal surface for Example 2.)

One student bangs two bricks together. … The spring has a spring constant of 1600 N/m … force acting on the glider by adding more masses to the mass holder. …

A 4.0 kg mass is attached to one end of a rope 2 m long. If the mass is swung in a vertical circle from the free end of the rope, what is the tension in the rope when the mass is at its highest point if it is moving with a speed of 5 m/s? (A) 5.4 N (B) 10.8 N (C) 21.6 N (D) 50 N (E) 65.4 N

29. A ball of mass m is fastened to a string. …

We discuss the classical motion of a finite mass spring coupled to two pointlike masses fixed at its ends. A general approach to the problem is presented and some general results are obtained.

Therefore, t = 3.5 s or t = −3.5 s. Checking the units for t shows they reduce to seconds; we use the positive value because time cannot be negative. Therefore, t = 3.5 s: it takes Jane Bond 3.5 s to run 12 m. (Example 7: a multiple-step problem …)

Example: System of two point masses. Intuitively, the center of mass of the two masses shown in Figure 2.68 (center of mass of a system consisting of two points) is between the two masses and closer to the larger one: rcm = (m1r1 + m2r2)/(m1 + m2).

Two objects that have equal masses head toward one another at equal speeds and then stick together. Their total internal kinetic energy is initially (1/2)mv² + (1/2)mv² = mv². The two objects come to rest after sticking together, conserving momentum, but the internal kinetic energy is zero after the collision.

The mass integration constant has units of kg² m⁻² and depends solely on the mass distributions within the FMs, the TMs, and their relative positions in the two states. The size of the gravitational signal, s/g ≈ 785 μg, was determined with an accuracy of 14.3 ng using a modified commercial mass comparator (Mettler Toledo AT1006).

Two blocks, of masses M = 2.1 kg and 2M, are connected to each other and to a spring of spring constant k = 215 N/m that has one end fixed. The horizontal surface and the pulley are frictionless …

Even though mass and weight are not the same, your weight is uniquely determined by your mass, so you can compare weights by comparing masses: if object B has twice the mass of object A, it also weighs twice as much. A mass of 1 kg has a weight of mg = (1 kg)(10 m/s²) = 10 N.

If the mass of the spring can be neglected, a body of mass 2M, suspended from the same spring, oscillates with a period of: A) T/2 B) T/√2 C) T D) T√2 E) 2T. Answer: D.

A mass of 2.00 kg suspended from a spring 100 cm long is pulled down 4.00 cm from its equilibrium position and released.

The rest-mass of an object is numerically equal to its Newtonian inertial mass, though arguably the symbol m (or the corresponding term "mass") has different meanings in Newtonian and relativistic physics (see, e.g., Kuhn 1962, p. 101 ff. and Torretti 1990, p. 65 ff.).

In the Atwood machine, shown on the diagram, two masses M and m are suspended from the pulley; what is the magnitude of the acceleration of the system?

In the figure to the right, two boxes of masses m and 4m are in contact with each other on a frictionless surface; pushed by a force F, the pair accelerates at F/(5m).

Two massless springs, of spring constants k1 and k2, are hung from a horizontal support. A block of weight 12 N is suspended from the pair of springs, as shown above. When the block is in equilibrium, each spring is stretched an additional 24 cm. Thus, the equivalent spring constant of the two-spring system is 12 N / 24 cm = 0.5 N/cm.

The center of mass of the two masses of size M does not move. The unbalanced extra mass m accelerates downward at value a, so the spring balance reading S is an amount ma less than in the static case, i.e., S = (2M + m)g − ma. This reduces to S = 4M(M + m)g/(2M + m).

For the total mass: F = (M + m1 + m2)a. For the masses individually: m1a = T and m2g = T, so a = m2g/m1 and F = (M + m1 + m2)m2g/m1.

Problem: Near the surface of Earth, two masses, m1 = 1 kg and m2 = 3 kg, are connected by a string of negligible mass. The masses hang on opposite sides of a pulley with radius R.

A package of mass m is released from rest at a warehouse loading dock and slides down the 3.0 m high, frictionless chute shown below. Unfortunately, the last package of mass 2m sent down the chute has not been picked up and is still there. If the packages stick together, what is their speed after the collision?

A block of mass 5 kg is suspended by a string from a ceiling and is at rest. …

In the two-block (masses m1 and m2) and pulley system below, the pulley is frictionless and massless and the string around the pulley is massless. Find an expression for the acceleration when the blocks are released from rest. … a = g(m2 − m1 sin 28° − m1μk …

Two blocks, M1 of mass 8 kg and M2 of mass 2 kg, are connected by a massless rope over a massless pulley. M1 is a distance d = 2 m from the bottom of a frictionless incline at 30 degrees above the horizontal. Initially both blocks are at rest. The radius of the pulley is r = 0.25 m and the rope does not slip on the pulley.

Find the acceleration of a wedge of mass 4m placed on a smooth horizontal surface as two blocks of masses m and 2m slide over it. Q. 8: A pulley system is set up as in the figure.

1. Two particles of mass 1 kg and 3 kg move towards each other under their mutual force of attraction. No other force acts on them. When the relative velocity of approach of the two particles is 2 m/s, their centre of mass has a velocity of 0.5 m/s. When the relative velocity of approach becomes 3 m/s, the velocity of the centre of mass is 0.75 …

The coefficient of static friction is less than the coefficient of kinetic friction …

Two blocks A and B with masses m and 2m are in contact on a horizontal frictionless surface. A force F is applied to block A. …

A lamp of mass m is suspended from two cables of unequal length. The rope with tension T2 is shorter than the rope with tension T1.

The two blocks of masses M and 2M shown above initially travel at the same speed v but in opposite directions. They collide and stick together. How much mechanical energy is lost to other forms of energy during the collision? (A) Zero (B) (1/2)Mv² (C) (3/4)Mv² (D) (4/3)Mv² (E) (3/2)Mv²

In this system, a damping factor is neglected for simplicity. The mass m (kg) is suspended by the spring force. The spring force acting on the mass is given as the product of the spring constant k (N/m) and the displacement of the mass x (m), according to Hooke's law. A motion equation of the mass-spring mechanical system is expressed as Eq. (11.37):
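One of the fragments above (two masses m1 and m2 suspended together by a massless spring of constant k, with m1 removed at equilibrium) breaks off before its answer; the standard derivation, added here for completeness, runs as follows. With both masses attached, the spring stretch is x1 = (m1 + m2)g/k; with m2 alone it is x2 = m2g/k. Removing m1 therefore shifts the equilibrium point by x1 − x2 = m1g/k, and since m2 starts at rest at the old equilibrium position, that shift is the amplitude of the resulting oscillation: A = m1g/k.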
By Radhika Desikan

When we think of humans' exploration of space, we do not necessarily think about plants. However, humans need food (of which plants form a major part) wherever they are, and at present, feeding humans in space is not an easy task because of the complexity of transporting vast amounts of food into space and preparing the food in such a way that it can be stored for a long time. These methods of providing food for astronauts are costly. In recent times, scientists have been studying how plants can be grown in space in conditions of microgravity and in extraterrestrial soils, which are poor in nutrients. Gravity is a force of attraction between two objects; for example, Earth pulls you to keep you on the ground. Microgravity is a very small amount of gravity, which allows people or objects to appear weightless, as on the International Space Station. Overcoming the constraints of microgravity and of nutrient-poor extraterrestrial soils are the two major challenges that need to be solved in order for humans to be self-sustaining in space.

Mutualism on Earth

On Earth, around 95% of land plants do not live alone; they have a close association with a special type of fungus (or mold) called mycorrhiza in the soil. This is a symbiotic association, whereby both the plant and the mycorrhiza benefit from each other's presence. The mycorrhiza helps the plant by expanding its root system to better absorb minerals and nutrients from soils, while the mycorrhiza obtains sugars and fats from the plant. This mutualistic relationship allows the plant to grow better and yield more seeds when minerals are not abundant in the soil. Plants, like humans, produce hormones as part of their development and in response to stimuli. One type of plant hormone, called strigolactone, is secreted by plants when minerals are in short supply in the soil, thereby stimulating mycorrhiza to branch out more and grow towards the plant. Strigolactones therefore promote the symbiotic interaction and aid plant growth when soil conditions are not favorable.

Microgravity in Space

In a remarkable new study, scientists examined whether plants that produce more strigolactones can grow better in conditions that mimic the microgravity of space. Liu and colleagues used petunia plants as a model for the family Solanaceae, which includes aubergine, tomato, and potato. When petunia plants were grown under microgravity conditions, the growth of mycorrhiza in the soil was inhibited. However, when the petunia plants overproduced strigolactones, mycorrhiza growth was promoted and the plants were able to grow under microgravity and nutrient-limiting conditions. This study highlights the prospects of future sustainable space farming, using plant–mycorrhizal interactions with strigolactones as a tool. If soil conditions in extraterrestrial environments are found to be poor in nutrients, this approach may be the answer to long-term feeding of astronauts who need to survive in space. This study was published in the journal NPJ Microgravity.

— Radhika Desikan is a plant scientist who has taught plant science for several years, and has researched and published on the behavior of plants facing various abiotic and biotic stresses.
Radhika recently became interested in plant science outreach to schools and in communicating plant science to a younger audience.

Reference: Liu, G., Bollier, D., Gubeli, C., Peter, N., Arnold, P., Egli, M., & Borghi, L. (2018). Simulated microgravity and the antagonistic influence of strigolactone on plant nutrient uptake in low nutrient conditions. NPJ Microgravity, 4: 20. doi:10.1038/s41526-018-0054-z

The featured image for this article is a view from the International Space Station by NASA via Flickr. GotScience Magazine is published by the nonprofit Science Connected and is made possible by donations from readers like you.
History of Los Angeles

Old Los Angeles

The written history of Los Angeles city and county began with a Spanish colonial town populated by the 44 settlers known as "Los Pobladores". They established a settlement in Southern California that changed little in the three decades after 1848, when California became part of the United States. Much greater changes came from the completion of the Santa Fe railroad line from Chicago to Los Angeles in 1885. "Overlanders" flooded in, namely white Protestants from the Lower Midwest and South. Los Angeles had a strong economic base in farming, oil, tourism, real estate and movies. It grew rapidly with many suburban areas inside and outside the city limits. Hollywood made the city world-famous, and World War II brought new industry, especially high-tech aircraft construction. Politically the city was moderately conservative, with a weak labor union sector. Since the 1960s, growth has slowed—and traffic delays have become famous. Los Angeles was a pioneer in freeway development as the public transit system deteriorated. New arrivals, especially from Mexico and Asia, have transformed the demographic base since the 1960s. Old industries have declined, including farming, oil, military and aircraft, but tourism, entertainment and high tech remain strong.

Early history

By 3000 B.C., the area was occupied by the Hokan-speaking people of the Milling Stone Period who fished, hunted sea mammals, and gathered wild seeds. They were later replaced by migrants — possibly fleeing drought in the Great Basin — who spoke a Uto-Aztecan language called Tongva. The Tongva people called the Los Angeles region Yaa in Tongva. By the time of the arrival of the Spanish in the 18th century A.D., there were 250,000 to 300,000 native people in California and 5,000 in the Los Angeles basin. Since contact with Europeans, the people in what became Los Angeles were known as Gabrielinos and Fernandeños, after the missions associated with them. The land occupied and used by the Gabrielinos covered about 4,000 square miles. It included the enormous floodplain drained by the Los Angeles and San Gabriel rivers and the southern Channel Islands, including the Santa Barbara, San Clemente, Santa Catalina, and San Nicolas Islands. They were part of a sophisticated group of trading partners that included the Chumash to the west, the Cahuilla and Mojave to the east, and the Juaneños and Luiseños to the south. Their trade extended to the Colorado River and included slavery. The lives of the Gabrielinos were governed by a set of religious and cultural practices that included belief in creative supernatural forces. They worshipped Chinigchinix, a creator god, and Chukit, a female virgin god. Their Great Morning Ceremony was based on a belief in the afterlife. In a purification ritual, they drank tolguache, a hallucinogenic made from jimson weed and salt water. Their language was called Kizh or Kij, and they practiced cremation. Generations before the arrival of the Europeans, the Gabrielinos had identified and lived in the best sites for human occupation.
The survival and success of Los Angeles depended greatly on the presence of a nearby and prosperous Gabrielino village called Yaanga. Its residents provided the colonists with seafood, fish, bowls, pelts, and baskets. For pay, they dug ditches, hauled water, and provided domestic help. They often intermarried with the Mexican colonists.

Spanish era: 1769-1821

Plans for the pueblo

Although Los Angeles was a town founded by Mexican families from Sonora, it was the Spanish governor of California who named the settlement. In 1777, governor Felipe de Neve toured Alta California and decided to establish civic pueblos for the support of the military presidios. The new pueblos reduced the secular power of the missions by reducing the dependency of the military on them. At the same time, they promoted the development of industry and agriculture. Neve identified Santa Barbara, San Jose, and Los Angeles as sites for his new pueblos. His plans for them closely followed a set of Spanish city-planning laws contained in the Laws of the Indies promulgated by King Philip II in 1573. Those laws were responsible for laying the foundations of the largest cities in the region, including Los Angeles, San Francisco, Tucson, and San Antonio—as well as Sonoma, Monterey, Santa Fe, San Jose, and Laredo. The Spanish system called for an open central plaza, surrounded by a fortified church, administrative buildings, and streets laid out in a grid, defining rectangles of limited size to be used for farming (suertes) and residences (solares). It was in accordance with such precise planning—specified in the Laws of the Indies—that Governor Neve founded the pueblo of San Jose de Guadalupe, California's first municipality, on the great plain of Santa Clara on 29 November 1777.

The Los Angeles Pobladores ("townspeople") is the name given to the 44 original settlers, 22 adults and 22 children from Sonora, who founded the town. In December 1777, Viceroy Antonio María de Bucareli y Ursúa and Commandant General Teodoro de Croix gave approval for the founding of a civic municipality at Los Angeles and a new presidio at Santa Barbara. Croix put the California lieutenant governor Fernando Rivera y Moncada in charge of recruiting colonists for the new settlements. He was originally instructed to recruit 55 soldiers, 22 settlers with families, and 1,000 head of livestock that included horses for the military. After an exhausting search that took him to Mazatlán, Rosario, and Durango, Rivera y Moncada recruited only 12 settlers and 45 soldiers. Like the people of most towns in New Spain, they were a mix of Indian and Spanish backgrounds. The Quechan Revolt killed 95 settlers and soldiers, including Rivera y Moncada.

In his Reglamento, Neve directed that newly baptized Indians were no longer to reside in the mission but live in their traditional rancherías (villages). Neve's new plans for the Indians' role in his new town drew instant disapproval from the mission priests.

The settlers traveled in two parties. Zúñiga's party arrived at the mission on 18 July 1781. Because they had arrived with smallpox, they immediately were quarantined a short distance away from the mission. Members of the other party arrived at different times by August. They made their way to Los Angeles and probably received their land before September. The official date for the founding of the city is September 4, 1781. The families had arrived from New Spain earlier in 1781, in two groups, and some of them had most likely been working on their assigned plots of land since the early summer.
The name first given to the settlement is debated. Historian Doyce B. Nunis has said that the Spanish named it "El Pueblo de la Reina de los Angeles" ("The Town of the Queen of the Angels"). For proof, he pointed to a map dated 1785, where that phrase was used. Frank Weber, the diocesan archivist, replied, however, that the name given by the founders was "El Pueblo de Nuestra Señora de los Angeles de Porciuncula" ("the town of Our Lady of the Angels of Porciuncula"), and that the map was in error.

The town grew as soldiers and other settlers came into town and stayed. In 1784 a chapel was built on the Plaza. The pobladores were given title to their land two years later. By 1800, there were 29 buildings surrounding the Plaza: flat-roofed, one-story adobe buildings with thatched roofs made of tule. By 1821, Los Angeles had grown into a self-sustaining farming community, the largest in Southern California.

Each settler received four rectangles of land, suertes, for farming: two irrigated plots and two dry ones. When the settlers arrived, the Los Angeles floodplain was heavily wooded with willows and oaks. The Los Angeles river flowed all year. Wildlife was plentiful, including deer, antelope, and black bears, and even an occasional grizzly bear. There were abundant wetlands and swamps. Steelhead and salmon swam the rivers. The first settlers built a water system consisting of ditches (zanjas) leading from the river through the middle of town and into the farmlands. Indians were employed to haul fresh drinking water from a special pool farther upstream. The city was first known as a producer of fine wine grapes. The raising of cattle and the commerce in tallow and hides came later.

Because of the great economic potential for Los Angeles, the demand for Indian labor grew rapidly. Yaanga began attracting Indians from the islands and as far away as San Diego and San Luis Obispo. The village began to look like a refugee camp. Unlike the missions, the pobladores paid Indians for their labor. In exchange for their work as farm workers, vaqueros, ditch diggers, water haulers, and domestic help, they were paid in clothing and other goods as well as cash and alcohol. The pobladores bartered with them for prized sea-otter and seal pelts, sieves, trays, baskets, mats, and other woven goods. This commerce greatly contributed to the economic success of the town and the attraction of other Indians to the city.

During the 1780s, San Gabriel Mission became the object of an Indian revolt. The mission had expropriated all the suitable farming land; the Indians found themselves abused and forced to work on lands that they once owned. A young Indian healer, Toypurina, began touring the area, preaching against the injustices suffered by her people. She won over four rancherías and led them in an attack on the mission at San Gabriel. The soldiers were able to defend the mission, and arrested 17, including Toypurina.

In 1787, Governor Pedro Fages outlined his "Instructions for the Corporal Guard of the Pueblo of Los Angeles." The instructions included rules for employing Indians, not using corporal punishment, and protecting the Indian rancherías. As a result, Indians found themselves with more freedom to choose between the benefits of the missions and the pueblo-associated rancherías. In 1795, Sergeant Pablo Cota led an expedition from the Simi Valley through the Conejo-Calabasas region and into the San Fernando Valley. His party visited the rancho of Francisco Reyes.
They found the local Indians hard at work as vaqueros and caring for crops. Padre Vicente de Santa Maria was traveling with the party and made these observations:

All of pagandom (Indians) is fond of the pueblo of Los Angeles, of the rancho of Reyes, and of the ditches (water system). Here we see nothing but pagans, clad in shoes, with sombreros and blankets, and serving as muleteers to the settlers and rancheros, so that if it were not for the gentiles there were neither pueblos nor ranches. These pagan Indians care neither for the missions nor for the missionaries.

Not only economic ties but also marriage drew many Indians into the life of the pueblo. In 1784, only three years after the founding, the first recorded marriages in Los Angeles took place. The two sons of settler Basilio Rosas, Maximo and José Carlos, married two young Indian women, María Antonia and María Dolores. The construction on the Plaza of La Iglesia de Nuestra Señora de Los Ángeles took place between 1818 and 1822, much of it with Indian labor. The new church completed Governor Neve's planned transition of authority from mission to pueblo. The angelinos no longer had to make the bumpy 11-mile (18 km) ride to Sunday Mass at Mission San Gabriel.

Mexican Era: 1821-1848

Mexico's independence from Spain in 1821 was celebrated with great festivity throughout Alta California. No longer subjects of the king, people were now ciudadanos, citizens with rights under the law. In the plazas of Monterey, Santa Barbara, Los Angeles, and other settlements, people swore allegiance to the new government, the Spanish flag was lowered, and the flag of independent Mexico raised. Independence brought other advantages, including economic growth. There was a corresponding increase in population as more Indians were assimilated and others arrived from America, Europe, and other parts of Mexico. Before 1820, there were just 650 people in the pueblo. By 1841, the population had nearly tripled to 1,680.

Secularization of the missions

During the rest of the 1820s, agriculture and cattle ranching expanded, as did the trade in hides and tallow. The new church was completed, and the political life of the city developed. Los Angeles was separated from Santa Barbara administration. The system of ditches which provided water from the river was rebuilt. Trade and commerce further increased with the secularization of the California missions by the Mexican Congress in 1833. Extensive mission lands suddenly became available to government officials, ranchers, and land speculators. The governor made more than 800 land grants during this period, including a grant of over 33,000 acres in 1839 to Francisco Sepúlveda, which was later developed as the westside of Los Angeles. Much of this progress, however, bypassed the Indians of the traditional villages who were not assimilated into the mestizo culture. Being regarded as minors who could not think for themselves, they were increasingly marginalized and relieved of their land titles, often by being drawn into debt or alcohol.

In 1834, Pío Pico married María Ignacia Alvarado in the Plaza church. The wedding was attended by the entire population of the pueblo, 800 people, plus hundreds from elsewhere in Alta California. In 1835, the Mexican Congress declared Los Angeles a city, making it the official capital of Alta California. It was now the region's leading city. The same period also saw the arrival of many foreigners from the United States and Europe. They played a pivotal role in the U.S. takeover.
Early California settler John Bidwell included several historical figures in his recollection of the people he knew in Los Angeles in March 1845:

It then had probably two hundred and fifty people, of whom I recall Don Abel Stearns, John Temple, Captain Alexander Bell, William Wolfskill, Lemuel Carpenter, David W. Alexander; also of Mexicans, Pio Pico (governor), Don Juan Bandini, and others.

Upon arriving in Los Angeles in 1831, Jean-Louis Vignes bought 104 acres (0.42 km²) of land located between the original Pueblo and the banks of the Los Angeles River. He planted a vineyard and prepared to make wine. He named his property El Aliso after the centuries-old tree found near the entrance. The grapes available at the time, of the Mission variety, had been brought to Alta California by the Franciscan brothers at the end of the 18th century. They grew well and yielded large quantities of wine, but Jean-Louis Vignes was not satisfied with the results. He therefore decided to import better vines from Bordeaux: Cabernet Sauvignon, Cabernet Franc, and Sauvignon blanc. In 1840, Jean-Louis Vignes made the first recorded shipment of California wine. The Los Angeles market was too small for his production, and he loaded a shipment on the Monsoon, bound for Northern California. By 1842, he made regular shipments to Santa Barbara, Monterey and San Francisco. By 1849, El Aliso was the most extensive vineyard in California. Vignes owned over 40,000 vines and produced 150,000 bottles, or 1,000 barrels, per year.

In May 1846, the Mexican–American War started. Because of Mexico's inability to defend its northern territories, California was exposed to invasion. On August 13, 1846, Commodore Robert F. Stockton, accompanied by John C. Frémont, seized the town; Governor Pico had fled to Mexico. From Stockton and Frémont until late 1849, all of California had a military governor. After three weeks of occupation, Stockton left, leaving Lieutenant Archibald H. Gillespie in charge. Subsequent dissatisfaction with Gillespie and his troops led to an uprising. A force of 300 locals drove the Americans out, ending the first phase of the Battle of Los Angeles. Further small skirmishes took place. Stockton regrouped in San Diego and marched north with six hundred troops while Frémont marched south from Monterey with 400 troops. After a few skirmishes outside the city, the two forces entered Los Angeles, this time without bloodshed. Andrés Pico was in charge; he signed the so-called Treaty of Cahuenga (it was not a treaty) on 13 January 1847, ending the California phase of the Mexican–American War. The Treaty of Guadalupe Hidalgo, signed on 2 February 1848, ended the war and ceded California to the U.S.

Transitional era: 1848-1870

According to historian Mary P. Ryan, "The U.S. army swept into California with the surveyor as well as the sword and quickly translated Spanish and Mexican practices into cartographic representations." Under colonial law, land held by grantees was not disposable; it reverted to the government. It was determined that under U.S. property law, lands owned by the city were disposable. Also, the diseños (property sketches) held by residents did not secure title in an American court. California's new military governor Bennett C. Riley ruled that land could not be sold that was not on a city map. In 1849, Lieutenant Edward Ord surveyed Los Angeles to confirm and extend the streets of the city. His survey put the city into the real-estate business, creating its first real-estate boom and filling its treasury.
Street names were changed from Spanish to English. Further surveys and street plans replaced the original plan for the pueblo with a new civic center south of the Plaza and a new use of space. The fragmentation of Los Angeles real estate on the Anglo-Mexican axis had begun. Under the Spanish system, the residences of the power elite clustered around the Plaza in the center of town. In the new American system, the power elite resided in the outskirts. The emerging minorities, including the Chinese, Italians, French, and Russians, joined with the Mexicans near the Plaza. In 1848, the gold discovered in Coloma first brought thousands of miners from Sonora in northern Mexico on the way to the gold fields. So many of them settled in the area north of the Plaza that it came to be known as Sonoratown. During the Gold Rush years in northern California, Los Angeles became known as the "Queen of the Cow Counties" for its role in supplying beef and other foodstuffs to hungry miners in the north. Among the cow counties, Los Angeles County had the largest herds in the state, followed closely by Santa Barbara and Monterey Counties.

With the temporary absence of a legal system, the city quickly was submerged in lawlessness. Many of the New York regiment disbanded at the end of the war and charged with maintaining order were thugs and brawlers. They roamed the streets, joined by gamblers, outlaws, and prostitutes driven out of San Francisco and the mining towns of the north by Vigilance Committees or lynch mobs. Los Angeles came to be known as the "toughest and most lawless city west of Santa Fe." Some of the residents resisted the new Anglo powers by resorting to banditry against the gringos. In 1856, Juan Flores threatened Southern California with a full-scale revolt. He was hanged in Los Angeles in front of 3,000 spectators. Tiburcio Vasquez, a legend in his own time among the Mexican-born population for his daring feats against the Anglos, was captured in present-day Santa Clarita, California on May 14, 1874. He was found guilty of two counts of murder by a San Jose jury in 1874, and was hanged there in 1875. Los Angeles had several active "Vigilance Committees" during that era. Between 1850 and 1870, mobs carried out approximately 35 lynchings of Mexicans—more than four times the number that occurred in San Francisco. Los Angeles was described as "undoubtedly the toughest town of the entire nation." The homicide rate between 1847 and 1870 averaged 158 per 100,000 (13 murders per year), which was 10 to 20 times the annual murder rates for New York City during the same period. The fear of Mexican violence and the racially motivated violence inflicted on them further marginalized the Mexicans, greatly reducing their economic and political opportunities.

John Gately Downey, the seventh governor of California, was sworn into office on January 14, 1860, thereby becoming the first governor from Southern California. Governor Downey was born and raised in Castlesampson, County Roscommon, Ireland, and came to Los Angeles in 1850. He was responsible for keeping California in the Union during the Civil War.

Plight of the Indians

In 1836, the Indian village of Yaanga was relocated near the future corner of Commercial and Alameda Streets. In 1845, it was relocated again to present-day Boyle Heights. With the coming of the Americans, disease took a great toll among Indians. Self-employed Indians were not allowed to sleep over in the city.
They faced increasing competition for jobs as more Mexicans moved into the area and took over the labor force. Those who loitered or were drunk or unemployed were arrested and auctioned off as laborers to those who paid their fines. They were often paid for work with liquor, which only increased their problems. Los Angeles was incorporated as an American city on April 4, 1850. Five months later, California was admitted into the Union. Although the Treaty of Guadalupe Hidalgo required the U.S. to grant citizenship to the Indians of former Mexican territories, the U.S. did not get around to doing that for another 80 years. The Constitution of California deprived Indians of any protection under the law, considering them as non-persons. As a result, it was impossible to bring an Anglo to trial for killing an Indian or forcing Indians off their properties. Anglos concluded that the "quickest and best way to get rid of (their) troublesome presence was to kill them off, (and) this procedure was adopted as a standard for many years."

When New England author and Indian-rights activist Helen Hunt Jackson toured the Indian villages of Southern California in 1883, she was appalled by the racism of the Anglos living there. She wrote that they treated Indians worse than animals, hunted them for sport, robbed them of their farmlands, and brought them to the edge of extermination. While Indians were depicted by whites as lazy and shiftless, she found most of them to be hard-working craftsmen and farmers. Jackson's tour inspired her to write her 1884 novel Ramona, which she hoped would give a human face to the atrocities and indignities suffered by the Indians in California. And it did. The novel was enormously successful, inspiring four movies and a yearly pageant in Hemet, California. Many of the Indian villages of Southern California survived because of her efforts, including Morongo, Cahuilla, Soboba, Temecula, Pechanga, and Warner Hot Springs. Remarkably, the Gabrielino Indians, now called Tongva, also survived. In 2006, the Los Angeles Times reported that there were 2,000 of them still living in Southern California. Some were organizing to protect burial and cultural sites. Others were trying to win federal recognition as a tribe to operate a casino.

The city's first newspaper, the Los Angeles Star, was a bilingual publication which began its run in 1851.

Industrial expansion and growth: 1870-1913

In the 1870s, Los Angeles was still little more than a village of 5,000. By 1900, there were over 100,000 occupants of the city. Several men actively promoted Los Angeles, working to develop it into a great city and to make themselves rich. Angelenos set out to remake their geography to challenge San Francisco with its port facilities, railway terminal, banks and factories. The Farmers and Merchants Bank of Los Angeles was the first incorporated bank in Los Angeles, founded in 1871 by John G. Downey and Isaias W. Hellman. Wealthy easterners who came as tourists recognized the growth opportunities and invested heavily in the region. During the 1880s and 1890s, the central business district (CBD) grew along Main and Spring streets towards Second Street and beyond. Much of Los Angeles County was farmland, with an emphasis on cattle, dairy products, vegetables and citrus fruits. After 1945, most of the farmland was converted into housing tracts. The town continued to grow at a moderate pace. Railroads finally arrived to connect with the Central Pacific and San Francisco in 1876, but the impact was small.
Much greater was the impact of the Santa Fe system (through its subsidiary California Southern Railroad) in 1885. The Santa Fe and Southern Pacific lines provided direct connections to the East, competed vigorously for business with much lower rates, and stimulated economic growth. Tourists poured in by the thousands every week, and many planned on returning or resettling.

The city still lacked a modern harbor. Phineas Banning excavated a channel out of the mud flats of San Pedro Bay leading to Wilmington in 1871. Banning had already laid track and shipped in locomotives to connect the port to the city. Harrison Gray Otis, founder and owner of the Los Angeles Times, and a number of business colleagues embarked on reshaping southern California by expanding San Pedro into a deep-water harbor using federal dollars. This put them at loggerheads with Collis P. Huntington, president of the Southern Pacific Railroad Company and one of California's "Big Four" investors in the Central Pacific and Southern Pacific. (The "Big Four" are sometimes numbered among the "robber barons" of the Gilded Age.) The line reached Los Angeles in 1876, and Huntington directed it to a port at Santa Monica, where the Long Wharf was built. In April 1872, John G. Downey went to San Francisco and successfully represented Los Angeles in discussions with Collis Huntington over bringing the Southern Pacific Railroad through Los Angeles.

In 1876 the Newhall railroad tunnel, located 27 miles (43 km) north of Los Angeles between the town of San Fernando and Lyons Station Stagecoach Stop (now Newhall), was completed, providing the final link from San Francisco to Los Angeles for the railroad. The 6,940-foot-long (2,115.3 m) railroad tunnel took a year and a half to complete. More than 1,500 mostly Chinese laborers took part in the tunnel construction, which began at the south end of the mountain on March 22, 1875. Many of them had prior experience working on Southern Pacific's tunnels in the Tehachapi Pass. Due to the sandstone composition of the mountain, which was saturated with water and oil, frequent cave-ins occurred, and the bore had to be constantly shored up by timbers during excavation. The initial location for the north end of the tunnel near Newhall was abandoned due to frequent cave-ins caused by oil-soaked rock. The north-end excavation commenced in June 1875. Water was a constant problem during construction, and pumps were utilized to keep the tunnel from flooding. Workers digging from the north and south ends of the tunnel came face to face on July 14, 1876. The bores from each end were only a half inch out of line, with dimensions of 22 feet (6.7 m) high, 16.5 feet (5.0 m) wide at the bottom, and over 18 feet (5.5 m) at the shoulders. Track was laid in place soon after the tunnel dig was completed, and the first train passed through on August 12, 1876. On September 4, Charles Crocker notified Southern Pacific that the track had been completed on the route between San Francisco and Los Angeles.

The San Pedro forces eventually prevailed (though it required Banning and Downey to turn their railroad over to the Southern Pacific). Work on the San Pedro breakwater began in 1899 and was finished in 1910. Otis, Chandler, and their allies secured a change in state law in 1909 that allowed Los Angeles to absorb San Pedro and Wilmington, using a long, narrow corridor of land to connect them with the rest of the city.
The struggle over the site of the future Los Angeles harbor became known as the Free Harbor Fight. In 1898, Henry Huntington and a San Francisco syndicate led by Isaias W. Hellman purchased five trolley lines, consolidated them into the Los Angeles Railway (the "yellow cars"), and two years later founded the Pacific Electric Railway (the "red cars"). The Los Angeles Railway served the city, and the Pacific Electric Railway served the rest of the county. At its peak, the Pacific Electric was the largest electrically operated interurban railway in the world. Over 1,000 miles (1,600 km) of tracks connected Los Angeles with Hollywood, Pasadena, San Pedro, Venice Beach, Santa Monica, Pomona, San Bernardino, Long Beach, Santa Ana, Huntington Beach, and other points, and the system was recognized as the best public transportation system in the world. Oil was discovered by Edward L. Doheny in 1892, near the present location of Dodger Stadium. The Los Angeles City Oil Field was the first of many fields in the basin to be exploited, and in 1900 and 1902, respectively, the Beverly Hills Oil Field and Salt Lake Oil Field were discovered a few miles west of the original find. Los Angeles became a center of oil production in the early 20th century, and by 1923, the region was producing one-quarter of the world's total supply; it is still a significant producer, with the Wilmington Oil Field having the fourth-largest reserves of any field in California. At the same time that the L.A. Times was spurring enthusiasm for the expansion of Los Angeles, it was trying to turn the city into a union-free, open-shop town. Fruit growers and local merchants who had opposed the Pullman strike in 1894 subsequently formed the Merchants and Manufacturers Association (M & M) to support the L.A. Times's anti-union campaign. The California labor movement, with its strength concentrated in San Francisco, had largely ignored Los Angeles for years. That changed in 1907, however, when the American Federation of Labor decided to challenge the open shop of "Otis Town." In 1909, the city fathers banned public speaking on public streets and on private property everywhere except the Plaza, which locals had long claimed as an open forum. The area was of particular concern to the owners of the L.A. Times, Harrison Gray Otis and his son-in-law Harry Chandler. This conflict came to a head with the bombing of the Times in 1910. Two months later, the Llewellyn Iron Works near the Plaza was bombed. A meeting of the Chamber of Commerce and the Merchants and Manufacturers Association was hastily called. The L.A. Times wrote: "radical and practical matters (were) considered, and steps taken for the adaption of such as are adequate to cope with a situation tardily recognized as the gravest that Los Angeles has ever been called upon to face." The authorities indicted John and James McNamara, both associated with the Iron Workers Union, for the bombing; the famed Chicago defense lawyer Clarence Darrow represented them. While the McNamara brothers were awaiting trial, Los Angeles was preparing for a city election. Job Harriman, running on the socialist ticket, was challenging the establishment's candidate. Harriman's campaign, however, was tied to the asserted innocence of the McNamaras. But the defense was in trouble: the prosecution not only had evidence of the McNamaras' complicity, but had trapped Darrow in a clumsy attempt to bribe one of the jurors.
On December 1, 1911, four days before the final election, the McNamaras entered a plea of guilty in return for prison terms. Harriman lost badly. On Christmas Day, 1913, police attempted to break up an IWW rally of 500 taking place in the Plaza. Encountering resistance, the police waded into the crowd, attacking people with their clubs. One citizen was killed. As protests grew in the aftermath, the authorities attempted to impose martial law. Seventy-three people were arrested in connection with the riots. The city council introduced new measures to control public speaking. The Times scapegoated foreign elements, even labeling onlookers and taco vendors "cultural subversives." The open-shop campaign went from strength to strength, though not without opposition from workers. By 1923, the Industrial Workers of the World had made considerable progress in organizing the longshoremen in San Pedro and led approximately 3,000 men to walk off the job. With the support of the L.A. Times, a special "Red Squad" was formed within the Los Angeles Police Department; it arrested so many strikers that the city's jails were soon filled. Some 1,200 dock workers were corralled in a special stockade in Griffith Park. The L.A. Times wrote approvingly that "stockades and forced labor were a good remedy for IWW terrorism." Public meetings were outlawed in San Pedro; Upton Sinclair was arrested at Liberty Hill in San Pedro for reading the United States Bill of Rights on the private property of a strike supporter (the arresting officer told him "we'll have none of 'that Constitution stuff'"); and blanket arrests were made at union gatherings. The strike was defeated after members of the Ku Klux Klan and the American Legion raided the IWW hall and attacked the men, women, and children meeting there. Los Angeles developed another industry in the early 20th century when movie producers from the East Coast relocated there. These new employers were likewise afraid of unions and other social movements: during Upton Sinclair's campaign for governor of California under the banner of his "End Poverty In California" (EPIC) movement, Louis B. Mayer turned MGM's Culver City studio into the unofficial headquarters of the organized campaign against EPIC. MGM produced fake newsreel interviews in which whiskered actors with Russian accents voiced their enthusiasm for EPIC, along with footage of central-casting hobos huddled at the borders of California, waiting to enter and live off the bounty of its taxpayers once Sinclair was elected. Sinclair lost. Los Angeles also acquired another industry in the years just before World War II: the garment industry. At first devoted to regional merchandise such as sportswear, the industry eventually grew to be the second-largest center of garment production in the United States. The immigrants arriving in the city to find jobs sometimes brought the revolutionary zeal and idealism of their homelands. These included anarchists such as the Russian-born Emma Goldman and Ricardo Flores Magón and his brother Enrique of the Partido Liberal Mexicano. They were later joined by the socialist candidate for mayor Job Harriman, Chinese revolutionaries, the novelist Upton Sinclair, "Wobblies" (members of the Industrial Workers of the World, the IWW), and Socialist and Communist labor organizers such as the Japanese-American Karl Yoneda and the Russian-born New Yorker Meyer Baylin.
The socialists were the first to set up a soapbox in the Plaza, which served as the location of union rallies, protests, and riots as the police attempted to break up meetings. Unions began to make progress in organizing these workers as the New Deal arrived in the 1930s. One influential strike was the Los Angeles Garment Workers' Strike of 1933, one of the first strikes for union recognition in which Mexican immigrant workers played a prominent role. The unions made even greater gains in the war years, as Los Angeles grew further. Today, the ethnic makeup of the city and the dominance of progressive political views among its voters have made Los Angeles a strong union town. However, many garment workers in central L.A., most of whom are Mexican immigrants, still work in sweatshop conditions.

Battle of the Los Angeles River

The Los Angeles River flowed clear and fresh all year, supporting 45 Gabrielino villages in the area. The source of the river was the aquifer under the San Fernando Valley, supplied with water from the surrounding mountains. The rising of the underground bedrock at the Glendale Narrows (near today's Griffith Park) squeezed the water to the surface at that point. Then, through much of the year, the river emerged from the valley to flow across the floodplain 20 miles (32 km) to the sea. The area also offered other streams, lakes, and artesian wells. Early settlers were more than a little discouraged by the region's erratic and unpredictable weather. They watched helplessly as long droughts weakened and starved their livestock, then saw the survivors drowned and carried off by ferocious storms. During the years of little rain, people built too close to the riverbed, only to see their homes and barns later swept to sea during a flood. The Los Angeles Plaza itself had to be moved twice because it had been built too close to the riverbed. Worse, floods changed the river's course. When the settlers arrived, the river joined Ballona Creek to discharge into Santa Monica Bay. A fierce storm in 1835 diverted its course to Long Beach, where it remains today. Early citizens could not even maintain a footbridge over the river from one side of the city to the other. After the American takeover, the city council authorized $20,000 for a contractor to build a substantial wooden bridge across the river. The first storm to come along dislodged the bridge, used it as a battering ram to break through the embankment, and scattered its timbers all the way to the sea. Some of the most concentrated rainfall in the history of the United States has occurred in the San Gabriel Mountains north of Los Angeles and Orange Counties. On April 5, 1926, a rain gauge in the San Gabriels collected one inch in one minute. In January 1969, more water fell on the San Gabriels in nine days than New York City sees in a year. In February 1978, almost a foot of rain fell in 24 hours, and, in one blast, an inch and a half in five minutes. This storm caused massive debris flows throughout the region, one of them unearthing the corpses in the Verdugo Hills Cemetery and depositing them in the town below. Another wiped out the small town of Hidden Springs on a tributary of the Big Tujunga River, killing 13 people. The greatest daily rainfall recorded in California was 26.12 inches on January 23, 1943, at Hoegees near Mt. Wilson in the San Gabriel Mountains. Fifteen other stations reported over 20 inches in two days from the same storm.
Forty-five others reported 70% of their average annual rainfall in two days. Quibbling between city and county governments delayed any response to the flooding until a massive storm in 1938 inundated Los Angeles and Orange Counties, and the federal government stepped in. To carry floodwater to the sea as quickly as possible, the Army Corps of Engineers paved the beds of the river and its tributaries. The Corps also built several dams and catchment basins in the canyons along the San Gabriel Mountains to reduce the debris flows. It was an enormous project, taking years to complete. Today, the Los Angeles River functions mainly as a flood-control channel. A drop of rain falling in the San Gabriel Mountains now reaches the sea faster than a car could drive the same distance. During today's rainstorms, the volume of the Los Angeles River at Long Beach can be as large as that of the Mississippi River at St. Louis. The drilling of wells and pumping of water from the San Fernando Valley aquifer dried up the river by the 1920s. By 1980, the aquifer was supplying drinking water for 800,000 people. In that year, the aquifer was discovered to be contaminated. Many wells were shut down, and the area qualified as a Superfund site.

Water from a distance

For its first 120 years, the Los Angeles River supplied the town with ample water for homes and farms. It was estimated that the annual flow could have supported a town of 250,000 people, had the water been managed properly. But Angelenos were among the more profligate users of water in the world. In the semi-arid climate, they were forever watering their lawns, gardens, orchards, and vineyards. Later, they needed more to support the growth of commerce and manufacturing. By the beginning of the 20th century, the town realized it would quickly outgrow its river and would need new sources of water. Legitimate concerns about water supply were exploited to gain backing for a huge engineering and legal effort to bring more water to the city and allow more development. The city fathers had their eyes on the Owens River, about 250 miles (400 km) northeast of Los Angeles in Inyo County, near the Nevada state line. It was a permanent stream of fresh water fed by the melted snows of the eastern Sierra Nevada. It flowed through the Owens River Valley before emptying into the shallow, saline Owens Lake, where it evaporated. Sometime between 1899 and 1903, Harrison Gray Otis and his son-in-law and successor, Harry Chandler, bought up cheap land on the northern outskirts of Los Angeles in the San Fernando Valley. At the same time, they enlisted the help of William Mulholland, chief engineer of the Los Angeles Water Department (later the Los Angeles Department of Water and Power, or LADWP), and J. B. Lippincott of the United States Reclamation Service. Lippincott performed water surveys in the Owens Valley for the Service while secretly receiving a salary from the City of Los Angeles. He succeeded in persuading Owens Valley farmers and mutual water companies to pool their interests and surrender the water rights to 200,000 acres (800 km²) of land to Fred Eaton, Lippincott's agent and a former mayor of Los Angeles. Lippincott then resigned from the Reclamation Service, took a job with the Los Angeles Water Department as assistant to Mulholland, and turned over the Reclamation Service's maps, field surveys, and stream measurements to the city. Those studies served as the basis for designing what was then the longest aqueduct in the world.
By July 1905, the Times began to warn the voters of Los Angeles that the county would soon dry up unless they voted bonds for building the aqueduct. Artificial drought conditions were created: water was run into the sewers to decrease the supply in the reservoirs, and residents were forbidden to water their lawns and gardens. On election day, the people of Los Angeles voted for $22.5 million worth of bonds to build an aqueduct from the Owens River and to defray other expenses of the project. With this money, and with a special Act of Congress allowing cities to own property outside their boundaries, the city acquired the land that Eaton had obtained from the Owens Valley farmers and started to build the aqueduct. At the opening of the Los Angeles Aqueduct on November 5, 1913, Mulholland's entire speech was five words: "There it is. Take it."

Boom town: 1913–1941

Hollywood has been synonymous worldwide with the film industry for over a hundred years. It was incorporated as the City of Hollywood in 1903 but merged into Los Angeles in 1910. In the early 1900s, moviemakers from New York found the sunny, temperate weather more suitable for year-round location shooting. Hollywood boomed into the cinematic heart of the United States and has been the home and workplace of actors, directors, and singers ranging from small independents to the world-famous, leading to the development of related television and music industries.

Swimming pool desegregation

An end to racial segregation in municipal swimming pools was ordered in summer 1931 by a Superior Court judge after Ethel Prioleau sued the city, complaining that, as a black woman, she was not allowed to use the pool in nearby Exposition Park but had to travel 3.6 miles to the designated "negro swimming pool."

Summer Olympics

Los Angeles hosted the 1932 Summer Olympics. The Los Angeles Memorial Coliseum, which had opened in May 1923 with a seating capacity of 76,000, was enlarged to accommodate over 100,000 spectators for Olympic events. It is still in use by the USC Trojans football team. Olympic Boulevard, a major thoroughfare, honors the occasion.

Annexations and consolidations

The City of Los Angeles mostly remained within its original 28-square-mile (73 km²) land grant until the 1890s. The original city limits are visible even today in the street layout, which changes from a north–south grid outside the original land grant to one shifted roughly 15 degrees east of true north in and around the area now known as Downtown. The first large additions to the city were the districts of Highland Park and Garvanza to the north, and the South Los Angeles area. In 1906, the approval of the Port of Los Angeles and a change in state law allowed the city to annex the Shoestring, or Harbor Gateway, a narrow and crooked strip of land leading from Los Angeles south towards the port. The port cities of San Pedro and Wilmington were added in 1909, and the city of Hollywood was added in 1910, bringing the city up to 90 square miles (233 km²) and giving it a vertical "barbell" shape. Also added that year were Colegrove, a suburb west-northwest of the city near Hollywood; Cahuenga, a township northwest of the former city limits; and part of Los Feliz. The opening of the Los Angeles Aqueduct provided the city with four times as much water as it required, and the offer of water service became a powerful lure for neighboring communities.
The city, saddled with a large bond debt and surplus water, used annexation to lock in customers, refusing to supply water to communities that remained outside its borders. Harry Chandler, a major investor in San Fernando Valley real estate, used his Los Angeles Times to promote development near the aqueduct's outlet. By referendum of the residents, 170 square miles (440 km²) of the San Fernando Valley, along with the Palms district, were added to the city in 1915, almost tripling its area, mostly towards the northwest. Over the next 17 years, dozens of additional annexations brought the city's area to 450 square miles (1,165 km²) in 1932. (Numerous small annexations brought the total area of the city up to 469 square miles (1,215 km²) as of 2004.) Most of the annexed communities were unincorporated towns, but 10 incorporated cities were consolidated into Los Angeles: Wilmington (1909), San Pedro (1909), Hollywood (1910), Sawtelle (1922), Hyde Park (1923), Eagle Rock (1923), Venice (1925), Watts (1926), Barnes City (1927), and Tujunga (1932).

Civic corruption and police brutality

The downtown business interests, always eager to attract business and investment to Los Angeles, were also eager to distance their town from the criminal underworld that defined the stories of Chicago and New York. In spite of their concerns, massive corruption in City Hall and the Los Angeles Police Department (LAPD), and the fight against it, were dominant themes in the city's story from the early 20th century to the 1950s. In the 1920s, for example, it was common practice for the city's mayor, councilmen, and attorneys to take contributions from madams, bootleggers, and gamblers. The mayor's top aide was involved with a protection racket. Thugs with eastern Mafia connections were involved in often violent conflicts over bootlegging and horse-racing turf. The mayor's brother was selling jobs in the LAPD. In 1933, the new mayor, Frank Shaw, started awarding contracts without competitive bids and paying city employees to favor crony contractors. The city's Vice Squad functioned citywide as the enforcer and collector of the city's organized crime, with revenues going into the pockets of city officials right up to the mayor. In 1937, Clifford Clinton, owner of downtown's Clifton's Cafeteria, led a citizens' campaign to clean up City Hall. He and other reformers served on a grand jury investigating the charges of corruption. In a minority report, the reformers wrote: A portion of the underworld profits have been used in financing campaigns [of] ... city and county officials in vital positions ... [While] the district attorney's office, sheriff's office, and Los Angeles Police Department work in complete harmony and never interfere with ... important figures in the underworld. The police Intelligence Squad spied on anyone even suspected of criticizing the police, including journalist Carey McWilliams, the district attorney, Judge Bowron, and two of the county supervisors. The persistent courage of Clinton, Superior Court judge (and later mayor) Fletcher Bowron, and former LAPD detective Harry Raymond turned the tide. The police became so nervous that the Intelligence Squad blew up Raymond's car and nearly killed him. The public was so enraged by the bombing that it quickly voted Shaw out of office, in one of the first big-city recalls in the country's history. The head of the Intelligence Squad was convicted and sentenced to two years to life. Police Chief James Davis and 23 other officers were forced to resign.
Fletcher Bowron replaced Shaw as mayor in 1938 and presided over one of the more dynamic periods in the history of the city. His "Urban Reform Revival" brought major changes to the government of Los Angeles. In 1950, he appointed William H. Parker as chief of police. Parker pushed for independence from political pressure, which enabled him to create a more professional police force. The public supported him and voted in charter changes that insulated the police department from the rest of government. Through the 1960s, the LAPD was promoted as one of the more efficient departments in the world. But Parker's administration was increasingly charged with police brutality, a consequence of his recruiting officers from the South with strong anti-black and anti-Mexican attitudes. Reaction to police brutality resulted in the Watts riots of 1965 and again, after the Rodney King beating, in the Los Angeles riots of 1992. Charges of police brutality dogged the department through the end of the 20th century. In the late 1990s, as a result of the Rampart scandal involving the misconduct of some 70 officers, the federal government intervened and assumed oversight of the department under a consent decree. Police reform has since been a major issue confronting L.A.'s recent mayors. Social critic Mike Davis argued that attempts to "revitalize" downtown Los Angeles decrease public space and further alienate poor and minority populations. This enforced geographical separation of diverse populations goes back to the city's earliest days.

LAX: Los Angeles International Airport

Mines Field opened as a private airport in 1930, and the city purchased it to serve as its municipal airfield in 1937. The name became Los Angeles Airport in 1941 and Los Angeles International Airport in 1949. In the 1930s, the main airline airports were Burbank Airport (then known as Union Air Terminal, and later Lockheed) in Burbank and the Grand Central Airport in Glendale. In 1940, the airlines were all at Burbank except for Mexicana's three departures a week from Glendale; in late 1946 most airline flights moved to LAX, but Burbank always retained a few. Since then, the story of LAX has been one of relentless expansion, with hotels and warehouses spinning off nearby.

World War II: 1941–1945

During World War II, Los Angeles grew as a center for the production of aircraft, ships, war supplies, and ammunition. Aerospace employers headquartered in the Los Angeles metropolitan area, such as Hughes Aircraft Company, Northrop Corporation, Douglas Aircraft Company, Vultee Aircraft (merged into Convair in 1943), and Lockheed Corporation, met the nation's demand for the war effort, producing strategic bombers and fighter aircraft such as the B-17, B-25, A-36, and P-51 Mustang needed to strike the war machine of the Axis powers. As a result, the Los Angeles area grew faster than any other major metropolitan area in the U.S. and experienced more of the traumas of war while doing so. By 1943, the population of Los Angeles County was larger than that of 37 states, and the county was home to one in every 40 U.S. citizens, as millions across the U.S. came to Southern California to find employment in the defense industries. The Japanese-American community in L.A. was deeply affected after Japan's attack on Pearl Harbor pulled the U.S. into World War II, and Americans feared that a fifth column was widespread within the community. In response, President Franklin D.
Roosevelt issued Executive Order 9066, authorizing military commanders to exclude "any or all persons" from certain areas in the name of national defense. The Western Defense Command began ordering Japanese Americans living on the West Coast to present themselves for "evacuation" from the newly created military zones. This included many Los Angeles families, some 80,000 people in all, who were relocated to Japanese-American internment camps for the duration of the war. The war also lured a large number of African Americans from the rural, impoverished Southern states to the Los Angeles area in the second chapter of the Great Migration, driven by industrial manpower shortages and Executive Order 8802, which prohibited discrimination in wartime defense industries. Lonnie Bunch, a longtime historian with the Smithsonian Institution, wrote, "Between 1942-1945, some 340,000 Blacks settled in California, 200,000 of whom migrated to Los Angeles." Most of these migrants came from south-central states like Louisiana, Texas, Mississippi, Arkansas, and Oklahoma. African Americans particularly benefited from defense jobs created in Los Angeles County during the war, especially on Terminal Island, one of the first places on the West Coast where they were integrated into defense-related work. Though Jim Crow laws did not exist in Los Angeles as they did in the South, black migrants continued to face racial discrimination in most aspects of life, especially widespread housing segregation and redlining during and after the war, justified by overcrowding and perceived lower property values; they were shut out of opportunities in affluent white areas and confined to the majority-black district of South Central Los Angeles. Like a few other wartime industrial cities in the U.S., Los Angeles experienced racial conflict, most notably the Zoot Suit Riots of June 1943, in which American servicemen and civilians of European descent attacked young Mexican-Americans in zoot suits. Many military personnel regarded the zoot suits, which used a great deal of fabric, as unpatriotic and flamboyant in time of war, a resentment compounded by widespread racism that cast Mexicans and Mexican-Americans as unintelligent and inferior. The Los Angeles Police Department stood by as the rioting happened and arrested hundreds of Hispanic residents instead of the attacking servicemen and civilians, charging them with offenses ranging from "rioting" to "vagrancy". Similar riots against Latinos erupted in other cities in California, Texas, and Arizona, as well as in northern cities like Chicago, Philadelphia, and Detroit. While Los Angeles County never faced enemy bombing or invasion, it nevertheless became an integral part of the American Theater on the night of February 24–25, 1942, during the false Battle of Los Angeles, which occurred a day after the Japanese naval bombardment of Ellwood in Santa Barbara, California, 80 miles from Los Angeles. Reacting to a report that enemy planes had been spotted over L.A., anti-aircraft gunners stationed in the city fired on what was later determined to be a U.S. Army weather balloon. The barrage lasted two hours; a total of five people died in the "Battle of Los Angeles", from car crashes in the blacked-out darkness and from heart attacks brought on by the loud anti-aircraft bursts.
The Japanese did have plans to bomb Los Angeles with giant seaplanes in anticipation of a proposed large-scale invasion of the continental United States. Those raids never came about, though Japan had the planes and the wherewithal to mount such a raid throughout the war.

Postwar: Baby boomers

After the war, hundreds of land developers bought land cheap, subdivided it, built on it, and got rich. Real-estate development replaced oil and agriculture as Southern California's principal industry. In July 1955, Walt Disney opened Disneyland, the world's first theme park, in Anaheim. Nine years later, Universal Studios opened its first theme park with the public studio-tour tram at Universal City near L.A. This later touched off a theme-park war between Disney and Universal that continues to the present day. In 1958, Major League Baseball's Dodgers and Giants left New York City for Los Angeles and San Francisco, respectively. The population of California expanded dramatically, to nearly 20 million by 1970. This was the coming-of-age of the baby boom. By 1950, Los Angeles was an industrial and financial giant created by war production and migration. Los Angeles assembled more cars than any city other than Detroit, made more tires than any city but Akron, Ohio, made more furniture than Grand Rapids, Michigan, and stitched more clothes than any city except New York. In addition, it was the national capital for the production of motion pictures, Army and Navy training films, radio programs and, within a few years, television shows. Construction boomed as tract houses were built in ever-expanding suburban communities financed by the GI Bill for veterans and the Federal Housing Administration. Popular music of the period bore titles such as "California Girls", "California Dreamin'", "San Francisco", "Do You Know the Way to San Jose?" and "Hotel California". These reflected the Californian promise of easy living in a paradisiacal climate. The surfing culture burgeoned. Los Angeles continued to spread, particularly with the development of the San Fernando Valley and the building of the freeways launched in the 1940s. When the local streetcar system went out of business, Los Angeles became a city built around the automobile, with all the social, health, and political problems that this dependence produces. The famed urban sprawl of Los Angeles became a notable feature of the town, and the pace of growth accelerated in the decades after the war. The San Fernando Valley, sometimes called "America's Suburb", became a favorite site of developers, and the city grew past its downtown roots toward the ocean and towards the east. The immense problem with air pollution (smog) that had developed by the early 1970s also caused a backlash: schools in urban areas were routinely closed for "smog days" when ozone levels became too unhealthy, and the hills surrounding urban areas were often invisible from even a mile away. Californians were ready for change. Over the next three decades, California enacted some of the strictest anti-smog regulations in the United States and has been a leader in encouraging nonpolluting strategies for various industries, including automobiles. For example, carpool lanes normally allow only vehicles with two or more occupants (three on some freeways), but electric cars can use the lanes with a single occupant.
As a result, smog is significantly reduced from its peak, although local Air Quality Management Districts still monitor the air and generally encourage people to avoid polluting activities on hot days when smog is expected to be at its worst. Beginning November 6, 1961, Los Angeles suffered three days of destructive brush fires. The Bel Air–Brentwood and Santa Ynez fires destroyed 484 expensive homes and 21 other buildings, along with 15,810 acres (64 km²) of brush in the Bel Air, Brentwood, and Topanga Canyon neighborhoods. Most of the homes destroyed had wooden shake roofs, which not only led to their own loss but also sent firebrands up to three miles (5 km) away. Despite this, few changes were made to the building codes to prevent future losses. The repeal of a law limiting building height and the controversial redevelopment of Bunker Hill, which destroyed a picturesque though decrepit neighborhood, ushered in the construction of a new generation of skyscrapers. Bunker Hill's 62-floor First Interstate Building (later named Aon Center) was the tallest building in Los Angeles when it was completed in 1973. It was surpassed in 1990 by the Library Tower (now called the U.S. Bank Tower) a few blocks to the north, a 1,018-foot (310 m) building that is the tallest west of the Mississippi. Outside of Downtown, the Wilshire Corridor is lined with tall buildings, particularly near Westwood. Century City, developed on the former 20th Century Fox back lot, has become another center of high-rise construction on the Westside. During the latter decades of the 20th century, the city saw a massive increase in street gangs. At the same time, crack cocaine, whose trade the gangs dominated, became widely available in the 1980s. Although gangs were disproportionately concentrated in lower-income inner-city sections, fear of them spread citywide. Since the early 1990s, the city has seen a decrease in crime and gang violence, along with rising housing prices, revitalization, urban development, and heavy police vigilance in many parts of the city. Its earlier reputation nevertheless led to Los Angeles being referred to as "the gang capital of America". A subway system, developed and built through the 1980s as a major goal of Mayor Tom Bradley, stretches from North Hollywood to Union Station and connects to light-rail lines that extend to the neighboring cities of Long Beach, Norwalk, and Pasadena, among others. A commuter rail system, Metrolink, has also been added, stretching from nearby Ventura and Simi Valley to San Bernardino, Orange County, and Riverside. The Los Angeles County Metropolitan Transportation Authority system is funded by a half-cent sales tax increase adopted in the mid-1980s, which yields $400 million every month. Although the regional transit system is growing, subway expansion was halted in the 1990s over methane gas concerns, political conflict, and construction and financing problems during the Red Line subway project, which culminated in a massive sinkhole on Hollywood Boulevard. As a result, the original subway plans have been delayed for decades as light-rail systems, dedicated busways, and limited-stop "Rapid" bus routes have become the preferred means of mass transit in L.A.'s expanding series of gridlocked, congested corridors.

Racially restrictive housing covenants

Racially restrictive housing covenants were a major part of housing development and real-estate sales in Los Angeles.
Racially restrictive covenants were court-approved agreements included in title deeds that prohibited the sale of property to people of certain races. The first racially restrictive covenant in Los Angeles dates to 1902 and used the term "non-Caucasians" to bar people of color from dwelling in that home. Other language used in covenants excluded specific ethnic groups, and some allowed "non-whites" to occupy a property only if they were domestic workers. Housing developers, real-estate agencies, and homeowners' associations implemented racially restrictive covenants to create racially and class-segregated neighborhoods, and to secure neighborhoods they considered homogeneous and economically stable. The Janss Investment Company, which built the community of Westwood, included racial restrictions in all of its properties, specifically excluding "any person who is not of the white or the Caucasian race". Examples of communities in Los Angeles built with racial restrictions in their deeds include Thousand Oaks, Palos Verdes, Beverly Hills, Bel Air, Westchester, Panorama City, Westside Village, and Toluca Woods, among others. In 1892, the federal courts ruled that neither state nor city governments could discriminate, but upheld the right of private parties to enter into racially and class-restrictive covenants. In the period between 1900 and 1920, Los Angeles experienced a boom in housing development during which racially restrictive covenants became widespread. By 1939, almost 47% of Los Angeles County residential neighborhoods included racially restrictive covenants. Excluding people of color from many neighborhoods across Los Angeles resulted in the formation of multiracial neighborhoods, which were notably poor and composed of Blacks, Latinos, Asian Americans, Jews, and Italians. Historically multiracial neighborhoods in Los Angeles include Boyle Heights, Watts, Belvedere, and South Los Angeles. Racially restrictive covenants were finally overturned in two landmark cases. In Shelley v. Kraemer (1948), the Supreme Court held that courts could not enforce racially restrictive covenants. In Barrows v. Jackson (1953), the Court ruled that such covenants could not even support suits for damages under the 14th Amendment, stating that "The enforcement of a covenant forbidding use and occupancy of real estate by non-Caucasians, by an action at law in a state court to recover damages from a co-covenantor for a breach of the covenant, is barred by the Fourteenth Amendment of the Federal Constitution." In 1964, California voters approved Proposition 14, which attempted to validate housing discrimination; the proposition was later deemed unconstitutional by the California Supreme Court. While many home deeds in Los Angeles still contain restrictive covenant clauses, they are not legally enforceable. Since its beginning, the city has been divided geographically by ethnicity. By World War II, 95% of Los Angeles housing was off-limits to blacks and Asians. Minorities who had served in World War II or worked in L.A.'s defense industries returned to face increasing patterns of discrimination in housing. More and more, they found themselves excluded from the suburbs and restricted to housing in East or South Los Angeles, Watts, and Compton. Such real-estate practices severely restricted educational and economic opportunities.
Historian Peter Radkowski wrote: By the 1960s, the fair housing conflict of California would evolve into a collision of legislative action, racial backlash, and judicial ruling: the Rumford Act on the floors of the state capitol; Proposition 14 at the ballot box; Mulkey v. Reitman before the Supreme Court of California, and Reitman v. Mulkey before the Supreme Court of the United States. These events explicitly shaped a gubernatorial election in California, and arguably set in motion a sea change in political allegiances and presidential elections. In 1955, William Byron Rumford, the first African-American from Northern California to serve in the California State Legislature, introduced a fair-housing bill. In 1959, the California Legislature passed the California Fair Employment Practices Act, sponsored by Augustus F. Hawkins of Los Angeles. The same year, the state's Unruh Civil Rights Act addressed fair housing but had no teeth: the aggrieved party had to sue to obtain compensation. In 1963, the California Legislature passed, and Governor Pat Brown signed, the Rumford Fair Housing Act, which outlawed restrictive covenants and the refusal to rent or sell housing on the basis of race, ethnicity, gender, marital status, or physical disability. In reaction to the Rumford Act, a well-funded coalition of realtors and landlords immediately began to campaign for a referendum that would amend the state constitution to protect property owners' ability to deny minorities equal access to housing. Known as Proposition 14, it caused a storm of deep and bitter controversy across the state. Radkowski wrote: The debate over Proposition 14 cultivated a whirlwind of information and misunderstanding, marked by angry exchanges on the merits, and running through the entire debate a plague of bitterness, ill feelings, and slurs. On any given day, the effort to overturn the Rumford Act might involve highbrow jurisprudence, righteous indignation, or racial epithet. In many ways, the Rumford Act played as bawdy and violent as the land and mineral grabs of the original California Gold Rush: Rumford received an invitation to a stag dinner party—complete with one hour of "entertainment"—that was sponsored by the Associated Home Builders of the Greater East Bay; while across the state, pamphlets and pickets revealed the ugly fascist undercurrents of support for Proposition 14. While conservatives such as Cardinal McIntyre of Los Angeles argued that blacks were "better off in Los Angeles than anywhere else", blacks knew that they were shut out of the city's prosperity. On May 26, 1963, Dr. Martin Luther King, Jr. told a crowd of 35,000 at Wrigley Field, "We want to be free whether we're in Birmingham or in Los Angeles." In November 1964, California voters passed Proposition 14 by a wide margin. In August 1965, the Watts riots broke out. Lasting six days, they left 32 dead, 1,032 injured, 3,952 arrested, $40 million in damage, and 1,000 buildings damaged or destroyed. According to later reports, the riot was a reaction to a long record of police brutality by the LAPD and other injustices suffered by blacks, including discrimination in jobs, housing, and education. In 1966, the California Supreme Court in Mulkey v. Reitman ruled that Proposition 14 violated the state constitution's provisions for equal protection and due process. In 1967, in Reitman v. Mulkey, the U.S.
Supreme Court affirmed the decision of the California Supreme Court, ruling that Proposition 14 violated the 14th Amendment of the United States Constitution. The federal Civil Rights Act of 1964 also addressed the issue, but made few provisions for enforcement. In 1973, Los Angeles became the first major Western city to elect a black mayor, Tom Bradley.

Economic and demographic changes

The last of the automobile factories shut down in the 1990s; the tire factories and steel mills left earlier. Most of the agricultural and dairy operations that were still prospering in the 1950s have moved to outlying counties, while the furniture industry has relocated to Mexico and other low-wage nations. Aerospace production has dropped significantly since the end of the Cold War or moved to states with better tax conditions, and movie producers sometimes find cheaper places to produce films, television programs, and commercials. However, the film, television, and music industries are still based in L.A., which is home to large numbers of well-paid stars, executives, and technicians. Many studios still operate in Los Angeles, such as CBS Television City at the corner of Fairfax Avenue and Beverly Boulevard and 20th Century Fox in Century City. The manufacture of clothing began on a large scale in the early 20th century. The fashion industry emerged in the 1920s with an emphasis on sportswear and leisure clothing, and expanded after 1945 to second place behind New York. Toyota opened its first overseas office in Hollywood in 1957, selling 257 cars in the U.S. that year. It moved operations to Torrance in 1982 because of easy access to port facilities and LAX. In 2013 it sold 2.2 million vehicles in the U.S. In 2014, it announced it would move 3,000 of its employees to Plano, Texas, near Dallas, to be closer to its American factories. The ports of Los Angeles and Long Beach make up the largest harbor complex in the U.S., handling 44% of all goods imported by cargo container. In 2007, the equivalent of 7.85 million 40-foot shipping containers poured through the ports, with most then moving along the region's highways to massive rail yards and warehouses before heading to the nation's interior. International trade has generated hundreds of thousands of jobs in Southern California. Moving goods is now one of the larger industries in the region, one that helps provide low-cost imports to consumers across the country. The ports are among the region's more valuable economic engines. The overall metropolitan L.A. economy was healthy, and in one five-year boom period (1985 to 1990), it attracted 400,000 working immigrants (mostly from Asia and Mexico) and about 575,000 workers from elsewhere in the U.S. The jobs offered depended largely on educational qualifications. Half of the immigrants from abroad owed their employment to the immigrant economy, with Asian entrepreneurs employing Latino workers. Large-scale economic changes have brought major social changes with them. While unemployment dropped in Los Angeles in the 1990s, the newly created jobs tended to be low-wage jobs filled by recent immigrants; the proportion of poor families in Los Angeles County increased from 36% to 43% during this time. At the same time, immigration from Mexico, Central America, and the rest of Latin America has made Los Angeles a "majority minority" city that will soon be majority Latino. The unemployment rate dropped from 6.9% to 6.8% in 2002, jumped during the recession of 2008, and hovered around 11–12% in 2011.
The desire for residential housing in the downtown area has led to gentrification. Historic commercial buildings have been renovated as condos (while retaining their original exteriors), and many new apartment and condominium towers and complexes are being built. By the end of the 20th century, some of the annexed areas began to feel cut off from the political process of the megalopolis, leading to a particularly strong secession movement in the San Fernando Valley and weaker ones in San Pedro and Hollywood. The referendums to split the city were rejected by voters in November 2002. Many communities in Los Angeles have changed their ethnic character over this period. Until the late 20th century, the population was predominantly white and mostly American-born. South L.A. was mostly white until the 1950s, but then became predominantly black until the 1990s, and is now mainly Latino. While the Latino community within the City of Los Angeles was once centered on the Eastside, it now extends throughout the city. The San Fernando Valley, which represented a bastion of white flight in the 1960s and provided the votes that allowed Sam Yorty to defeat Tom Bradley in his first mayoral run, is now as ethnically diverse as the rest of the city on the other side of the Hollywood Hills. The population of Los Angeles reached more than 100,000 with the 1900 census (Los Angeles Evening Express, October 1, 1900), more than a million in 1930, more than two million in 1960, and more than 3 million in 1990.

Other articles which contain relevant history sections:
- History of Southern California freeways
- Los Angeles Times
- History of the Los Angeles Police Department
- Los Angeles Fire Department History
- History of the San Fernando Valley to 1915
- Port of Los Angeles
- History of Santa Monica
- History of Glendale
- History of Beverly Hills
- History of Long Beach

Articles on specific events in Los Angeles history:
- "The Overland Migration"
- "News by the Goliah"
- "From the Texan Border"
- "Breckinridge to Visit California."
- "Overland Mail—Southern Route."
- "His Nose was Scratched."

- Munro, Pamela, et al. Yaara' Shiraaw'ax 'Eyooshiraaw'a. Now You're Speaking Our Language: Gabrielino/Tongva/Fernandeño. Lulu.com: 2008.
- McCawley, William. 1996. The First Angelinos: The Indians of Los Angeles. Banning, California: Malki Museum Press and Ballena Press Cooperative. pp. 2–7.
- Smith, Gerald A., and James Clifford. 1965. Indian Slave Trade Along the Mojave Trail. San Bernardino, California: San Bernardino County Museum.
- Johnson, Bernice Eastman. 1962. California's Gabrielino Indians. Highland Park, California: Southwest Museum Papers.
- Boscana, Gerónimo. "Chinigchinich: An Historical Account of the Origins, Customs, and Traditions of the Indians of Alta California", in Life in California, trans. Alfred Robinson. Santa Barbara: Peregrine.
- Miller, Bruce. 1991. The Gabrielino. Los Osos, California: Sand River Press.
- Kealhofer, 1991. Cultural Interaction During the Spanish Colonial Period. Ph.D. dissertation, University of Pennsylvania, 1991.
- Estrada, William David. 2008. The Los Angeles Plaza: Sacred and Contested Space. Austin, Texas: University of Texas Press.
- Low, Setha M. 2000. On the Plaza: The Politics of Public Space and Culture. Austin, Texas: University of Texas Press.
- Cruz, Gilberto R. 1988. Let There Be Towns: Spanish Municipal Origins in the American Southwest, 1610–1810. College Station, Texas: Texas A&M University Press.
- Bancroft, Hubert Howe. 1886.
History of California. 7 volumes. San Francisco: History Company.
- Kelsey, Harry. 1976. "A New Look at the Founding of Los Angeles." Historical Society of Southern California Quarterly, 55:4, Winter. pp. 326–339.
- "The founder of the city of Los Angeles". www.turismobailen.es. Retrieved 11 December 2018.
- Ríos-Bustamante, Antonio. Mexican Los Ángeles: A Narrative and Pictorial History, Nuestra Historia Series, Monograph No. 1. (Encino: Floricanto Press, 1992), 50–53. OCLC 228665328.
- Bob Pool, "City of Angels' First Name Still Bedevils Historians." Los Angeles Times (March 26, 2005).
- Layne, James Gregg. 1935. Annals of Los Angeles 1769–1861, Special Publication No. 9. San Francisco: California Historical Society. p. 30.
- Crouch, Dora P., Daniel J. Garr, and Axel I. Mundigo. 1982. Spanish City Planning in North America. Cambridge, Massachusetts: MIT Press.
- Gumprecht, Blake. 1999. The Los Angeles River: Its Life, Death, and Possible Rebirth. Baltimore: Johns Hopkins University Press.
- Estrada, William David. 2005. "Toypurina, Leader of the Tongva People", Oxford Encyclopedia of Latinos and Latinas in the United States, ed. Suzanne Oboler and Deena J. Gonzalez, vol. 4, pp. 242–243. New York: Oxford University Press.
- Mason, William Marvin. 1975. "Fages' Code of Conduct Toward Indians, 1787." Journal of California Anthropology, 2:1, pp. 90–100.
- Forbes, Jack D. 1966. The Tongva of Tujunga to 1801, Archeological Survey Annual Report, appendix 2. Los Angeles: University of California.
- Mason, William Marvin. 1978. "The Garrisons of San Diego Presidio: 1770–1794." Journal of San Diego History, 24, no. 4:411.
- Northrop, Marie E., ed. 1960. "The Los Angeles Padron of 1844 as Copied from the Los Angeles City Archives." Historical Society of Southern California Quarterly, 42, no. 4, December, 360–417.
- Gonzalez, Michael J. 1998. "The Child of the Wilderness Weeps for the Father of Our Country: The Indian and the Politics of Church and State in Provincial Southern California", in Contested Eden: California Before the Gold Rush, ed. Ramón A. Gutiérrez and Richard J. Orsi. Berkeley: University of California Press.
- Iris Higbie Wilson: "Lemuel Carpenter" in The Mountain Men and the Fur Trade of the Far West, LeRoy R. Hafen, ed., The Arthur H. Clark Co., Glendale, California, 1972, pp. 33–40.
- Hubert Howe Bancroft: California Pioneer Register and Index 1542–1848, Regional Publishing Co., Baltimore, Maryland, 1964, p. 82.
- Charles Russell Quinn: History of Downey, The Life Story of a Pioneer Community, and of the Man who Founded it – California Governor John Gately Downey – From Covered Wagon to the Space Shuttle, Elena Quinn, Downey, California, 1973, pp. 12, 20–22, 32, 104–105, et al.
- John Bidwell: "First-Person Narratives of California's Early Years, 1849–1900", Library of Congress Historical Collections, "American Memory": John Bidwell (Pioneer of '41): Life in California Before the Gold Discovery, from the collection "California As I Saw It."
- Gaughan, Tim (June 19, 2009). "Where the valley met the vine: The Mexican period". Napa Valley Register. Napa, California: Lee Enterprises, Inc. Retrieved September 30, 2011.
- Foucrier, Annick. Op. cit. p. 53.
- McGroarty, John Steven. History of Los Angeles County. The American Historical Society. Chicago and New York, 1923. p. 31.
- Ryan, Mary P. 2006. "A Durable Center of Urban Space: The Los Angeles Plaza."
Urban History, 33, part 3, December, p. 464.
- Robinson, William Wilcox. 1966. Maps of Los Angeles: From Ord's Survey of 1849 to the Boom of the Eighties. Los Angeles: Dawson's Book Shop.
- Robert Glass Cleland, A History of California: The American Period, The Macmillan Company, 1922. Chapter XXI.
- Robinson, William Wilcox. 1981. Los Angeles from the Days of the Pueblo: A Brief History and Guide to the Plaza Area. San Francisco: California Historical Society.
- Charles Dwight Willard, The Herald's History of Los Angeles City (Los Angeles: Kingsley-Barnes & Neuner Co., 1901), 280.
- Eric Monkkonen, "Western Homicide: The Case of Los Angeles, 1830–1870", Pacific Historical Review, 74 (Nov. 2005), 609.
- Villa, Raúl Romero. 2002. Barrio Logos: Place and Space in Urban Chicano Culture and Literature. Austin, Texas: University of Texas Press.
- Robinson, William Wilcox. 1952. The Indians of Los Angeles: Story of a Liquidation of a People. Los Angeles: Glen Dawson Press.
- Cook, Sherburne F. 1971. "The Aboriginal Population of Upper California." In The California Indians: A Sourcebook, ed. R.F. Heizer and M.A. Whipple, 2nd ed. Berkeley: University of California Press.
- Mathes, Valerie Sherer. 1997. Helen Hunt Jackson and Her Indian Reform Legacy. Norman, Oklahoma: University of Oklahoma Press.
- Garrison, Jessica. 2006. "Battle over a Casino Divides Gabrielino Indians." Los Angeles Times, November 26.
- Guinn, James Miller (1915). A History of California and an Extended History of Los Angeles and Environs: Also Containing Biographies of Well Known Citizens of the Past and Present (Public domain ed.). Historic Record Company. p. 407.
- Timothy Tzeng, "Eastern Promises: The Role of Eastern Capital in the Development of Los Angeles, 1900–1920," California History (2011) 88#2 pp. 32–53.
- Jess Gilbert and Kevin Wehr, "Dairy Industrialization in the First Place: Urbanization, Immigration, and Political Economy in Los Angeles County, 1920–1970," Rural Sociology (2003) 68#4 pp. 467–49.
- Nathan Masters (January 17, 2013). "Lost Train Depots of Los Angeles". Socal Focus. KCET. Retrieved July 2014.
- Gregory Lee Thompson (1993). The Passenger Train in the Motor Age: California's Rail and Bus Industries, 1910–1941. pp. 14–15. ISBN 9780814206096.
- Harrington, Marie. "A Golden Spike: The Beginning". SCV History. Retrieved April 22, 2014.
- Queenan, Charles F. (May 10, 1992). "'Great Free Harbor Fight': At Stake Was the Port Site for the Growing City of L.A." Los Angeles Times.
- "Oil and Gas Statistics: 2007 Annual Report" (PDF). California Department of Conservation. December 31, 2007. Retrieved August 25, 2009.
- "Dynamite Bomb Fails to Cripple Llewellyn Plant," Los Angeles Times, December 26, 1910, page I-1.
- Los Angeles Times. 1913. "Rioters Must Face the Law." December 28.
- Vicki L. Ruiz and Virginia Sánchez Korrol, eds. (2006). Latinas in the United States: A Historical Encyclopedia. Indiana University Press. pp. 408–10. ISBN 0253111692.
- McPhee, John. 1989. "Los Angeles Against the Mountains." In The Control of Nature. New York: Farrar Straus Giroux.
- Goodridge, James D. 1982. Historic Rainstorms in California: A Study of 1,000-year Rainfalls. Sacramento: State of California Department of Natural Resources.
- Gregory Paul Williams, The Story of Hollywood: An Illustrated History.
(2006).
- "City Swimming Pools Opened to All Races", Los Angeles Times, June 26, 1931, page A1.
- Municipal Secession Fiscal Analysis Scoping Study, www.valleyvote.net; Annexation and Detachment Map (PDF), lacity.org.
- Rayner, Richard. 2009. A Bright and Guilty Place: Murder, Corruption, and L.A.'s Scandalous Coming of Age. New York: Doubleday.
- Domanick, Joe (January 25, 1998). "Public Corruption, L.A.-Style: Where Have the Notorious Gone?". Los Angeles Times. p. M-6.
- Sitton, Tom. 2005. Los Angeles Transformed: Fletcher Bowron's Urban Reform Revival, 1938–1953. Albuquerque: University of New Mexico Press.
- Davis, M. 1999. "Fortress Los Angeles: The militarization of public space." In M. Sorkin, ed., Variations on a Theme Park: The New American City and the End of Public Space. pp. 154–180. New York: Hill and Wang. Online.
- William A. Schoneberger, Ethel Pattison, and Lee Nichols, Los Angeles International Airport (2009).
- California and the Second World War: Los Angeles Metropolitan Area during World War II.
- "Historic Resources Associated with African Americans in Los Angeles". National Park Service.
- Mike Sonksen (September 13, 2017). "The History of South Central Los Angeles and Its Struggle with Gentrification". KCET.
- Dan Kopf (January 28, 2016). "The Great Migration: The African American Exodus from The South". Priceonomics.
- The Second Great Migration.
- Cameron McCoy (December 5, 2012). "L.A. City Limits: African American Los Angeles from the Great Depression to the Present by Josh Sides (2003)". NotEvenPast.
- "Child killing sparks action against Los Angeles gangs." The Christian Science Monitor. September 25, 1995. Volume 87, Issue 210. Page 4.
- Pelisek, Christine. "Avenues of Death." LA Weekly. July 14, 2005.
- Sanchez, George J. "Why Are Multiracial Communities so Dangerous? A Comparative Look at Hawai'i; Cape Town, South Africa; and Boyle Heights, California." Pacific Historical Review, vol. 86, no. 1, 2017, p. 6.
- Janss Investment Company v. Walden, 196 Cal. 753 (1925).
- Avila, Eric. Popular Culture in the Age of White Flight: Fear and Fantasy in Suburban Los Angeles. University of California Press, 2004. p. 40.
- Deverell, William, and Greg Hise. A Companion to Los Angeles. Wiley-Blackwell, 2010. p. 57.
- Redford, Laura. "The Intertwined History of Class and Race Segregation in Los Angeles." Journal of Planning History, vol. 16, no. 4, 2017, pp. 305–322.
- Sides, Josh. L.A. City Limits. University of California Press, 2003. p. 95.
- Barrows v. Jackson, 346 U.S. 249 (1953).
- Radkowski, Peter P. F. III. 2006. "Managing the Invisible Hand of the California Housing Market, 1942–1967." Accessed online 11.14.09: http://www.law.berkeley.edu/files/radkowski_paper.pdf Archived 2012-02-29 at the Wayback Machine.
- Jeffries, Vincent, and H. Edward Ransford. 1969. "Interracial Social Contact and Middle-Class White Reaction to the Watts Riot". Social Problems 16.3: 312–324.
- Civil Rights Act of 1964.
- Bauman, Robert. 2007. "The Black Power and Chicano Movements in the Poverty Wars in Los Angeles", Journal of Urban History, vol. 33, no. 2, pp. 277–295.
- Bauman, Robert. 2008. From Watts to East L.A.: Race and the War on Poverty in Los Angeles. Norman, Oklahoma: University of Oklahoma Press.
- Sides, Josh. 2003. L.A. City Limits: African American Los Angeles from the Great Depression to the Present. Berkeley: University of California Press.
- Sarah Williams and Elizabeth Currid-Halkett, "The Emergence of Los Angeles as a Fashion Hub: A Comparative Spatial Analysis of the New York and Los Angeles Fashion Industries," Urban Studies (2011) 48#14 pp 3043-3066. - Fujita, Akiko (May 16, 2014). "Toyota built Torrance into the second-largest home of Japanese Americans. Now, it's leaving". The World. Public Radio International. Retrieved 4 October 2016. - Hirsch, Jerry (April 28, 2014). "Toyota to uproot from California, move to 'macho' Texas". Los Angeles Times. Retrieved 4 October 2016. - Mark Ellis and Richard Wright, "The Industrial Division of Labor Among Immigrants and Internal Migrants to the Los Angeles Economy," International Migration Review (1999) 33#1 pp 26-54. - Ivan Light et al. "Immigrant Incorporation in the Garment Industry of Los Angeles," International Migration Review (1999) 33#1 pp 5-25. - Gibbs Smith, 2006. Picturing Los Angeles, ISBN 978-1-58685-733-2 - California – Race and Hispanic Origin for Selected Cities and Other Places: Earliest Census to 1990 - John Buntin, L.A. Noir, 2009 ISBN 978-0-307-35207-1. - Paris Inn history - This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Los Angeles". Encyclopædia Britannica. 17 (11th ed.). Cambridge University Press. pp. 12–14. - Spanish and Mexican history Source: University of Southern California Project: Los Angeles: Past, Present, and Future, 1996. Adopted by the El Pueblo de Los Angeles Historical Monument. Some of the best history appears in the appropriate chapters of the multivolume history of California by Kevin Starr, including Americans and the California Dream, 1850–1915 (1973), focuses on novelists; Inventing the Dream: California through the Progressive Era (1986); Material Dreams: Southern California through the 1920s(1991). Guides, architecture, geography - Herman, Robert D. Downtown Los Angeles: A Walking Guide (2004) 270 pages - Fodor. Los Angeles: plus Disneyland & Orange County ed. by Maria Teresa Burwell, (2007) 368 pages excerpt and text search - Mahle, Karin, and Martin Nicholas Kunz. Los Angeles: Architecture & Design (2004) 191pp - Nelson, Howard J. The Los Angeles Metropolis. (1983). 344 pp. geography - Pitt, Leonard and Dale Pitt. Los Angeles A to Z: An Encyclopedia of the City and County. (1997). 605 pp. short articles by experts excerpts and text search - Abu-Lughod, Janet L. New York, Chicago, Los Angeles: America's Global Cities (1999) online edition - Dear, Michael J., H. Eric Schockman, and Greg Hise, eds. Rethinking Los Angeles (1996) interprets LA in terms of "postmodern urbanism" model. It consists of several fundamental characteristics: a global-local connection; a ubiquitous social polarization; and a reterritorialization of the urban process in which hinterland organizes the center (in direct contradiction to the Chicago School model of cities). The resultant urbanism is distinguished by a centerless urban form termed "keno capitalism." - Fine, David. Imagining Los Angeles: A City in Fiction. (2000). 293 pp. - Flanigan, James. Smile Southern California, You're the Center of the Universe: The Economy and People of a Global Region (2009) excerpt and text search - Fulton, William. The Reluctant Metropolis: The Politics of Urban Growth in Los Angeles. (1997). 395 pp. - Gottlieb, Robert. Reinventing Los Angeles: Nature and Community in the Global City (2007) excerpt and text search - Scott, Allen J. and Soja, Edward W., eds. 
The City: Los Angeles and Urban Theory at the End of the Twentieth Century. (1996). 483 pp. - Abu-Lughod, Janet L. New York, Chicago, Los Angeles: America's Global Cities (U of Minnesota Press, 1999), Compares the three cities in terms of geography, economics and race from 1800 to 1990 - Bills, Emily, "Connecting Lines: L.A.'s Telephone History and the Binding of the Region", Southern California Quarterly, 91 (Spring 2009), 27–67. - Bollens, John C. and Geyer, Grant B. Yorty: Politics of a Constant Candidate. (1973). 245 pp. Mayor 1961–73 - Brook, Vincent. Land of Smoke and Mirrors: A Cultural History of Los Angeles (Rutgers University Press; 2013) 301 pages - Emerson, Charles. 1913: In Search of the World Before the Great War (2013) compares Los Angeles to 20 major world cities; pp 194–205. - Fogelson, Robert M. The Fragmented Metropolis: Los Angeles, 1850–1930 (1967), focus on planning, infrastructure, water, and business - Friedricks, William. Henry E. Huntington and the Creation of Southern California (1992), on Henry Edwards Huntington (1850–1927), railroad executive and collector, who helped build LA and Southern California through the Southern Pacific railroad and also trolleys. - Garcia, Matt. A World of Its Own: Race, Labor, and Citrus in the Making of Greater Los Angeles, 1900–1970. (2001). 330 pp. - Hart, Jack R. The Information Empire: The Rise of the Los Angeles Times and The Times Mirror Corporation. (1981). 410 pp. - Jaher, Frederic Cople. The Urban Establishment: Upper Strata in Boston, New York, Charleston, Chicago, and Los Angeles. (1982). 777 pp. - Klein, Norman M. and Schiesl, Martin J., eds. 20th Century Los Angeles: Power, Promotion, and Social Conflict. (1990). 240 pp. - Laslett, John H.M. Sunshine Was Never Enough: Los Angeles Workers, 1880–2010 (2012) - Lavender, David. Los Angeles, Two Hundred Years. (1980). 240 pp. heavily illustrated popular history - Leader, Leonard. Los Angeles and the Great Depression. (1991). 344 pp. - McNamara, Kevin R., ed. The Cambridge companion to the literature of Los Angeles (Cambridge University Press, 2010) - Mullins, William H. The Depression and the Urban West Coast, 1929–1933: Los Angeles, San Francisco, Seattle, and Portland. (1991). 176 pp. - Nicolaides, Becky M. My Blue Heaven: Life and Politics in the Working-Class Suburbs of Los Angeles, 1920–1965. (2002). 412 pp. - O'Flaherty, Joseph S. An End and a Beginning: The South Coast and Los Angeles, 1850–1887. (1972). 222 pp. - O'Flaherty, Joseph S. Those Powerful Years: The South Coast and Los Angeles, 1887–1917 (1978). 356 pp. - Payne, J. Gregory and Ratzan, Scott C. Tom Bradley: The Impossible Dream. (1986). 368 pp., mayor 1973 to 1993 and a leading African American - Raftery, Judith Rosenberg. Land of Fair Promise: Politics and Reform in Los Angeles Schools, 1885–1941. (1992). 284 pp. - Rolle, Andrew. Los Angeles: From Pueblo to City of the Future. (2d. ed. 1995). 226 pp.; the only historical survey by a scholar - Sitton, Tom and Deverell, William, eds. Metropolis in the Making: Los Angeles in the 1920s. (2001). 371 pp. - Verge, Arthur C. Paradise Transformed: Los Angeles during the Second World War. (1993). 177 pp. - Verge, Arthur C. "The Impact of the Second World War on Los Angeles" Pacific Historical Review 1994 63(3): 289-314. 0030–8684 in JSTOR Planning, environment and autos - Bottles, Scott L. Los Angeles and the Automobile: The Making of the Modern City. (1987). 302 pp. - Davis, Margaret Leslie. 
Rivers in the Desert: William Mulholland and the Inventing of Los Angeles. (1993). 303 pp. - Davis, Mike. City of Quartz: Excavating the Future in Los Angeles. (1990). 462 pp - Desfor, Gene, and Roger Keil. Nature And The City: Making Environmental Policy In Toronto And Los Angeles (2004) 290pp - Deverell, William, and Greg Hise. Land of Sunshine: An Environmental History of Metropolitan Los Angeles (2006) 350 pages excerpt and text search - Dewey, Scott Hamilton. Don't Breathe the Air: Air Pollution and U.S. Environmental Politics, 1945–1970. (2000). 321pp., focuses on LA smog - Hise, Greg. Magnetic Los Angeles: Planning the Twentieth-Century Metropolis. (1997). 294 pp. - Jacobs, Chip, and William Kelly. Smogtown: The Lung-Burning History of Pollution in Los Angeles (2008) - Keane, James Thomas. Fritz B. Burns and the Development of Los Angeles: The Biography of a Community Developer and Philanthropist. (2001). 287 pp. - Longstreth, Richard. The Drive-In, the Supermarket, and the Transformation of Commercial Space in Los Angeles, 1914–1941. (1999). 248 pp. - Longstreth, Richard. City Center to Regional Mall: Architecture, the Automobile, and Retailing in Los Angeles, 1920–1950. (1997). 504 pp. - Mulholland, Catherine. William Mulholland and the Rise of Los Angeles. (2000). 411 pp. online edition - Post, Robert C. Street Railways and the Growth of Los Angeles (1989). 170pp. - Rajan, Sudhir Chella. The Enigma of Automobility: Democratic Politics and Pollution Control. (1996). 202 pp . - Sloane, David C. Planning Los Angeles (2012) - Balio, Tino. Grand Design: Hollywood as a Modern Business Enterprise, 1930–1939. (1993). 483 pp. - May, Lary. The Big Tomorrow: Hollywood and the Politics of the American Way (2000) - Schatz, Thomas. The Genius of the System: Hollywood Filmmaking in the Studio Era. (1988). 492 pp. - * Smith, Catherine Parsons. Making Music in Los Angeles: Transforming the Popular. University of California Press, 2007. (A social history covering c. 1887–1940) - Vaughn, Stephen. Ronald Reagan in Hollywood: Movies and Politics. (1994). 359 pp. - Wells, Walter. Tycoons and Locusts: A Regional Look at Hollywood Fiction of the 1930s (1973) online edition Ethnicity, race and religion - Abelmann, Nancy and Lie, John. Blue Dreams: Korean Americans and the Los Angeles Riots. (1995). 272 pp. - Acuña, Rodolfo F. Anything but Mexican: Chicanos in Contemporary Los Angeles. (1996). 328 pp. - Allen, James P. and Turner, Eugene. The Ethnic Quilt: Population Diversity in Southern California. (1997). 282 pp. - Arnold, Bruce Makoto. "Pacific Childhood Dreams and Desires in the Rafu: Multiple Transnational Modernisms and the Los Angeles Nisei, 1918-1942". - Bedolla, Lisa García. Fluid borders: Latino power, identity, and politics in Los Angeles (2005) 278 pages; excerpt and text search - Cannon, Lou. Official Negligence: How Rodney King and the Riots Changed Los Angeles and the LAPD. (1997). 698 pp. online edition - Degraaf, Lawrence B. "The City of Black Angels: Emergence of the Los Angeles Ghetto, 1890–1930". Pacific Historical Review 1970 39(3): 323–352. in JSTOR - Engh, Michael E. "'A Multiplicity and Diversity of Faiths': Religion's Impact on Los Angeles and the Urban West, 1890–1940", Western Historical Quarterly 1997 28(4): 462–492. 0043-3810 in JSTOR - Engh, Michael E. Frontier Faiths: Church, Temple, and Synagogue in Los Angeles, 1846–1888. (1992). 267 pp. - Greenwood, Roberta S., ed. Down by the Station: Los Angeles Chinatown, 1880–1933. (1996). 207 pp. - Griswold del Castillo, Richard. 
The Los Angeles Barrio, 1850–1890: A Social History. (1979). 217 pp. - Gutierrez, Ramon A., and Patricia Zavella, eds. Mexicans in California: Transformations and Challenges essays by leading scholars (2009) - Hamilton, Nora and Chinchilla, Norma Stoltz. Seeking Community in a Global City: Guatemalans and Salvadorans in Los Angeles. (2001). 296 pp. - Hayashi, Brian Masaru. "For the Sake of Our Japanese Brethren": Assimilation, Nationalism, and Protestantism among the Japanese of Los Angeles, 1895–1942 (1995). 217 pp. - Horne, Gerald. Fire This Time: The Watts Uprising and the 1960s. (1995). 424 pp. - Keil, Roger. Los Angeles: Globalization, Urbanization, and Social Struggles. (1998). 295 pp. - Leclerc, Gustavo; Villa, Raúl; and Dear, Michael, eds. Urban Latino Cultures: La Vida Latina en L.A. (1999). 214 pp. - Loza, Steven. Barrio Rhythm: Mexican American Music in Los Angeles. (1993). 320 pp. - Min, Pyong Gap. Caught in the Middle: Korean Communities in New York and Los Angeles. (1996). 260 pp. - Modell, John. The Economics and Politics of Racial Accommodation: The Japanese of Los Angeles, 1900–1942. (1977). 201 pp. - Monroy, Douglas. Rebirth: Mexican Los Angeles from the Great Migration to the Great Depression. (1999). 322 pp. - Moore, Deborah Dash. To the Golden Cities: Pursuing the American Jewish Dream in Miami and L.A. (1994). 358 pp. - Oberschall, Anthony. "The Los Angeles Riot of August 1965", Social Problems, Vol. 15, No. 3 (Winter, 1968), pp. 322–341 in JSTOR, black riots in Watts - Ong, Paul, ed. The New Asian Immigration in Los Angeles and Global Restructuring. (1994). 330 pp. - Ríos-Bustamante, Antonio and Castillo, Pedro. An Illustrated History of Mexican Los Angeles, 1781–1985. (1986). 196 pp. - Saito, Leland T. Race and Politics: Asian Americans, Latinos, and Whites in a Los Angeles Suburb. (1998). 250 pp. - Sánchez, George J. Becoming Mexican American: Ethnicity, Culture, and Identity in Chicano Los Angeles, 1900–1945. (1993). 367 pp. online edition - Sides, Josh. L. A. City Limits: African American Los Angeles from the Great Depression to the Present (2003) online edition - Valle, Victor M. and Torres, Rodolfo D. Latino Metropolis. (2000). 249 pp. - Waldinger, Roger and Bozorgmehr, Mehdi, eds. Ethnic Los Angeles. (1996). 497 pp. studies by sociologists - Weber, Francis J. Magnificat: The Life and Times of Timothy Cardinal Manning. (1999). 729 pp. The Catholic archbishop from 1970 to 1985. - Weber, Francis J. His Eminence of Los Angeles: James Francis Cardinal McIntyre. (1997). 707 pp. Catholic archbishop from 1948 to 1970. - Weber, Francis J. Century of Fulfillment: The Roman Catholic Church in Southern California, 1840–1947. (1990). 536 pp. Collections of primary sources - Caughey, John and LaRee Caughey, eds. Los Angeles: Biography of a City. (1976). 510 pp. short excerpts from primary and secondary sources - Diehl, Digby, ed. Front Page: 100 Years of the Los Angeles Times, 1881–1981. (1981). 287 pp. - Rodríguez, Luis. Always Running: La Vida Loca: Gang Days in L.A. 
(1993); autobiographical novel online edition - Violence in the City—An End or a Beginning?, A Report by the Governor's Commission on the Los Angeles Riots, 1965 Official Report online, report on 1965 black riot in Watts; called the "McCone Report" after its chairman |Wikimedia Commons has media related to History of Los Angeles.| - Los Angeles in the 1900s, a collection of newspaper articles and illustrations from 1900 through 1909 - Historical archives of the Los Angeles Fire Department - LA as Subject (KCET.org) - 1947 project about a range of Los Angeles history - City streets that existed in 1903–4 but may exist no longer - Changing Times: Los Angeles in Photographs, UCLA Library Digital Collections
EMIT (Earth Surface Mineral Dust Source Investigation) was built to help scientists understand how dust affects climate. It can also pinpoint emissions of methane, a potent greenhouse gas.

NASA’s Earth Surface Mineral Dust Source Investigation (EMIT) mission is mapping the prevalence of key minerals in the planet’s dust-producing deserts. This is crucial information that will help advance our understanding of airborne dust’s effects on climate. However, EMIT has demonstrated another critical capability: detecting the presence of methane. According to the U.S. Environmental Protection Agency (EPA), methane is more than 25 times as potent as carbon dioxide at trapping heat in the atmosphere. EMIT was installed on the International Space Station (ISS) in July. In the data it has collected since, the science team has identified more than 50 “super-emitters” in Central Asia, the Middle East, and the Southwestern United States. Super-emitters are facilities, equipment, and other infrastructure that emit methane at exceptionally high rates. They are typically in the fossil fuel, waste, or agriculture sectors. “Reining in methane emissions is key to limiting global warming. This exciting new development will not only help researchers better pinpoint where methane leaks are coming from, but also provide insight on how they can be addressed – quickly,” said NASA Administrator Bill Nelson. “The International Space Station and NASA’s more than two dozen satellites and instruments in space have long been invaluable in determining changes to the Earth’s climate. EMIT is proving to be a critical tool in our toolbox to measure this potent greenhouse gas – and stop it at the source.” Methane absorbs infrared light in a unique pattern – called a spectral fingerprint – that EMIT’s imaging spectrometer can discern with great precision and accuracy. The instrument can also measure carbon dioxide. The new observations stem from the broad coverage of the planet afforded by the space station’s orbit, as well as from EMIT’s ability to scan swaths of Earth’s surface dozens of miles wide while resolving areas as small as a soccer field. “These results are exceptional, and they demonstrate the value of pairing global-scale perspective with the resolution required to identify methane point sources, down to the facility scale,” said David Thompson. “It’s a unique capability that will raise the bar on efforts to attribute methane sources and mitigate emissions from human activities.” Thompson is EMIT’s instrument scientist and a senior research scientist at NASA’s Jet Propulsion Laboratory (JPL) in Southern California, which manages the mission. Relative to carbon dioxide, methane makes up a fraction of human-caused greenhouse-gas emissions, but it’s estimated to be 80 times more effective, ton for ton, at trapping heat in the atmosphere in the 20 years after release. Moreover, where carbon dioxide lingers for centuries, methane persists for about a decade, meaning that if emissions are reduced, the atmosphere will respond in a similar timeframe, leading to slower near-term warming. Identifying methane point sources can be a key step in the process. With knowledge of the locations of big emitters, operators of facilities, equipment, and infrastructure giving off the gas can quickly act to limit emissions. EMIT’s methane observations came as scientists verified the accuracy of the imaging spectrometer’s mineral data.
Over its mission, EMIT will collect measurements of surface minerals in arid regions of Africa, Asia, North and South America, and Australia. The data will help researchers better understand airborne dust particles’ role in heating and cooling Earth’s atmosphere and surface. “We have been eager to see how EMIT’s mineral data will improve climate modeling,” said Kate Calvin, NASA’s chief scientist and senior climate advisor. “This additional methane-detecting capability offers a remarkable opportunity to measure and monitor greenhouse gases that contribute to climate change.”

Detecting Methane Plumes
The mission’s study area coincides with known methane hotspots around the world, enabling researchers to look for the gas in those regions to test the capability of the imaging spectrometer. “Some of the plumes EMIT detected are among the largest ever seen – unlike anything that has ever been observed from space,” said Andrew Thorpe, a research technologist at JPL leading the EMIT methane effort. “What we’ve found in just a short time already exceeds our expectations.” For example, the instrument detected a plume about 2 miles (3.3 kilometers) long southeast of Carlsbad, New Mexico, in the Permian Basin. One of the largest oilfields in the world, the Permian spans parts of southeastern New Mexico and western Texas. In Turkmenistan, EMIT identified 12 plumes from oil and gas infrastructure east of the Caspian Sea port city of Hazar. Blowing to the west, some plumes stretch more than 20 miles (32 kilometers). The team also identified a methane plume south of Tehran, Iran, at least 3 miles (4.8 kilometers) long, from a major waste-processing complex. Methane is a byproduct of decomposition, and landfills can be a major source. Scientists estimate flow rates of about 40,300 pounds (18,300 kilograms) per hour at the Permian site, 111,000 pounds (50,400 kilograms) per hour in total for the Turkmenistan sources, and 18,700 pounds (8,500 kilograms) per hour at the Iran site. The Turkmenistan sources together have a similar flow rate to the 2015 Aliso Canyon gas leak, which exceeded 110,000 pounds (50,000 kilograms) per hour at times. The Los Angeles-area disaster was among the largest methane releases in U.S. history.

Who are the biggest methane emitters?
China, the United States, Russia, India, Brazil, Indonesia, Nigeria, and Mexico are estimated to be responsible for nearly half of all anthropogenic methane emissions. The major methane emission sources for these countries vary greatly. For example, a key source of methane emissions in China is coal production, whereas Russia emits most of its methane from natural gas and oil systems. The largest sources of methane emissions from human activities in the United States are oil and gas systems, livestock enteric fermentation, and landfills.

With wide, repeated coverage from its vantage point on the space station, EMIT will potentially find hundreds of super-emitters – some of them previously spotted through air-, space-, or ground-based measurement, and others that were unknown. “As it continues to survey the planet, EMIT will observe places in which no one thought to look for greenhouse-gas emitters before, and it will find plumes that no one expects,” said Robert Green, EMIT’s principal investigator at JPL. EMIT is the first of a new class of spaceborne imaging spectrometers to study Earth. One example is Carbon Plume Mapper (CPM), an instrument in development at JPL that’s designed to detect methane and carbon dioxide.
JPL is working with a nonprofit, Carbon Mapper, along with other partners, to launch two satellites equipped with CPM in late 2023.

More About the Mission
EMIT was selected from the Earth Venture Instrument-4 solicitation under the Earth Science Division of NASA’s Science Mission Directorate and was developed at NASA’s Jet Propulsion Laboratory (JPL), which is managed for the agency by the California Institute of Technology (Caltech) in Pasadena, California. It launched aboard a SpaceX Dragon resupply spacecraft from NASA’s Kennedy Space Center in Florida on July 14, 2022. The instrument’s data will be delivered to the NASA Land Processes Distributed Active Archive Center (DAAC) for use by other researchers and the public. The International Space Station hosts seven instruments for NASA Earth Science that are providing novel information for understanding our changing planet.

Reader comments:

Can the methane be harvested and used as fuel? If we’re stuck with the climatic effects of the methane, can we at least benefit from its stored energy?

Not surprisingly, there are some contradictions in the article. First off, it claims, “According to the U.S. … EPA, methane is more than 25 times as potent as carbon dioxide at trapping heat in the atmosphere.” This is generally accepted as being true over the lifetime of the methane, which degrades into CO2 after about a decade. However, it then states, inaccurately, “… it’s estimated to be 80 times more effective, ton for ton, at trapping heat in the atmosphere in the 20 years after release.” However, most egregiously, it quotes NASA Administrator Bill Nelson saying, “Reining in methane emissions is key to limiting global warming.” The concentration of CO2 in the atmosphere is currently above 420 ppmv. Methane is currently less than 2 ppmv, which is roughly equivalent to 50 ppmv of CO2. The EPA websites I’ve found do not provide an estimate of the annual anthropogenic methane emissions. I did find a graph at https://www.epa.gov/climate-indicators/climate-change-indicators-global-greenhouse-gas-emissions that shows total methane emissions in CO2 equivalence (c. 2015) to be less than 20% of the total of CO2 and CH4. (It has been declining for at least 30 years!) Approximately half of the methane is natural and not amenable to reduction by humans. We would be doing well to decrease the anthropogenic emissions by half. Thus, an expensive, concerted effort to reduce anthropogenic methane (CH4) might reduce the CO2 equivalence by 5%. I would hardly consider that “key” to limiting global warming. The impact is almost lost in the noise.
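The commenter’s equivalence figure can be reproduced with a line of arithmetic. A minimal sketch in Python (the 25x factor is the EPA figure quoted in the article; multiplying ppmv by a mass-based GWP is the commenter’s rough shortcut, not a rigorous conversion):

```python
# Reproduce the commenter's back-of-the-envelope CO2-equivalence arithmetic.
# Caveat: GWP is defined per ton of gas, not per ppmv, so scaling ppmv by GWP
# is the commenter's rough shortcut rather than a rigorous conversion.
GWP_CH4 = 25      # "more than 25 times as potent" (the EPA figure quoted above)
CO2_PPMV = 420    # atmospheric CO2 concentration cited in the comment
CH4_PPMV = 2      # atmospheric CH4 concentration cited in the comment

ch4_co2e = CH4_PPMV * GWP_CH4             # ~50 ppmv CO2-equivalent
share = ch4_co2e / (CO2_PPMV + ch4_co2e)  # CH4's share of the combined total
print(f"CH4 ~ {ch4_co2e} ppmv CO2e, ~{share:.0%} of CO2 + CH4 combined")
```

This yields roughly 50 ppmv CO2-equivalent, about 11% of the combined total, which is consistent with the under-20% emissions share the commenter cites.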
Economics Teacher Resources

Wants and needs are the basis of most economic exchanges. This lecture-based video covers the value of money, monetary exchange, the production-consumption economy, and exchange rates. Note: The video is not very engaging.

Delve into the concept of economic growth with your class members, including why economic growth is important, what causes it, and how countries can encourage it.

Do the economic benefits of major sporting events such as the Olympics or the World Cup outweigh the expected costs? Using fundamental economic terms, discover the explicit and implicit costs and benefits for countries that host these worldwide competitions.

The Economy in the 1980s: Examine economic policies put into place during the Reagan administration. Pupils read an informational passage about the 1980s and the impact of its economic policies. They then respond to seven questions about the text. Consider assigning an additional reading task or requiring learners to mark the text to increase engagement in the reading.

High schoolers compare and contrast the world's major economic systems. They also categorize the economic systems in terms of ownership and distribution.

Students examine the visual aids of this activity to study the costs and benefits of decisions about diet and exercise. They investigate human choice as it affects behavior and, in turn, economic outcomes and consequences.

High schoolers engage in a study of the 1929 stock market crash. They examine the trends of the market at the time and discuss the indicators in small classroom groups, then suggest how the crash might have been avoided.

Students explore the root causes of inflation. In this economics instructional activity, students examine data about Gross Domestic Product (GDP) that is included in the instructional activity. Students also discuss inflation and unemployment statistics.

Students examine the functions of the Federal Reserve. In this economics lesson plan, students discover the history and duties of the Federal Reserve as they listen to their instructor lecture on the topic. Students may complete an online interactive quiz at the end of the lesson plan.

Students define important vocabulary related to business. In this algebra lesson, students identify the current and historical growth of US gross domestic product. They speculate about future economic conditions and what would cause them to improve or worsen.

Eighth graders explore a unit on basic economic principles. On the computer, they demonstrate how to track and simulate the purchase of stocks on the stock market.

Young scholars, after reading Chapter 1 of "Latino Economics in the United States: Job Diversity," write an essay that compares the cultural as well as the historical factors (experiences with jobs, discrimination, education, etc.) of the three dominant Latino groups that directly affect their current economic positions in this country.

Provide your class with the definitions of several key economic concepts related to the Federal Reserve and macroeconomics. Then engage them in a discussion using the new vocabulary in the context of factual economic data analysis.

Twelfth graders collect data on the leading economic indicators over the last six months and create graphs plotting the data.
They analyze and evaluate the data collected in order to predict economic trends for the next six-month period.

What indicators do economists use to create an educated economic forecast? Young finance analysts prepare a professional report to describe their predictions to a client using linked online resources. They complete the assignment, which is outlined in four steps and includes researching the GNP and inflation rate, explaining trends of the last year, creating a chart of these trend indicators, and synthesizing the information in a forecast report. Consider providing some guidance or an introduction to the many resource links to get learners started without becoming overwhelmed. Check all links, as some may not work.

Students work together to define key terms related to economics. They rotate between posters as they discover new terms. They discuss how economies function.

Upper graders examine a series of graphs that show economic data related to the growth of GDP. They use the charts and the information provided in lecture to respond to several discussion questions that require critical thinking and data analysis to answer. Pupils learn the fundamental concepts of economics as they relate to government.

High schoolers retrieve up-to-date key economic statistics, which provide valuable hints about the state of the future economy. They access websites embedded in this plan, which enable them to answer economic questions.
Research Methods in Psychology
Data Analysis and Interpretation: Part II. Tests of Statistical Significance and the Analysis Story

Null Hypothesis Significance Testing (NHST)
Null hypothesis testing is used to determine whether mean differences among groups in an experiment are greater than the differences that are expected simply because of error variation (chance).

Null Hypothesis Significance Testing (NHST)
The first step in null hypothesis testing is to assume that the groups do not differ — that is, that the independent variable did not have an effect (i.e., the null hypothesis, H0). Probability theory is used to estimate the likelihood of the experiment’s observed outcome, assuming the null hypothesis is true.

NHST (continued)
A statistically significant outcome is one that has a small likelihood of occurring if the null hypothesis is true. We reject the null hypothesis and conclude that the independent variable did have an effect on the dependent variable. A statistically significant outcome indicates that the difference between means obtained in an experiment is larger than would be expected if error variation alone (i.e., chance) were responsible for the outcome.

NHST (continued)
How small does the probability have to be in order to decide that a finding is statistically significant? Consensus among members of the scientific community is that outcomes associated with probabilities of less than 5 times out of 100 (p < .05) if the null hypothesis were true are judged to be statistically significant. This probability is called alpha (α), or the level of significance.

NHST (continued)
What does a statistically significant outcome tell us? An outcome with a probability just below .05 (and thus statistically significant) has about a 50/50 chance of being repeated in an exact replication of the experiment. As the probability of the outcome of the experiment decreases (e.g., p = .025, p = .01, p = .005), the likelihood of observing a statistically significant outcome (p < .05) in an exact replication increases. APA recommends reporting the exact probability of the outcome.

NHST (continued)
What do we conclude when a finding is not statistically significant? When the outcome is not statistically significant, we do not reject the null hypothesis. However, we don’t necessarily accept the null hypothesis either — that is, we don’t conclude that the independent variable did not have an effect. We cannot draw a conclusion about the effect of the independent variable: some factor in the experiment may have prevented us from observing an effect of the independent variable (e.g., too few participants).

NHST (continued)
Because decisions about the outcome of an experiment are based on probabilities, Type I or Type II errors may occur. A Type I error occurs when the null hypothesis is rejected but the null hypothesis is true. That is, we claim that the effect of the independent variable is statistically significant (because we observed an outcome with p < .05) when there really is no effect of the independent variable. The probability of a Type I error is alpha, the level of significance (α = .05).

NHST (continued)
A Type II error occurs when the null hypothesis is false but it is not rejected. That is, we claim that the effect of the independent variable is not statistically significant (because we observed an outcome with p > .05) when there really is an effect of the independent variable that our experiment missed. Because of the possibility of Type I and Type II errors, researchers are tentative in their claims.
We use words such as “support for the hypothesis” or “consistent with the hypothesis” rather than stating that a hypothesis has been “proven.”

NHST: Comparing Two Means
The appropriate inferential statistical test when comparing two means obtained from different groups of participants is a t-test for independent groups. The appropriate test when comparing two means obtained from the same participants (or matched groups) is a repeated measures (within-subjects) t-test. A measure of effect size should be reported when NHST is used.

Comparing Two Means (continued)
Independent groups t-test. The t-test for independent groups is defined as the difference between two sample means (e.g., treatment group and control group) divided by the standard error of the mean difference ($s_{M_1 - M_2}$). The calculation formula is:

$$ t = \frac{M_1 - M_2}{\sqrt{\left(\dfrac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}\right)\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}} $$

Comparing Two Means (continued)
Standard deviation and variance. The standard deviation (SD or s) is a measure of how far, on average, a score (X) is from the mean:

$$ s = \sqrt{\frac{\sum (X - M)^2}{N - 1}} $$

The variance ($s^2$) is a measure of variability; it is the square of the standard deviation.

Comparing Two Means (continued)
Either by using a calculator or a computer, obtain a value for the t statistic. Next, identify the probability associated with the outcome. If a computer and statistical software are used, the probability of the outcome will be presented with the value for t as part of the output. If the value for t is calculated using the formula, the probability of the outcome can be found by using the t table (Table A.2 of the Appendix) with df = N − 2.

Comparing Two Means (continued)
If the probability of the outcome is less than .05 (p < .05), reject the null hypothesis of no difference between the means, and conclude that the independent variable had a statistically significant effect on the dependent variable. If the probability of the outcome is greater than .05 (p > .05), do not reject the null hypothesis of no difference between the means. With a nonsignificant outcome, we withhold judgment about the effect of the independent variable. Calculate the effect size. Determine the power of the statistical test.

Comparing Two Means (continued)
A measure of effect size should always be calculated. For two means, Cohen’s d can be calculated using values from the t-test:

$$ d = \frac{2t}{\sqrt{df}} $$

Sometimes a large effect size can be observed with an outcome that is not statistically significant. This can occur when there is not sufficient power to detect the effect of the independent variable (e.g., too few participants).

Comparing Two Means (continued)
A repeated measures (within-subjects) t-test is used to test the difference between performance in the treatment condition and the control condition in a repeated measures design or matched groups design:

$$ t = \frac{\bar{D}}{s_{\bar{D}}} $$

where $\bar{D}$ (“D-bar”) is the mean of the difference scores between the treatment and control conditions for each participant and $s_{\bar{D}}$ is the standard error of the difference scores.

Comparing Two Means (continued)
The standard error of the mean formula:

$$ s_M = \frac{s}{\sqrt{N}} $$

Comparing Two Means (continued)
Although the formula for calculating the t statistic in the repeated measures design is slightly different, the procedures for NHST are the same. The t value is obtained, followed by the associated probability value. If p < .05, reject the null hypothesis and conclude the independent variable had a statistically significant effect on the dependent variable. If p > .05, do not reject the null hypothesis; the outcome of the statistical test was not significant.
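To make these procedures concrete, here is a minimal sketch in Python (the scores are invented for illustration, and SciPy is assumed to be available; neither is part of the original text). It computes the independent-groups t by the pooled-variance formula above, checks it against scipy.stats.ttest_ind, derives Cohen’s d from t and df, and runs the repeated-measures version with scipy.stats.ttest_rel:

```python
import math
from scipy import stats

# Invented example scores (not from the text): treatment vs. control group.
treatment = [5, 7, 6, 8, 9, 7]
control = [4, 5, 6, 5, 4, 6]

n1, n2 = len(treatment), len(control)
m1, m2 = sum(treatment) / n1, sum(control) / n2
s1_sq = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)  # variance s1^2
s2_sq = sum((x - m2) ** 2 for x in control) / (n2 - 1)    # variance s2^2

# Independent-groups t: pooled variance times (1/n1 + 1/n2), as in the formula above.
pooled = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
t_manual = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

t_scipy, p = stats.ttest_ind(treatment, control)  # should match t_manual
df = n1 + n2 - 2                                  # df = N - 2
print(f"t({df}) = {t_manual:.3f} (SciPy: {t_scipy:.3f}), p = {p:.4f}")

# Cohen's d from the t value: d = 2t / sqrt(df).
d = 2 * t_manual / math.sqrt(df)
print(f"Cohen's d = {d:.2f}")

# Repeated measures (within-subjects) t: same participants in both conditions.
t_rm, p_rm = stats.ttest_rel(treatment, control)
print(f"Repeated measures: t({n1 - 1}) = {t_rm:.3f}, p = {p_rm:.4f}")
```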
Data Analysis Involving More Than Two Conditions
An experiment can have one independent variable with more than two levels, or an experiment might have two or more independent variables (each with at least two levels) in a complex design. The most frequently used statistical procedure for experiments with more than two conditions is analysis of variance (ANOVA), which uses null hypothesis significance testing (NHST).

ANOVA
Analysis of variance (ANOVA) is an inferential statistics test used to determine whether an independent variable has had a statistically significant effect on a dependent variable. The logic of ANOVA is based on identifying sources of error variation and systematic variation in the data. In a properly conducted experiment, the differences among participants should be the only source of error variation within each group. The experimental procedures should be held constant within each condition to decrease error variation.

ANOVA (continued)
The second source of variation in a random groups design is variation between the groups. If the null hypothesis is true (no difference between the groups), any observed difference among the means of the groups can be attributed to error variation — the differences among people in the groups. Thus, when the null hypothesis is assumed to be true, any differences among means in the experiment are attributed to error variation within the groups and error variation between the groups.

ANOVA (continued)
When the null hypothesis is false (the independent variable has an effect), the means for the conditions of the experiment should be different. An independent variable that has an effect on behavior should produce systematic differences in the means across the conditions of the experiment. Therefore, when the independent variable has an effect on behavior, differences among group means can be attributed to the effect of the independent variable (systematic variation) plus error variation.

ANOVA (continued)
The F-test is a statistical test that allows us to determine whether the variation due to the independent variable is larger than what would be expected based on error variation alone. The conceptual definition of the F-test is:

$$ F = \frac{\text{variation between groups}}{\text{variation within groups}} $$

ANOVA (continued)
Because “variation between groups” can be attributed to error variation plus systematic variation, and “variation within groups” is attributed to error variation, the F ratio can be rewritten as:

$$ F = \frac{\text{error variation} + \text{systematic variation}}{\text{error variation}} $$

ANOVA (continued)
If the null hypothesis is true, there is no systematic variation between groups (no effect of the independent variable), so the systematic term in the numerator is zero and the F ratio has an expected value of 1.00: error variation divided by error variation equals 1.0.

ANOVA (continued)
As the amount of systematic variation increases (due to the effect of the independent variable), the systematic term in the numerator grows, and the expected value of the F ratio becomes greater than 1.00.
How much greater than 1.00 does the F ratio have to be before we can be confident that it reflects true systematic variation due to the independent variable (and not simply chance factors)? This is where NHST comes in: to be statistically significant, the F value needs to be large enough that its probability of occurrence if the null hypothesis were true is less than our level of significance (p < .05).

ANOVA (continued)
The logic of statistical inference with ANOVA is similar to that used with the t-test. The first step is to assume no difference among the means of the experiment. If the omnibus F-test is statistically significant, we reject the null hypothesis of no difference among means. A statistically significant F-test indicates that there is a difference somewhere among the means in the experiment. The statistically significant omnibus, or overall, F-test does not indicate which means are different.

ANOVA (continued)
The ANOVA Summary Table provides the information for estimating the sources of variance: between groups (systematic + error variation) and within groups (error variation).

Source            SS      df   MS      F      p
Group (between)   54.55    3   18.18   7.80   .002
Error (within)    37.20   16    2.33

The Mean Square for the “Group” independent variable provides an estimate of systematic variation plus error variation. The Mean Square for “Error” provides an estimate of error variation. The F-test is the Group MS divided by the Error MS (18.18 ÷ 2.33 = 7.80). This F-test is statistically significant because .002 < .05.

ANOVA (continued)
The between-groups sum of squares is equal to the sum of the squared differences between each group mean and the overall mean. The within-groups sum of squares is equal to the sum of the squared differences between each individual score in a group and the mean of that group. The total sum of squares is equal to the sum of the between-groups SS and the within-groups SS. A mean square is simply SS divided by its df.

ANOVA (continued)
Because the F-test is statistically significant, we reject the null hypothesis and conclude that the independent variable had a statistically significant effect on the dependent variable. The significant F-test tells us that the group means in the experiment are different — but it doesn’t tell us which means in the experiment are different. It is essential to examine the means to interpret the effect of the independent variable. Simply finding out whether the effect was statistically significant or not is not sufficient when analyzing data.

Calculating Effect Size for Designs with Three or More Independent Groups
The effect size for experiments with three or more groups is based on measures of “strength of association.” These measures allow researchers to estimate the amount of variability (variance) in participants’ scores that can be attributed to the effect of the independent variable. Larger effect sizes indicate that the independent variable can account for or “explain” more of participants’ performance than smaller effect sizes can. In ANOVA, a popular measure of association is eta squared (η²).
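To verify the arithmetic of the summary table, here is a minimal sketch (plain Python plus SciPy for the p-value; the SS and df values come straight from the table above):

```python
from scipy import stats

# SS and df values from the summary table above.
ss_between, df_between = 54.55, 3   # Group (between)
ss_within, df_within = 37.20, 16    # Error (within)

ms_between = ss_between / df_between  # 54.55 / 3 = 18.18
ms_within = ss_within / df_within     # 37.20 / 16 = 2.33 (rounded)

F = ms_between / ms_within            # ~7.8 (the table's 7.80 uses the rounded MS values)
p = stats.f.sf(F, df_between, df_within)  # P(F >= observed) under H0, ~.002

print(f"MS(between) = {ms_between:.2f}, MS(within) = {ms_within:.2f}")
print(f"F({df_between}, {df_within}) = {F:.2f}, p = {p:.3f}")
```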
Effect Size (continued)
Eta squared is easily calculated from values found in the ANOVA summary table:

$$ \eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}} $$

Eta squared can also be calculated simply from the report of an F-test:

$$ \eta^2 = \frac{F \cdot df_{\text{effect}}}{F \cdot df_{\text{effect}} + df_{\text{error}}} $$

Effect Size (continued)
Another measure of effect size for use with three or more groups is Cohen’s f. Calculate f using the value for eta squared:

$$ f = \sqrt{\frac{\eta^2}{1 - \eta^2}} $$

Cohen’s suggested guidelines for interpreting effect sizes using f: small, f = .10; medium, f = .25; large, f = .40.

Assessing Power for Independent Groups Designs
Suppose a researcher observes an effect size of f = .40 (a “large” effect), but the effect of the independent variable is not statistically significant. Suppose there were 5 participants in each of four conditions (df = 3 for the effect of the independent variable). By referring to power tables (Table A.5 in the Appendix), the researcher discovers that the power was .26.

Power (continued)
When power = .26, this indicates that a statistically significant outcome would occur in only approximately one-fourth of the attempts to conduct this experiment under these circumstances (i.e., with 5 participants in each of 4 conditions and an effect size of f = .40). Typically, before they begin their research, researchers identify the number of participants they would need for power = .80 (a statistically significant outcome would occur in 80% of the attempts of an experiment). In this example, the power table indicates we would need 18 participants in each of the 4 conditions, for a total of N = 72.

Comparisons of Two Means
When an independent variable with three or more levels is statistically significant, the next step is to identify which of the group means in the experiment are different: these are called “comparisons of two means.” These comparisons focus on a particular difference between two means. For example, suppose that an experiment has two control groups and one treatment group, and that the F-test for this independent variable with three levels is statistically significant.

Comparisons of Two Means (continued)
One comparison, in this example, would be to determine whether the mean for the treatment group is significantly different from the average of the means for the two control groups. A t-test can be used to compare the means using the following formula:

$$ t = \frac{M_1 - M_2}{\sqrt{MS_{\text{error}}\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}} $$

The MS_error comes from the ANOVA summary table; n1 and n2 are the sample sizes associated with each mean in the test.

Comparisons of Two Means (continued)
The statistical significance of the t-test can be obtained by checking a t-test table (Table A.2 in the text), or by using a computer program in which the observed t value and df are entered and the exact probability of the result is obtained. One Web site to check is: http://math.uc.edu/~brycw/classes/148/tables.htm

Cohen’s d can be calculated for the comparison using the following formula:

$$ d = \frac{2t}{\sqrt{df_{\text{error}}}} $$
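Connecting these formulas to the earlier summary table, a minimal sketch (plain Python; the SS, F, and df values are the ones from that table):

```python
import math

# Values from the ANOVA summary table above.
ss_between, ss_total = 54.55, 54.55 + 37.20
F, df_effect, df_error = 7.80, 3, 16

eta_sq_from_ss = ss_between / ss_total                        # ~.59
eta_sq_from_F = (F * df_effect) / (F * df_effect + df_error)  # ~.59, same quantity

cohens_f = math.sqrt(eta_sq_from_F / (1 - eta_sq_from_F))     # ~1.21, well past "large" (.40)
print(f"eta^2 = {eta_sq_from_ss:.2f} (from SS), {eta_sq_from_F:.2f} (from F)")
print(f"Cohen's f = {cohens_f:.2f}")
```

Both routes to eta squared agree (apart from rounding), which is a useful sanity check when only the F-test is reported.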
Repeated Measures Analysis of Variance
The general procedures and logic for null hypothesis testing using repeated measures analysis of variance are similar to those used for independent groups analysis of variance. Before beginning the ANOVA for a complete repeated measures design, a summary score (e.g., mean) for each participant must be computed for each condition. Descriptive data are calculated to summarize performance for each condition of the independent variable across all participants.

Repeated Measures ANOVA (continued)
The primary way that ANOVA differs for repeated measures is in estimation of error variation, or residual variation. Residual variation is the variation that remains when systematic variation due to the independent variable and participants is removed from the estimate of total variation.

Repeated Measures ANOVA (continued)
Variation due to different participants in conditions is eliminated in repeated measures designs because the same individuals participate in each condition. Because this source of variation is eliminated, repeated measures designs are more sensitive than independent groups designs — they are better able to detect the effect of an independent variable when that effect is present.

Two-Factor Analysis of Variance for Independent Groups Designs
Complex designs have two or more independent variables — each with two or more levels. The ANOVA indicates the statistical significance of main effects of each independent variable and the interaction effect(s) between variables. The analysis of complex designs differs depending on whether an interaction effect is statistically significant or not.

Analysis of a Complex Design with an Interaction Effect
If the omnibus (overall) ANOVA reveals a statistically significant interaction effect, the source of the interaction is identified using simple main effects analyses and comparisons of two means. A simple main effect is the effect of one independent variable at one level of a second independent variable. If an independent variable has three or more levels, comparisons of two means can be used to examine the source of a simple main effect by comparing means two at a time. After the simple main effects are analyzed, researchers examine the main effects of the independent variables.

Analysis of a Complex Design with an Interaction Effect (continued)
Confidence intervals may be drawn around group means to provide information regarding how precisely the population means have been estimated. The wider the intervals around the sample means, the less precise the estimate of the population means. A rule of thumb for interpreting confidence intervals is that if the intervals around the means do not overlap, then a difference between the population means is likely.

Analysis with No Interaction Effect
If an omnibus ANOVA indicates the interaction effect between independent variables is not statistically significant, the next step is to determine whether the main effects of the independent variables are statistically significant. The source of a statistically significant main effect can be specified more precisely by performing comparisons that compare means two at a time and by constructing confidence intervals.

Reporting Results of a Complex Design
The following should be included when describing the results of a complex design experiment:
- Description of variables and definition of levels (conditions) of each;
- Summary statistics for cells of the design in text, table, or figure, including, when appropriate, confidence intervals for group means;
- Report of F-tests for main effects and interaction effects with exact probabilities;
- Effect size measure for each effect;
- Statement of power for nonsignificant effects;
- Simple main effects analysis when the interaction effect is statistically significant, and comparisons of means two at a time, if appropriate;
- Verbal description of a statistically significant interaction effect (when present), referring the reader to differences between cell means across levels of the independent variables;
- Comparisons of two means, when appropriate, to clarify sources of systematic variation among means contributing to a main effect;
- The conclusion that you wish the reader to draw from the results of this analysis.
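As a small illustration of these reporting guidelines, here is a sketch that assembles an APA-style results sentence from the one-way example above (plain Python; the wording is one common pattern, not a required template):

```python
# Assemble an APA-style results sentence from the earlier one-way ANOVA example.
F, df_effect, df_error, p, eta_sq = 7.80, 3, 16, .002, .59

report = (
    f"The effect of group was statistically significant, "
    f"F({df_effect}, {df_error}) = {F:.2f}, p = {p:.3f}, eta^2 = {eta_sq:.2f}."
)
print(report)
```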
This piece discusses general principles of reading and analyzing storybooks, and offers brief descriptions of picture books. It describes how to use the Math Picture Book Analysis Guide with pre-service or in-service teachers.

Activity for Teacher Educators
If you ask your participants to think about resources for teaching math to young children, they may not suggest picture books. However, just as picture books provide opportunities to develop literacy, they can also be used to promote children's mathematical thinking. When discussing how to use picture books to teach math, it is important to make a distinction between three types of books: those in which math is explicit, those in which math is implicit, and everything else.

Explicit math books are written for the express purpose of teaching children math. These may even contain a reference to mathematical concepts in their titles, as in the case of books such as Mouse Shapes by Ellen Stoll Walsh. But other books that also explicitly teach math concepts do not have such titles. For example, Hippos Go Berserk by Sandra Boynton is actually a counting book, but you would not know that from the title.

Other picture books do not explicitly address math, yet the text and illustrations do afford students opportunities to learn about math concepts. A well-known example is Goldilocks and the Three Bears, in which the story involves size comparisons. For example, Papa Bear is the biggest, Mama Bear is medium-sized, and Baby Bear is the smallest. While your participants may not think of Goldilocks as a math story, significant math ideas are implicit in it and important to the plot. Your participants can learn how to use both implicit and explicit math picture books to help children discuss and investigate math ideas.

Finally, we have the rather large category of everything else, that is, books that are not written to teach math and which do not contain significant implicit math. Because every picture book page has objects arranged in space, an adult reader can always ask the child to count them or talk about their location (for example, “The hat is on top of his head”). In other words, adults can interject math conversation into all children’s picture books because “math is all around us.” At the same time, we should not ruin an interesting story by interjecting math to the point of distracting attention from the story line.

Introduction to Picture Book Analysis
An effective way to analyze picture books with participants is to read a picture book together as a whole-group activity. Although they will ultimately be presented with a guide to help them analyze books on their own, this first introduction can be less structured. In this example, we focus on counting. To promote participants' learning about the qualities of books that lend themselves to math conversation, use a good book that also has some text or illustrations that may be confusing or especially challenging for young children. The book should be one well worth reading and should have many positive qualities, such as an interesting story. At the same time, the book should also include problematic text or illustrations that will provide opportunities for interesting group discussion.

Consider just this two-page spread from the counting book Ten Little Fish by Audrey Wood. The illustrations in this book are colorful, beautiful, and intricate. The text says that there are eight fish, and if you count them, there are indeed eight. However, the fish on the left are meant to be deeper underwater.
So, they are drawn smaller. They are also somewhat obscured by the light hitting the water. So, a very young child might have a hard time seeing and counting the fish on the left, even though they are swimming in a line. In addition, the text indicates that as one of the eight fish jumps out of the water, there are now only seven fish remaining in the water. However, all eight fish are still clearly visible. It may be hard for a child to understand that the number seven refers only to those underwater. This is still an interesting book, but it might be a better choice for teaching simple subtraction to children who are already good counters. It might not be the best choice for young children just learning to count objects. There are many books with objects that are more easily countable.

Math Picture Book Analysis Guide
After participants have analyzed a picture book together, introduce the Math Picture Book Analysis Guide (download below). This guide will help participants analyze any book and determine its suitability and usefulness for teaching math to children. For example, the first question in the Math Picture Book Analysis Guide, shown below, helps a participant determine if a particular book is developmentally appropriate. It suggests that if children can already fluently count forwards and backwards, a book about adding and subtracting by one might be more appropriate. If you used the two-page spread from Ten Little Fish, shown above, you could point out that this book could be appropriate as a counting-backward book or a subtraction book. Other questions in the Math Picture Book Analysis Guide will help participants determine which books might be well suited to the children they teach.

Go through the other questions in the guide with participants. For example, you can present the two-page spread from Balancing Act by Ellen Stoll Walsh, which is shown below. In this book, two mice each stand on one end of a stick and are balanced. This page shows what happens when a salamander wants to join. Show how questions 2 through 5 from the guide could be used to analyze a book like this.

Start with question 2, shown below. Balancing Act is about measurement. The stick with animals on either side is like a balance scale, and as animals jump onto either side, the weight on that side changes. Participants might also check “Other,” saying that this book involves ideas of equivalence and nonequivalence.

Next, ask the participants to consider question 3. Point out that unlike Ten Little Fish, which encouraged the child to count objects, Balancing Act shows measurement in a more general way. Neither weights nor scales are explicitly mentioned, but it is clear that the changing weights are causing the stick not to balance.

Move on to question 4. Many children can relate to the idea of balancing. For example, they may have used see-saws at playgrounds, or they may talk about a time when they had to put their arms out to balance while walking along a log. They may be familiar with weights, pointing out that they have seen weights on packages at the grocery store or have seen their pets get weighed at the vet’s office. Participants can suggest activities that would incorporate ideas of balance and weight. For example, some preschool classrooms have balance scales that allow children to put different objects on each side to see what happens.

Finally, discuss question 5.
In a book like Balancing Act, the illustrations from page to page show what happens as more and more animals jump on. The book accurately shows the math content but does not explicitly mention math. Trying Out the Guide and Annotating Books Next, break the participants into small groups to analyze some picture books using the Math Picture Book Analysis Guide, as you did as a whole group. Encourage them to check off boxes for each question and write comments. Have participants annotate the picture books using sticky notes. On the first page, have them make general notes about the book and its usefulness. Annotations for individual pages may include notes about the specific math found on the page, vocabulary (math or otherwise), general reactions or feelings about the book, as well as questions that a teacher might ask a child while reading. Here are some possible annotations for the Balancing Act spread. More detailed descriptions of how to annotate specific math books can be found in the Using Picture Books activities in the content modules. It is important to note, however, that only a small number of these questions should actually be used while reading. Pausing for too long and asking too many questions can easily frustrate children who are engaged in the story and eager to see what happens next. Finally, after analyzing books, participants can select a picture book about a specific topic. Have them plan a lesson around the book. Emphasize that reading books with math content is in many respects no different from reading other books. In both cases, the primary goal is to enjoy and learn from the books. Also, in both cases, the adult reader should employ interactive reading (similar to dialogic reading, that is, reading that engages adult and child in a conversation around the book): the adult asks questions about the book, encourages the child’s attention and participation, and in general takes the child on an intellectual adventure. Most likely, your participants will have studied interactive reading in classes on literacy, but if not, A Teacher's Guide to Reading Math Picture Books (download below) presents the major principles of interactive reading in the context of picture books with a significant focus on math. Picture books afford many opportunities to explore children's mathematical thinking. This is true of explicit math books as well as books in which significant math is implicit. The Math Picture Book Analysis Guide and the other picture book resources mentioned across our modules can help participants analyze and select picture books effectively and use them to teach math concepts.
What’s it: Factors of production are inputs for producing goods and providing services. They consist of land, labor, capital, and entrepreneurship. The last one, entrepreneurship, combines the other three factors of production. They are also called resources or inputs. The quantity and quality of resources determine economic growth. When they are more abundantly available and of better quality, the economy can produce more goods and services. What are the types of factors of production? Economists divide the production factors into four categories: Land is not only the site for agriculture or office buildings but also includes natural resources, which we use to produce goods and services. Examples are wood, oil, coal, gas, metals, and other mineral resources. Natural resources consist of two groups, renewable and non-renewable resources. - Renewable resources are available in essentially unlimited quantities because nature continually replenishes them. Examples are air and sunlight. Water is another example; we can use it to generate electricity and for consumption. - Non-renewable resources will run out if we use them irresponsibly. They include oil, natural gas, coal, metals, and other minerals. Some countries, such as Indonesia and China, are rich in natural resources. They then specialize in their extraction and production, including developing their downstream industries. Other countries, such as South Korea and Japan, have inadequate natural resources. Therefore, they are highly dependent on imports from other countries. To promote economic growth, they specialize not in natural resources but in capital, labor, and entrepreneurship. Businesses use natural resources as inputs in the production process. For example, they use the mineral bauxite to produce aluminum, ultimately sold to various industries such as automobile and airplane manufacturing. Labor includes individual effort, energy, skills, and knowledge used in the production process. Some jobs require more effort and physical exertion than knowledge or skill, while others rely more on skill and knowledge than effort. As with natural resources, the quantity and quality of labor determine how much output can be produced. For example, when hiring more employees, a business can produce more goods or services. Likewise, when their quality improves, workers are more productive and produce more output than before. In this case, education, experience, and training determine their quality. In this context, capital refers to physical capital, or man-made tools used by businesses to produce goods and services. In economics, it excludes financial capital (money). Examples are machinery, equipment, tools, factories, and other buildings. Capital varies by business and type of work. For example, a doctor uses a stethoscope and examination room to provide medical services. Workers in factories use machines to produce food or cars. And copywriters use computers to create content. Entrepreneurship represents the willingness and ability to take risks in pooling and organizing other resources to produce goods and services. We refer to individuals who do this as entrepreneurs; they are business owners or founders. To realize product ideas, entrepreneurs combine raw materials or inputs, employees, and capital. To get all three, they need financial capital, for example from bank loans, or they invest their own money. To be successful, they need skills such as innovative thinking as well as organizational, management, and leadership skills.
The most successful entrepreneurs are innovators. They find new ways to produce goods and services. Think of people like Bill Gates, Steve Jobs, Larry Page, and Sergey Brin. Without their entrepreneurship, you would not benefit from Microsoft software, Apple devices, or searching online. Why is money not a factor of production? Economists distinguish financial capital (money) from physical capital. They do not categorize financial capital as a resource for producing goods and services. This is because businesses cannot use money directly to produce goods or services. For example, they can’t make canned food out of money. Likewise, they can’t make laptops with money. So, in this case, money is not a productive resource. We can indeed use it to buy physical capital such as machines or computers. However, the contribution is not direct. Have you ever seen a copywriter write content using money? Money only facilitates trade. With money, the copywriter can buy a computer on which he can write content. Why are resources scarce? Resources are scarce. They are available in limited quantities to satisfy our unlimited needs and wants. Consider clothes. Clothes are made of cotton, which is grown on land. In cotton-producing countries, not all land is used to grow cotton. Some land is used to grow other products, such as soybeans or rice, which we also need. Another example is workers in a clothing factory. The number of people who cut and sew clothes in factories is also limited. Not everyone works in a clothing factory. Other people need to work in other factories to produce other goods or services we need. Scarcity then forces us to make choices. For example, what products do we need to produce? Cost of using factors of production When we use factors of production, we incur costs. For resource suppliers, these costs become income, sometimes also referred to as the reward they receive. Here are the rewards for the factors of production: - Rent as compensation for land - Wages for the use of labor - Interest for capital use - Profit for entrepreneurship If we add up the four, that is factor income, which in aggregate is national income. Why are resources important to the economy? Economic growth depends not only on the quantity of production factors but also on their quality. Both determine the productive capacity of the economy, i.e., how many goods and services can be produced, both in the short and long run. Why is quality important? Let’s take labor as the first case. When there is more labor (quantity), the business can produce more output. For example, one person produces 10 units. If the company recruits 10 new workers, then the company gets an additional 100 units of output. Then, even with the same number of workers, the company can still produce more output. Companies can increase workers’ productivity (quality), for example, through training. Thus, say, after training, a worker can produce 11 units of output. The second case is capital. Having more machines means higher production capacity. Likewise, with more sophisticated machines (quality), the company can produce more output from the same amount of input. For example, a copywriter using a computer can generate more articles than one using a typewriter. Quality is why resource-poor countries such as South Korea and Japan could grow into developed countries. They invest in the productivity of labor and capital to be able to produce more. They also encourage entrepreneurship.
Entrepreneurs are a vital engine of economic growth, giving rise to some of the world’s major companies, such as Samsung Electronics, Hyundai Motor, and POSCO in South Korea; and Toyota Motor Corporation, SoftBank Group, Mitsubishi UFJ Financial Group, and Sony Corporation in Japan.
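To make the quantity-versus-quality arithmetic above concrete, here is a minimal sketch in Python. The numbers are the hypothetical ones from the worked example in the text, not real data:

```python
def total_output(workers, units_per_worker):
    """Output = quantity of labor (workers) x quality of labor (productivity)."""
    return workers * units_per_worker

baseline = total_output(10, 10)        # 10 workers at 10 units each -> 100 units
more_workers = total_output(20, 10)    # quantity channel: 10 extra workers -> +100 units
after_training = total_output(10, 11)  # quality channel: each worker now makes 11 -> +10 units

print(baseline, more_workers, after_training)  # 100 200 110
```

The same two channels apply to capital: more machines raise the first factor, while more sophisticated machines raise the second.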
Vision is the most important sense for birds, since good eyesight is essential for safe flight, and this group has a number of adaptations which give visual acuity superior to that of other vertebrate groups; a pigeon has been described as "two eyes with wings". The avian eye resembles that of a reptile, with ciliary muscles that can change the shape of the lens rapidly and to a greater extent than in mammals. Birds have the largest eyes relative to their size in the animal kingdom, and movement is consequently limited within the eye's bony socket. In addition to the two eyelids usually found in vertebrates, the bird's eye is protected by a third transparent movable membrane. The eye's internal anatomy is similar to that of other vertebrates, but has a structure, the pecten oculi, unique to birds. Birds, unlike humans but like fish, amphibians and reptiles, have four types of colour receptors in the eye. These give birds the ability to perceive not only the visible range but also the ultraviolet part of the spectrum, and other adaptations allow for the detection of polarised light or magnetic fields. Birds have proportionally more light receptors in the retina than mammals, and more nerve connections between the photoreceptors and the brain. Some bird groups have specific modifications to their visual system linked to their way of life. Birds of prey have a very high density of receptors and other adaptations that maximise visual acuity. The placement of their eyes gives them good binocular vision, enabling accurate judgement of distances. Nocturnal species have tubular eyes, low numbers of colour detectors, but a high density of rod cells which function well in poor light. Terns, gulls and albatrosses are amongst the seabirds which have red or yellow oil drops in the colour receptors to improve distance vision, especially in hazy conditions. The eye of a bird most closely resembles that of the reptiles. Unlike the mammalian eye, it is not spherical, and the flatter shape enables more of its visual field to be in focus. A circle of bony plates, the sclerotic ring, surrounds the eye and holds it rigid, but an improvement over the reptilian eye, also found in mammals, is that the lens is pushed further forward, increasing the size of the image on the retina. Most birds cannot move their eyes, although there are exceptions, such as the Great Cormorant. Birds with eyes on the sides of their heads have a wide visual field, useful for detecting predators, while those with eyes on the front of their heads, such as owls, have binocular vision and can estimate distances when hunting. The American Woodcock probably has the largest visual field of any bird, 360° in the horizontal plane and 180° in the vertical plane. The eyelids of a bird are not used in blinking. Instead the eye is lubricated by the nictitating membrane, a third concealed eyelid that sweeps horizontally across the eye like a windscreen wiper. The nictitating membrane also covers the eye and acts as a contact lens in many aquatic birds when they are under water. When sleeping, the lower eyelid rises to cover the eye in most birds, with the exception of the horned owls, in which the upper eyelid is mobile. The eye is also cleaned by tear secretions from the lachrymal gland and protected by an oily substance from the Harderian glands which coats the cornea and prevents dryness. Relative to the size of the animal, the eye of a bird is larger than that of any other group of animals, although much of it is concealed in its skull.
The Ostrich has the largest eye of any land vertebrate, with an axial length of 50 mm (2 in), twice that of the human eye. Bird eye size is broadly related to body mass. A study of five orders (parrots, pigeons, petrels, raptors and owls) showed that eye mass is proportional to body mass, but, as expected from their habits and visual ecology, raptors and owls have relatively large eyes for their body mass. Behavioural studies show that many avian species focus on distant objects preferentially with their lateral and monocular field of vision, and birds will orientate themselves sideways to maximise visual resolution. For a pigeon, resolution is twice as good with sideways monocular vision as with forward binocular vision, whereas for humans the converse is true. The performance of the eye in low light levels depends on the distance between the lens and the retina, and small birds are effectively forced to be diurnal because their eyes are not large enough to give adequate night vision. Although many species migrate at night, they often collide with even brightly lit objects like lighthouses or oil platforms. Birds of prey are diurnal because, although their eyes are large, they are optimised to give maximum spatial resolution rather than light gathering, so they also do not function well in poor light. Many birds have an asymmetry in the eye's structure which enables them to keep the horizon and a significant part of the ground in focus simultaneously. The cost of this adaptation is that they have myopia in the lower part of their field of view. Birds with relatively large eyes for their body mass, such as Common Redstarts and European Robins, sing earlier at dawn than birds of the same body mass but smaller eyes. However, if birds have the same eye size but different body masses, the larger species sings later than the smaller. This may be because the smaller bird has to start the day earlier because of weight loss overnight. Nocturnal birds have eyes optimised for visual sensitivity, with large corneas relative to the eye’s length, whereas diurnal birds have longer eyes relative to the corneal diameter to give greater visual acuity. Information about the activities of extinct species can be deduced from measurements of the sclerotic ring and orbit depth. For the latter measurement to be made, the fossil must have retained its three-dimensional shape, so activity pattern cannot be determined with confidence from flattened specimens like Archaeopteryx, which has a complete sclerotic ring but no orbit depth measurement. Anatomy of the eye The main structures of the bird eye are similar to those of other vertebrates. The outer layer of the eye consists of the transparent cornea at the front and two layers of sclera, a tough white collagen fibre layer which surrounds the rest of the eye and supports and protects the eye as a whole. The eye is divided internally by the lens into two main segments: the anterior segment and the posterior segment. The anterior chamber is filled with a watery fluid called the aqueous humour, and the posterior chamber contains the vitreous humour, a clear jelly-like substance. The lens is a transparent convex or 'lens'-shaped body with a harder outer layer and a softer inner layer. It focuses the light on the retina. The shape of the lens can be altered by ciliary muscles which are directly attached to the lens capsule by means of the zonular fibres.
In addition to these muscles, some birds also have a second set, Crampton’s muscles, that can change the shape of the cornea, giving birds a greater range of accommodation than is possible for mammals. This accommodation can be rapid in some diving water birds such as the mergansers. The iris is a coloured, muscularly operated diaphragm in front of the lens which controls the amount of light entering the eye. At the centre of the iris is the pupil, the variable circular area through which the light passes into the eye. The retina is a relatively smooth curved multi-layered structure containing the photosensitive rod and cone cells with the associated neurons and blood vessels. The density of the photoreceptors is critical in determining the maximum attainable visual acuity. Humans have about 200,000 receptors per mm², but the House Sparrow has 400,000 and the Common Buzzard 1,000,000. The photoreceptors are not all individually connected to the optic nerve, and the ratio of nerve ganglia to receptors is important in determining resolution. This is very high for birds; the White Wagtail has 100,000 ganglion cells to 120,000 photoreceptors. Rods are more sensitive to light but give no colour information, whereas the less sensitive cones enable colour vision. In diurnal birds, 80% of the receptors may be cones (90% in some swifts), whereas nocturnal owls have almost all rods. As with other vertebrates except placental mammals, some of the cones may be double structures. These can amount to 50% of all cones in some species. Towards the centre of the retina is the fovea, which has a greater density of receptors and is the area of greatest forward visual acuity, i.e. the sharpest, clearest detection of objects. In 54% of birds, including birds of prey, kingfishers, hummingbirds and swallows, there is a second fovea for enhanced sideways viewing. The optic nerve is a bundle of nerve fibres which carry messages from the eye to the relevant parts of the brain and vice versa. Like mammals, birds have a small blind spot without photoreceptors at the optic disc, under which the optic nerve and blood vessels join the eye. The pecten is a poorly understood body consisting of folded tissue which projects from the retina. It is well supplied with blood vessels, appears to keep the retina supplied with nutrients, and may also shade the retina from dazzling light or aid in detecting moving objects. The pecten oculi is abundantly filled with melanin granules, which have been proposed to absorb stray light entering the bird eye to reduce background glare. Slight warming of the pecten oculi due to the absorption of light by these melanin granules has been proposed to enhance the metabolic rate of the pecten; this in turn is suggested to increase the secretion of nutrients into the vitreous humour, eventually to be absorbed by the avascular retina of birds for improved nutrition. Extra-high enzymatic activity of alkaline phosphatase in the pecten oculi has been proposed to support this high secretory activity, supplementing the nutrition of the retina. The choroid is a layer situated behind the retina which contains many small arteries and veins. These provide arterial blood to the retina and drain venous blood. The choroid contains melanin, a pigment which gives the inner eye its dark colour, helping to prevent disruptive reflections. There are two sorts of light receptors in a bird’s eye, rods and cones. Rods, which contain the visual pigment rhodopsin, are better for night vision because they are sensitive to small quantities of light.
Cones detect specific colours (or wavelengths) of light, so they are more important to colour-orientated animals such as birds. Most birds are tetrachromatic, possessing four types of cone cells, each with a distinctive maximal absorption peak. In some birds, the maximal absorption peak of the cone cell responsible for the shortest wavelength extends to the ultraviolet (UV) range, making them UV-sensitive. Pigeons probably have an additional pigment and therefore might be pentachromatic. The four spectrally distinct cone pigments are derived from the protein opsin, linked to a small molecule called retinal, which is closely related to vitamin A. When the pigment absorbs light, the retinal changes shape and alters the membrane potential of the cone cell, affecting neurones in the ganglion layer of the retina. Each neurone in the ganglion layer may process information from a number of photoreceptor cells, and may in turn trigger a nerve impulse to relay information along the optic nerve for further processing in specialised visual centres in the brain. The more intense a light, the more photons are absorbed by the visual pigments, the greater the excitation of each cone, and the brighter the light appears. By far the most abundant cone pigment in every bird species examined is the long-wavelength form of iodopsin, which absorbs at wavelengths near 570 nm. This is roughly the spectral region occupied by the red- and green-sensitive pigments in the primate retina, and this visual pigment dominates the colour sensitivity of birds. In penguins, this pigment appears to have shifted its absorption peak to 543 nm, presumably an adaptation to a blue aquatic environment. The information conveyed by a single cone is limited: by itself, the cell cannot tell the brain which wavelength of light caused its excitation. A visual pigment may absorb two wavelengths equally, but even though their photons are of different energies, the cone cannot tell them apart, because they both cause the retinal to change shape and thus trigger the same impulse. For the brain to see colour, it must compare the responses of two or more classes of cones containing different visual pigments, so the four pigments in birds give increased discrimination. Each cone of a bird or reptile contains a coloured oil droplet; these no longer exist in mammals. The droplets, which contain high concentrations of carotenoids, are placed so that light passes through them before reaching the visual pigment. They act as filters, removing some wavelengths and narrowing the absorption spectra of the pigments. This reduces the response overlap between pigments and increases the number of colours that a bird can discern. Six types of cone oil droplets have been identified; five of these have carotenoid mixtures that absorb at different wavelengths and intensities, and the sixth type has no pigments. The cone pigments with the lowest maximal absorption peak, including those that are UV-sensitive, possess the 'clear' or 'transparent' type of oil droplet, with little spectral tuning effect. The colours and distributions of retinal oil droplets vary considerably among species, and are more dependent on the ecological niche utilised (hunter, fisher, herbivore) than on genetic relationships. As examples, diurnal hunters like the Barn Swallow and birds of prey have few coloured droplets, whereas the surface-fishing Common Tern has a large number of red and yellow droplets in the dorsal retina.
The evidence suggests that oil droplets respond to natural selection faster than the cones' visual pigments. Even within the range of wavelengths that are visible to humans, passerine birds can detect colour differences that humans do not register. This finer discrimination, together with the ability to see ultraviolet light, means that many species show sexual dichromatism that is visible to birds but not humans. Migratory songbirds use the Earth’s magnetic field, stars, the Sun, and polarised light patterns to determine their migratory direction. An American study showed that migratory Savannah Sparrows used polarised light from an area of sky near the horizon to recalibrate their magnetic navigation system at both sunrise and sunset. This suggested that skylight polarisation patterns are the primary calibration reference for all migratory songbirds. However, it appears that birds may be responding to secondary indicators of the angle of polarisation, and may not actually be capable of directly detecting polarisation direction in the absence of these cues. Some birds can perceive ultraviolet light, which is involved in courtship. Many birds show plumage patterns in ultraviolet that are invisible to the human eye; some birds whose sexes appear similar to the naked eye are distinguished by the presence of ultraviolet-reflective patches on their feathers. Male Blue Tits have an ultraviolet-reflective crown patch which is displayed in courtship by posturing and the raising of their nape feathers. Male Blue Grosbeaks with the brightest and most UV-shifted blue in their plumage are larger, hold the most extensive territories with abundant prey, and feed their offspring more frequently than other males do. The bill’s appearance is important in the interactions of the Blackbird. Although the UV component seems unimportant in interactions between territory-holding males, where the degree of orange is the main factor, the female responds more strongly to males whose bills have strong UV reflectance. A UV receptor may give an animal an advantage in foraging for food. The waxy surfaces of many fruits and berries reflect UV light that might advertise their presence. Common Kestrels are able to locate the trails of voles visually: these small rodents lay scent trails of urine and faeces that reflect UV light, making them visible to the kestrels, particularly in the spring before the scent marks are covered by vegetation. Birds can resolve rapid movements better than humans, for whom flickering at a rate greater than 50 Hz appears as continuous movement. Humans cannot therefore distinguish individual flashes of a fluorescent light bulb oscillating at 60 Hz, but Budgerigars and chickens have flicker thresholds of more than 100 Hz. A Cooper's Hawk can pursue agile prey through woodland and avoid branches and other objects at high speed; to humans such a chase would appear as a blur. Birds can also detect slow-moving objects. The movement of the sun and the constellations across the sky is imperceptible to humans, but detected by birds. The ability to detect these movements allows migrating birds to orient themselves properly. To obtain steady images while flying, or when perched on a swaying branch, birds hold the head as steady as possible with compensating reflexes. Maintaining a steady image is especially relevant for birds of prey. Edges and shapes When an object is partially blocked by another, humans unconsciously tend to fill in the gap and complete the shapes (see amodal perception).
It has, however, been demonstrated that pigeons do not complete occluded shapes. A study based on altering the grey level of a perch that was coloured differently from the background showed that budgerigars do not detect edges based on colours. The perception of magnetic fields by migratory birds has been suggested to be light-dependent. Birds move their head to detect the orientation of the magnetic field, and studies on the neural pathways have suggested that birds may be able to "see" the magnetic fields. The right eye of a migratory bird contains photoreceptive proteins called cryptochromes. Light excites these molecules to produce unpaired electrons that interact with the Earth's magnetic field, thus providing directional information. Variations across bird groups Diurnal birds of prey The visual ability of birds of prey is legendary, and the keenness of their eyesight is due to a variety of factors. Raptors have large eyes for their size, 1.4 times greater than the average for birds of the same weight, and the eye is tube-shaped to produce a larger retinal image. The retina has a large number of receptors per square millimetre, which determines the degree of visual acuity. The more receptors an animal has, the higher its ability to distinguish individual objects at a distance, especially when, as in raptors, each receptor is typically attached to a single ganglion. Many raptors have foveas with far more rods and cones than the human fovea (65,000/mm² in the American Kestrel, 38,000 in humans), and this provides these birds with spectacular long-distance vision. The fovea itself can also be lens-shaped, increasing the effective density of receptors further. This combination of factors gives Buteo buzzards distance vision 6 to 8 times better than that of humans. The forward-facing eyes of a bird of prey give binocular vision, which is assisted by a double fovea. The raptor's adaptations for optimum visual resolution (an American Kestrel can see a 2 mm insect from the top of an 18 m tree) have a disadvantage in that its vision is poor in low light levels, and it must roost at night. Raptors may have to pursue mobile prey in the lower part of their visual field, and therefore do not have the lower-field myopia adaptation demonstrated by many other birds. Scavenging birds like vultures do not need such sharp vision, so a condor has only a single fovea, with about 35,000 receptors per mm². Vultures, however, have high physiological activity of many important enzymes suited to their clear distance vision. Raptors lack coloured oil drops in the cones, probably have colour perception similar to that of humans, and lack the ability to detect polarised light. The generally brown, grey and white plumage of this group, and the absence of colour displays in courtship, suggest that colour is relatively unimportant to these birds. In most raptors, a prominent eye ridge and its feathers extend above and in front of the eye. This "eyebrow" gives birds of prey their distinctive stare. The ridge physically protects the eye from wind, dust, and debris and shields it from excessive glare. The Osprey lacks this ridge, although the arrangement of the feathers above its eyes serves a similar function; it also possesses dark feathers in front of the eye which probably serve to reduce the glare from the water surface when the bird is hunting for its staple diet of fish. Owls have very large eyes for their size, 2.2 times greater than the average for birds of the same weight, and positioned at the front of the head.
The eyes have a field overlap of 50–70%, giving better binocular vision than for diurnal birds of prey (overlap 30–50%). The Tawny Owl's retina has about 56,000 light-sensitive rods per square millimetre (36 million per square inch); earlier claims that it could see in the infrared part of the spectrum have been dismissed. Adaptations to night vision include the large size of the eye, its tubular shape, large numbers of closely packed retinal rods, and an absence of cones, since cone cells are not sensitive enough for a low-photon nighttime environment. There are few coloured oil drops, which would reduce the light intensity, but the retina contains a reflective layer, the tapetum lucidum. This increases the amount of light each photosensitive cell receives, allowing the bird to see better in low light conditions. Owls normally have only one fovea, and that is poorly developed except in diurnal hunters like the Short-eared Owl. Besides owls, bat hawks, frogmouths and nightjars also display good night vision. Some bird species nest deep in cave systems which are too dark for vision, and find their way to the nest with a simple form of echolocation. The Oilbird is the only nocturnal bird to echolocate, but several Aerodramus swiftlets also utilise this technique, with one species, the Atiu Swiftlet, also using echolocation outside its caves. Seabirds such as terns and gulls that feed at the surface or plunge for food have red oil droplets in the cones of their retinas. This improves contrast and sharpens distance vision, especially in hazy conditions. Birds that have to look through an air/water interface have more deeply coloured carotenoid pigments in the oil drops than other species. Birds that fish by stealth from above the water have to correct for refraction, particularly when the fish are observed at an angle. Reef Herons and Little Egrets appear able to make the corrections needed when capturing fish, and they are more successful in catching fish when strikes are made at an acute angle; this higher success may be due to the inability of the fish to detect their predators. Other studies indicate that egrets work within a preferred angle of strike, and that the probability of misses increases when the angle deviates too far from the vertical, leading to an increased difference between the apparent and real depth of the prey. Birds that pursue fish under water, like auks and divers, have far fewer red oil droplets, but they have special flexible lenses and use the nictitating membrane as an additional lens. This allows greater optical accommodation for good vision in air and water. Cormorants have a greater range of visual accommodation, at 50 dioptres, than any other bird, but the kingfishers are considered to have the best all-round (air and water) vision. Tubenosed seabirds, which come ashore only to breed and spend most of their life wandering close to the surface of the oceans, have a long narrow area of visual sensitivity on the retina. This region, the area giganto cellularis, has been found in the Manx Shearwater, Kerguelen Petrel, Great Shearwater, Broad-billed Prion and Common Diving-petrel. It is characterised by the presence of ganglion cells which are regularly arrayed and larger than those found in the rest of the retina, and which morphologically appear similar to the cells of the retina in cats. The location and cellular morphology of this novel area suggest a function in the detection of items in a small binocular field projecting below and around the bill.
It is not concerned primarily with high spatial resolution, but may assist in the detection of prey near the sea surface as a bird flies low over it. The Manx Shearwater, like many other seabirds, visits its breeding colonies at night to reduce the chances of attack by aerial predators. Two aspects of its optical structure suggest that the eye of this species is adapted to vision at night. In the shearwater's eyes the lens does most of the bending of light necessary to produce a focused image on the retina. The cornea, the outer covering of the eye, is relatively flat and so of low refractive power. In a diurnal bird like the pigeon, the reverse is true; the cornea is highly curved and is the principal refractive component. The ratio of refraction by the lens to that by the cornea is 1.6 for the shearwater and 0.4 for the pigeon; the figure for the shearwater is consistent with those for a range of nocturnal birds and mammals. The shorter focal length of shearwater eyes gives them a smaller, but brighter, image than is the case for pigeons, so the latter have sharper daytime vision. Although the Manx Shearwater has adaptations for night vision, the effect is small, and it is likely that these birds also use smell and hearing to locate their nests. It used to be thought that penguins were short-sighted on land. Although the cornea is flat and adapted to swimming underwater, the lens is very strong and can compensate for the reduced corneal focusing when out of water. Almost the opposite solution is used by the Hooded Merganser, which can bulge part of the lens through the iris when submerged.
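The refraction problem faced by herons and egrets can be illustrated with Snell's law. The sketch below is my own illustration of the geometry, not taken from the studies described above; it assumes the standard refractive index of water, about 1.33:

```python
import math

N_WATER = 1.33  # standard refractive index of water; air is ~1.0

def apparent_depth(real_depth, angle_from_vertical_deg):
    """Apparent depth of underwater prey seen from the air.

    The line of sight meets the surface at the given angle from the
    vertical; Snell's law bends the ray toward the vertical under water,
    so the prey appears shallower than it really is.
    """
    theta_air = math.radians(angle_from_vertical_deg)
    # Snell's law: sin(theta_air) = n_water * sin(theta_water)
    theta_water = math.asin(math.sin(theta_air) / N_WATER)
    # Project the prey's position back along the straight in-air ray.
    return real_depth * math.tan(theta_water) / math.tan(theta_air)

for angle in (5, 30, 60):
    print(angle, round(apparent_depth(1.0, angle), 2))
# 5 -> 0.75, 30 -> 0.7, 60 -> 0.5: the gap between apparent and real
# depth widens as the strike angle moves away from the vertical,
# consistent with the reported drop in strike success at shallow angles.
```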
"The anatomical relationships between the avian eye, orbit and sclerotic ring: implications for inferring activity patterns in extinct birds". Journal of Anatomy 212 (6): 781–794. doi:10.1111/j.1469-7580.2008.00897.x. PMC 2423400. PMID 18510506. - Sivak, Jacob G. (2004). "Through the Lens Clearly: Phylogeny and Development". Invest. Ophthalmol. Vis. Sci. 45 (3): 740–747. doi:10.1167/iovs.03-0466. PMID 14985284. - Nalbach Hans-Ortwin; Wolf-Oberhollenzer, Friedericke; Remy Monika. "Exploring the image" in Ziegler & Bischof (1993) 26–28 - Bawa S.R. and YashRoy R.C. (1974) Structure and function of vulture pecten. Cells Tissues Organs (Earlier name: Acta Anatomica), vol 89, pp. 473-480. https://www.researchgate.net/publication/231569868_Structure_and_function_of_vulture_pecten?ev=prf_pub - Bawa S.R. and YashRoy R.C. (1972) Effect of dark and light adaptation on the retina and pecten of chicken. Experimental Eye Research, vol. 13, pp.92-97. https://www.researchgate.net/publication/18108932_Effect_of_dark_and_light_adaptation_on_the_retina_and_pecten_of_chicken?ev=prf_pub - Hart, NS; Partridge, J.C.; Bennett, A.T.D.; Cuthill, I.C. (2000). "Visual pigments, cone oil droplets and ocular media in four species of estrildid finch" (PDF[dead link]). Journal of Comparative Physiology A 186 (7–8): 681–694. doi:10.1007/s003590000121. - The effect of the coloured oil droplets is to narrow and shift the absorption peak for each pigment. The absorption peaks without the oil droplets would be broader and less peaked, but these are not shown here. - Goldsmith, Timothy H. (July 2006). "What birds see" (PDF[dead link]). Scientific American: 69–75. - Wilkie, Susan E.; Vissers, PM; Das, D; Degrip, WJ; Bowmaker, JK; Hunt, DM (1998). "The molecular basis for UV vision in birds: spectral characteristics, cDNA sequence and retinal localization of the UV-sensitive visual pigment of the budgerigar (Melopsittacus undulatus)". Biochemical Journal 330 (Pt 1): 541–47. PMC 1219171. PMID 9461554. - Varela, F. J.; Palacios, A. G.; Goldsmith T. M. "Color vision of birds" in Ziegler & Bischof (1993) 77–94 - Jacky Emmerton, Juan D. Delhis (Mar 1980). "Wavelength discrimination in the `visible' and ultraviolet spectrum by pigeons". Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology 141 (1). - Emmerton, J. A. (1975). The colour vision of the pigeon (Ph.D.). Durham E-Theses. Durham University. - Bowmaker, J. K.; Martin, G. R. (January 1985). "Visual pigments and oil droplets in the penguin, Spheniscus humbolti". Journal of Comparative Physiology 156 (1): 71–77. doi:10.1007/BF00610668. - Goldsmith, T. H.; Collins, JS; Licht, S (1984). "The cone oil droplets of avian retinas". Vision Research. 24 (11): 1661–1671. doi:10.1016/0042-6989(84)90324-9. PMID 6533991. - Vorobyev, M.; Osorio, D.; Bennett, A. T. D.; Marshall, N. J.; Cuthill, I. C. (3 July 1998). "Tetrachromacy, oil droplets and bird plumage colours". Journal of Comparative Physiology A: Neuroethology Sensory Neural and Behavioral Physiology 183 (5): 621–633. doi:10.1007/s003590050286. - Eaton, Muir D. (August 2005). "Human vision fails to distinguish widespread sexual dichromatism among sexually "monochromatic" birds". Proceedings of the National Academy of Sciences of the United States of America 102 (31): 10942–10946. doi:10.1073/pnas.0501891102. PMC 1182419. PMID 16033870. - Muheim, Rachel; Phillips, JB; Akesson, S (August 2006). "Polarized light cues underlie compass calibration in migratory songbirds" (PDF). 
Science 313 (5788): 837–839. doi:10.1126/science.1129709. PMID 16902138. - Greenwood, Verity J.; Smith, EL; Church, SC; Partridge, JC (2003). "Behavioural investigation of polarisation sensitivity in the Japanese quail (Coturnix coturnix japonica) and the European starling (Sturnus vulgaris)". The Journal of Experimental Biology 206 (Pt 18): 3201–3210. doi:10.1242/jeb.00537. PMID 12909701. - Carvalho, L. S.; Cowling, J. A.; Wilkie, S. E.; Bowmaker, J. K.; Hunt, D. M. (2007). "The molecular evolution of avian ultraviolet- and violet-sensitive visual pigments". Molecular Biology and Evolution 24 (8): 1843–52. doi:10.1093/molbev/msm109. - Andersson, S.; J. Ornborg & M. Andersson (1998). "Ultraviolet sexual dimorphism and assortative mating in blue tits". Proceedings of the Royal Society B 265 (1395): 445–50. doi:10.1098/rspb.1998.0315. - Bright, Ashleigh.; Waas, Joseph R. (August 2002). "Effects of bill pigmentation and UV reflectance during territory establishment in blackbirds" (PDF). Animal Behaviour 64 (2): 207–213. doi:10.1006/anbe.2002.3042. - Viitala, Jussi; Korplmäki, Erkki; Palokangas, Pälvl; Koivula, Minna (1995). "Attraction of kestrels to vole scent marks visible in ultraviolet light". Nature 373 (6513): 425–27. doi:10.1038/373425a0. - Sekuler A B, Lee J A J, Shettleworth S J (1996). "Pigeons do not complete partly occluded figures". Perception 25 (9): 1109–1120. doi:10.1068/p251109. PMID 8983050. - Bhagavatula P, Claudianos C, Ibbotson M, Srinivasan M (2009). "Edge Detection in Landing Budgerigars (Melopsittacus undulatus)". In Warrant, Eric. PLoS ONE 4 (10): e7301. doi:10.1371/journal.pone.0007301. PMC 2752810. PMID 19809500. - Mouritsen, Henrik; Gesa Feenders, Miriam Liedvogel, Kazuhiro Wada, and Erich D. Jarvis (2005). "Night-vision brain area in migratory songbirds". PNAS 102 (23): 8339–8344. doi:10.1073/pnas.0409575102. PMC 1149410. PMID 15928090. - Mouritsen, H.; Feenders, G; Liedvogel, M; Kropp, W (2004). "Migratory birds use head scans to detect the direction of the Earth's magnetic field". Current Biology 14 (21): 1946–1949. doi:10.1016/j.cub.2004.10.025. PMID 15530397. - Heyers D, Manns M, Luksch H, Güntürkün O, Mouritsen H (2007). "A Visual Pathway Links Brain Structures Active during Magnetic Compass Orientation in Migratory Birds". In Iwaniuk, Andrew. PLoS ONE 2 (9): e937. doi:10.1371/journal.pone.0000937. PMC 1976598. PMID 17895978. - Shanor, Karen; Kanwal, Jagmeet (2009). Bats sing, mice giggle: revealing the secret lives of animals. Icon Books. p. 25. ISBN 1-84831-071-4. (Despite its title, this is written by professional scientists with many references) - Heyers, Dominik; Manns, M; Luksch, H; Güntürkün, O; Mouritsen, H; Iwaniuk, Andrew (September 2007). "A Visual Pathway Links Brain Structures Active during Magnetic Compass Orientation in Migratory Birds". In Iwaniuk, Andrew. PLoS ONE 2 (9): e937. doi:10.1371/journal.pone.0000937. PMC 1976598. PMID 17895978. Retrieved 2007-09-27. - Schematic diagram of retina of right eye, loosely based on Sturkie (1998) 6 - Bawa S.R. and YashRoy R.C. Vulture retina enzyme distribution and function. Neurobiology, vol. 2, 162-168. https://www.researchgate.net/publication/232724329_Vulture_retina_enzyme_distribution_and_function?ev=prf_pub - Burton (1985) 44–48 - Hecht, Selig; Pirenne, MH (1940). "The sensibility of the nocturnal Long-Eared Owl in the spectrum" (Automatic PDF download). Journal of General Physiology 23 (6): 709–717. doi:10.1085/jgp.23.6.709. PMC 2237955. PMID 19873186. - Cleere, Nigel; Nurney, David (1998). 
Nightjars: A Guide to the Nightjars, Frogmouths, Potoos, Oilbird and Owlet-nightjars of the World. Pica / Christopher Helm. p. 7. ISBN 1-873403-48-8. OCLC 39882046. - Fullard, J. H.; Barclay, .; Thomas (1993). "Echolocation in free-flying Atiu Swiftlets (Aerodramus sawtelli)" (PDF). Biotropica 25 (3): 334–339. doi:10.2307/2388791. JSTOR 2388791. Retrieved 12 July 2008. - Konishi, M.; Knudsen, EI (April 1979). "The oilbird: hearing and echolocation". Science 204 (4391): 425–427. doi:10.1126/science.441731. PMID 441731. - Lythgoe, J. N. (1979). The Ecology of Vision. Oxford: Clarendon Press. pp. 180–183. ISBN 0-19-854529-0. OCLC 4804801. - Lotem A, Schechtman E & G Katzir (1991). "Capture of submerged prey by little egrets, Egretta garzetta garzetta: strike depth, strike angle and the problem of light refraction" (pdf). Anim. Behav. 42 (3): 341–346. doi:10.1016/S0003-3472(05)80033-8. - Katzir, Gadi; Lotem, Arnon; Intrator, Nathan (1989). "Stationary underwater prey missed by reef herons, Egretta gularis: head position and light refraction at the moment of strike". J. Comparative Physiology A 165: 573–576. doi:10.1007/BF00611243. - Hayes, Brian; Martin, Graham R.; Brooke, Michael de L. (1991). "Novel area serving binocular vision in the retinae of procellariiform seabirds". Brain, Behavior and Evolution 37 (2): 79–84. doi:10.1159/000114348. - Martin, Graham R.; Brooke, M. de L. (1991). "The Eye of a Procellariiform Seabird, the Manx Shearwater, Puffinus puffinus: Visual Fields and Optical Structure". Brain, Behaviour and Evolution 37 (2): 65–78. doi:10.1159/000114347. - Burton, Robert (1985). Bird Behaviour. London: Granada Publishing. ISBN 0-246-12440-7. - Sinclair, Sandra (1985). How Animals See: Other Visions of Our World. Beckenham, Kent: Croom Helm. ISBN 0-7099-3336-3. - Sturkie, P. D. (1998). Sturkie's Avian Physiology. 5th Edition. Academic Press, San Diego. ISBN 0-12-747605-9. OCLC 162128712 191850007 43947653. - Ziegler, Harris Philip; Bischof, Hans-Joachim (eds) (1993). Vision, Brain, and Behavior in Birds: A comparative review. MIT Press. ISBN 0-262-24036-X. OCLC 27727176. - Dr. Robert G. Cook, ed. (2001). Avian Visual Cognition (cyberbook). Tufts University; in cooperation with Comparative Cognition Press.
A critical mass is the smallest amount of fissile material needed for a sustained nuclear chain reaction. The critical mass of a fissionable material depends upon its nuclear properties (specifically, the nuclear fission cross-section), its density, its shape, its enrichment, its purity, its temperature, and its surroundings. The concept is important in nuclear weapon design. Explanation of criticality When a nuclear chain reaction in a mass of fissile material is self-sustaining, the mass is said to be in a critical state, in which there is no increase or decrease in power, temperature, or neutron population. A numerical measure of criticality is the effective neutron multiplication factor k, the average number of neutrons released per fission event that go on to cause another fission event rather than being absorbed or leaving the material. When k = 1, the mass is critical, and the chain reaction is self-sustaining. A subcritical mass is a mass of fissile material that cannot sustain a fission chain reaction. A population of neutrons introduced into a subcritical assembly will decrease exponentially. In this case, k < 1. A steady rate of spontaneous fissions causes a proportionally steady level of neutron activity, and the constant of proportionality increases as k increases. A supercritical mass is one in which, once fission has started, it will proceed at an increasing rate. The material may settle into equilibrium (i.e. become critical again) at an elevated temperature or power level, or destroy itself. In the case of supercriticality, k > 1. Due to spontaneous fission, a supercritical mass will undergo a chain reaction. For example, a spherical critical mass of pure uranium-235 has a mass of about 52 kg and experiences around 15 spontaneous fission events per second. The probability that one such event will cause a chain reaction depends on how much the mass exceeds the critical mass. If uranium-238 is present, the rate of spontaneous fission will be much higher. Fission can also be initiated by neutrons produced by cosmic rays. Changing the point of criticality The mass at which criticality occurs may be changed by modifying certain attributes such as fuel, shape, temperature, density, and the installation of a neutron-reflective substance. These attributes have complex interactions and interdependencies. The following examples only outline the simplest ideal cases. Varying the amount of fuel It is possible for a fuel assembly to be critical at near zero power. If the perfect quantity of fuel were added to a slightly subcritical mass to create an "exactly critical mass", fission would be self-sustaining for only one neutron generation (fuel consumption then makes the assembly subcritical again). If the perfect quantity of fuel were added to a slightly subcritical mass to create a barely supercritical mass, the temperature of the assembly would increase to an initial maximum (for example, 1 K above the ambient temperature) and then decrease back to the ambient temperature after a period of time, because the fuel consumed during fission brings the assembly back to subcriticality once again. Changing the shape A mass may be exactly critical without being a perfect homogeneous sphere. More closely refining the shape toward a perfect sphere will make the mass supercritical.
Conversely, changing the shape to a less perfect sphere will decrease its reactivity and make it subcritical. Changing the temperature A mass may be exactly critical at a particular temperature. Fission and absorption cross-sections increase as the relative neutron velocity decreases. As the fuel temperature increases, the nuclei move faster, so neutrons of a given energy have a higher velocity relative to the nuclei, and thus fission/absorption is less likely. This is not unrelated to Doppler broadening of the 238U resonances, but is common to all fuels, absorbers, and configurations. Neglecting the very important resonances, the total neutron cross-section of every material exhibits an inverse relationship with relative neutron velocity. Hot fuel is always less reactive than cold fuel (over/under moderation in LWRs is a different topic). Thermal expansion associated with a temperature increase also contributes a negative coefficient of reactivity, since the fuel atoms move farther apart. A mass that is exactly critical at room temperature would be subcritical in an environment anywhere above room temperature, due to thermal expansion alone. Varying the density of the mass The higher the density, the lower the critical mass. The density of a material at a constant temperature can be changed by varying the pressure or tension, or by changing the crystal structure (see allotropes of plutonium). An ideal mass will become subcritical if allowed to expand or, conversely, supercritical if compressed. Changing the temperature may also change the density; however, the effect on critical mass is then complicated by temperature effects (see "Changing the temperature") and by whether the material expands or contracts with increased temperature. Assuming the material expands with temperature (enriched uranium-235 at room temperature, for example), a mass at an exactly critical state will become subcritical if warmed to a lower density and supercritical if cooled to a higher density. Such a material is said to have a negative temperature coefficient of reactivity, to indicate that its reactivity decreases when its temperature increases. Using such a material as fuel means fission decreases as the fuel temperature increases. Use of a neutron reflector Surrounding a spherical critical mass with a neutron reflector further reduces the mass needed for criticality. A common material for a neutron reflector is beryllium metal. This reduces the number of neutrons which escape the fissile material, resulting in increased reactivity. Use of a tamper In a bomb, a dense shell of material surrounding the fissile core will contain, via inertia, the expanding fissioning material. This increases the efficiency. A tamper also tends to act as a neutron reflector. Because a bomb relies on fast neutrons (not ones moderated by reflection with light elements, as in a reactor), the neutrons reflected by a tamper are slowed by their collisions with the tamper nuclei, and because it takes time for the reflected neutrons to return to the fissile core, they take rather longer to be absorbed by a fissile nucleus. But they do contribute to the reaction, and can decrease the critical mass by a factor of four. Also, if the tamper is (e.g. depleted) uranium, it can fission due to the high-energy neutrons generated by the primary explosion. This can greatly increase yield, especially if even more neutrons are generated by fusing hydrogen isotopes, in a so-called boosted configuration.
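As a toy illustration of the multiplication factor k described in the criticality explanation above (a minimal sketch of the bookkeeping, not a physical simulation):

```python
def neutron_population(k, n0=100, generations=5):
    """Track an average neutron population where each generation
    multiplies the previous one by the factor k."""
    population = [float(n0)]
    for _ in range(generations):
        population.append(population[-1] * k)
    return population

for k in (0.9, 1.0, 1.1):
    print(k, [round(n, 1) for n in neutron_population(k)])
# k = 0.9 -> 100, 90, 81, ...    subcritical: the population dies away
# k = 1.0 -> 100, 100, 100, ...  critical: self-sustaining
# k = 1.1 -> 100, 110, 121, ...  supercritical: exponential growth
```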
The critical size is the minimum size of a nuclear reactor core or nuclear weapon that can be made for a specific geometrical arrangement and material composition. The critical size must at least include enough fissionable material to reach critical mass. If the size of the reactor core is less than a certain minimum, too many fission neutrons escape through its surface and the chain reaction is not sustained. Critical mass of a bare sphere The shape with minimal critical mass and the smallest physical dimensions is a sphere. Bare-sphere critical masses at normal density of some actinides are listed in the following table. Most information on bare-sphere masses is considered classified, since it is critical to nuclear weapons design, but some documents have been declassified. The critical mass for lower-grade uranium depends strongly on the grade: with 20% 235U it is over 400 kg; with 15% 235U, it is well over 600 kg. The critical mass is inversely proportional to the square of the density: if the density is 1% more and the mass 2% less, then the volume is 3% less and the diameter 1% less. The probability per cm travelled for a neutron to hit a nucleus is proportional to the density, so 1% greater density means that the distance travelled before leaving the system is 1% less. This must be taken into consideration when attempting more precise estimates of critical masses of plutonium isotopes than the approximate values given above, because plutonium metal has a large number of different crystal phases which can have widely varying densities. Note that not all neutrons contribute to the chain reaction; some escape and others undergo radiative capture. Let q denote the probability that a given neutron induces fission in a nucleus. Let us consider only prompt neutrons, and let ν denote the number of prompt neutrons generated in a nuclear fission. For example, ν ≈ 2.5 for uranium-235. Then, criticality occurs when ν·q = 1. The dependence of this upon geometry, mass, and density appears through the factor q. Given a total interaction cross section σ (typically measured in barns), the mean free path of a prompt neutron is ℓ = 1/(nσ), where n is the nuclear number density. Most interactions are scattering events, so a given neutron obeys a random walk until it either escapes from the medium or causes a fission reaction. So long as other loss mechanisms are not significant, the radius of a spherical critical mass is rather roughly given by the product of the mean free path and the square root of one plus the number of scattering events per fission event (call this s), since the net distance travelled in a random walk is proportional to the square root of the number of steps: R_c ≈ ℓ·√(s + 1) = √(s + 1)/(nσ). Note again, however, that this is only a rough estimate. In terms of the total mass M, the nuclear mass m, the density ρ, and a fudge factor f which takes into account geometrical and other effects, criticality corresponds to 1 = f·σ·ρ^(2/3)·M^(1/3)/(m·√(s + 1)), which clearly recovers the aforementioned result that critical mass depends inversely on the square of the density. Alternatively, one may restate this more succinctly in terms of the areal density of mass, Σ: criticality corresponds to Σ = m·√(s + 1)/(f′·σ), where the factor f has been rewritten as f′ to account for the fact that the two values may differ depending upon geometrical effects and how one defines Σ. For example, for a bare solid sphere of 239Pu, criticality is at 320 kg/m², regardless of density, and for 235U at 550 kg/m².
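The random-walk estimate above can be turned into rough numbers. The sketch below uses ballpark fast-neutron cross sections for uranium-235; the specific values are assumptions chosen for illustration, not evaluated nuclear data:

```python
import math

SIGMA_TOTAL = 7.0e-28       # assumed total interaction cross section, m^2 (~7 barns)
SIGMA_FISSION = 1.2e-28     # assumed fission cross section, m^2 (~1.2 barns)
RHO = 18700.0               # density of uranium metal, kg/m^3
M_NUCLEUS = 235 * 1.66e-27  # mass of one U-235 nucleus, kg

n = RHO / M_NUCLEUS              # nuclear number density n, m^-3
mfp = 1.0 / (n * SIGMA_TOTAL)    # mean free path, l = 1/(n*sigma)
s = SIGMA_TOTAL / SIGMA_FISSION  # rough scattering events per fission event

r_c = mfp * math.sqrt(s + 1)                # R_c ~ l*sqrt(s + 1)
m_c = (4.0 / 3.0) * math.pi * r_c**3 * RHO  # mass of that critical sphere

print(f"mean free path ~ {mfp * 100:.1f} cm")  # ~3.0 cm
print(f"critical radius ~ {r_c * 100:.1f} cm")  # ~7.8 cm
print(f"critical mass ~ {m_c:.0f} kg")          # ~37 kg
# The right order of magnitude next to the quoted 52 kg, as expected for
# so rough an estimate. Doubling RHO would roughly quarter the critical
# mass, reproducing the inverse-square density dependence derived above.
```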
In any case, criticality then depends upon a typical neutron "seeing" an amount of nuclei around it such that the areal density of nuclei exceeds a certain threshold. This is applied in implosion-type nuclear weapons, where a spherical mass of fissile material that is substantially less than a critical mass is made supercritical by very rapidly increasing ρ (and thus Σ as well) (see below). Indeed, sophisticated nuclear weapons programs can make a functional device from less material than more primitive weapons programs require.

Aside from the math, there is a simple physical analog that helps explain this result. Consider diesel fumes belched from an exhaust pipe. Initially the fumes appear black, then gradually you are able to see through them without any trouble. This is not because the total scattering cross-section of all the soot particles has changed, but because the soot has dispersed. If we consider a transparent cube of length L on a side, filled with soot, then the optical depth of this medium is inversely proportional to the square of L, and therefore proportional to the areal density of soot particles: we can make it easier to see through the imaginary cube just by making the cube larger.

Several uncertainties contribute to the determination of a precise value for critical masses, including (1) detailed knowledge of fission cross-sections and (2) calculation of geometric effects. The latter problem provided significant motivation for the development of the Monte Carlo method in computational physics by Nicholas Metropolis and Stanislaw Ulam (a toy illustration of the approach appears after the reference list below). In fact, even for a homogeneous solid sphere, the exact calculation is by no means trivial. The calculation can also be performed by assuming a continuum approximation for the neutron transport, which reduces it to a diffusion problem. However, as the typical linear dimensions are not significantly larger than the mean free path, such an approximation is only marginally applicable.

Finally, note that for some idealized geometries, the critical mass might formally be infinite, and other parameters are used to describe criticality. For example, consider an infinite sheet of fissionable material. For any finite thickness, this corresponds to an infinite mass. However, criticality is only achieved once the thickness of this slab exceeds a critical value.

Criticality in nuclear weapon design

Until detonation is desired, a nuclear weapon must be kept subcritical. In the case of a uranium bomb, this can be achieved by keeping the fuel in a number of separate pieces, each below the critical size, either because they are too small or unfavorably shaped. To produce detonation, the pieces of uranium are brought together rapidly. In Little Boy, this was achieved by firing a piece of uranium (a 'doughnut') down a gun barrel onto another piece (a 'spike'). This design is referred to as a gun-type fission weapon.

A theoretical 100% pure 239Pu weapon could also be constructed as a gun-type weapon, like the Manhattan Project's proposed Thin Man design. In reality, this is impractical because even "weapons grade" 239Pu is contaminated with a small amount of 240Pu, which has a strong propensity toward spontaneous fission. Because of this, a reasonably sized gun-type weapon would suffer a premature nuclear reaction (predetonation) before the masses of plutonium were in a position for a full-fledged explosion to occur. Instead, the plutonium is present as a subcritical sphere (or other shape), which may or may not be hollow.
Detonation is produced by exploding a shaped charge surrounding the sphere, increasing the density (and collapsing the cavity, if present) to produce a prompt-critical configuration. This is known as an implosion-type weapon.

The event of fission must release, on average, more than one free neutron of the desired energy level in order to sustain a chain reaction, and each must find other nuclei and cause them to fission. Most of the neutrons released from a fission event come immediately from that event, but a fraction of them come later, when the fission products decay, which may be on average from microseconds to minutes later. This is fortunate for atomic power generation, for without this delay "going critical" would always be an immediately catastrophic event, as it is in a nuclear bomb, where upwards of 80 generations of chain reaction occur in less than a microsecond, far too fast for a human, or even a machine, to react. Physicists recognize two significant points in the gradual increase of neutron flux: critical, where the chain reaction becomes self-sustaining thanks to the contributions of both kinds of neutron generation, and prompt critical, where the immediate "prompt" neutrons alone will sustain the reaction without need for the decay neutrons. Nuclear power plants operate between these two points of reactivity, while the region above the prompt-critical point is the domain of nuclear weapons and some nuclear power accidents, such as the Chernobyl disaster.

- Serber, Robert, The Los Alamos Primer: The First Lectures on How to Build an Atomic Bomb (University of California Press, 1992), ISBN 0-520-07576-5. Original 1943 "LA-1", declassified in 1965, plus commentary and historical introduction.
- Reevaluated Critical Specifications of Some Los Alamos Fast-Neutron Systems.
- Nuclear Weapons Design & Materials, The Nuclear Threat Initiative website.
- Final Report, Evaluation of Nuclear Criticality Safety Data and Limits for Actinides in Transport, Republic of France, Institut de Radioprotection et de Sûreté Nucléaire, Département de Prévention et d'étude des Accidents.
- Chapter 5, Troubles Tomorrow? Separated Neptunium 237 and Americium, Challenges of Fissile Material Control (1999), isis-online.org.
- http://www.lanl.gov/news/index.php?fuseaction=home.story&story_id=1348
- Updated Critical Mass Estimates for Plutonium-238, U.S. Department of Energy: Office of Scientific & Technical Information.
- Amory B. Lovins, Nuclear weapons and power-reactor plutonium, Nature, Vol. 283, No. 5750, pp. 817–823, February 28, 1980.
- Dias et al., http://typhoon.tokai-sc.jaea.go.jp/icnc2003/Proceeding/paper/6.5_022.pdf
- Hiroshi Okuno and Hiromitsu Kawasaki, Technical Report, Critical and Subcritical Mass Calculations for Curium-243 to -247, Japan National Institute of Informatics, reprinted from Journal of Nuclear Science and Technology, Vol. 39, No. 10, pp. 1072–1085 (October 2002).
- Institut de Radioprotection et de Sûreté Nucléaire, Evaluation of Nuclear Criticality Safety Data and Limits for Actinides in Transport, p. 16.
- Carey Sublette, Nuclear Weapons Frequently Asked Questions: Section 6.0 Nuclear Materials, February 20, 1999.
- Rhodes, Richard (1995). Dark Sun: The Making of the Hydrogen Bomb. Describes in detail the long waits for those tardy delayed neutrons in the startup of the Soviet equivalent of CP-1, the 1942 University of Chicago pile.
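As promised above, here is a toy illustration of the Monte Carlo approach that the geometric-effects problem motivated. The sketch below is a deliberately minimal transport model, not any historical code: neutrons start at random points in a bare sphere, take exponentially distributed free paths, scatter isotropically, and are scored as causing a fission with a fixed probability per collision. All numbers (an 8.5 cm sphere, a 3 cm mean free path, one fission per six collisions, ν = 2.5) are illustrative assumptions; radiative capture and the energy dependence of the cross-sections are ignored.

```python
import math, random

def fission_probability(radius, mfp, p_fission, trials=20_000):
    """Monte Carlo estimate of q, the chance that a neutron born at a
    random point in a bare sphere causes a fission before escaping.
    Exponential free paths, isotropic scattering, no radiative capture:
    a deliberately minimal sketch of the method, not a transport code."""
    hits = 0
    for _ in range(trials):
        # Uniform random starting point inside the sphere (rejection sampling).
        while True:
            x, y, z = (random.uniform(-radius, radius) for _ in range(3))
            if x * x + y * y + z * z <= radius * radius:
                break
        while True:
            # Isotropic direction and an exponentially distributed free path.
            costh = random.uniform(-1.0, 1.0)
            sinth = math.sqrt(1.0 - costh * costh)
            phi = random.uniform(0.0, 2.0 * math.pi)
            step = random.expovariate(1.0 / mfp)
            x += step * sinth * math.cos(phi)
            y += step * sinth * math.sin(phi)
            z += step * costh
            if x * x + y * y + z * z > radius * radius:
                break                     # escaped through the surface
            if random.random() < p_fission:
                hits += 1                 # this collision caused a fission
                break
            # Otherwise the collision was a scatter; keep walking.
    return hits / trials

# Assumed illustrative numbers; criticality corresponds to nu*q >= 1.
q = fission_probability(radius=0.085, mfp=0.03, p_fission=1 / 6)
print(f"q = {q:.3f}, nu*q = {2.5 * q:.2f}")
```

Sweeping the radius in such a model and finding where ν·q crosses 1 is, in caricature, exactly the kind of calculation Metropolis and Ulam mechanized.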
The ozone hole and global warming are not the same thing, and neither is the main cause of the other. The ozone hole is an area in the stratosphere above Antarctica where chlorine and bromine gases from human-produced chlorofluorocarbons (CFCs) and halons have destroyed ozone molecules. Global warming is the rise in average global surface temperature caused primarily by the build-up of human-produced greenhouse gases, mostly carbon dioxide and methane, which trap heat in the lower levels of the atmosphere.

There are some connections between the two phenomena. For example, the CFCs that destroy ozone are also potent greenhouse gases, though they are present in such small concentrations in the atmosphere (several hundred parts per trillion, compared to several hundred parts per million for carbon dioxide) that they are considered a minor player in greenhouse warming. CFCs account for about 13% of the total energy absorbed by human-produced greenhouse gases.

The ozone hole itself has a minor cooling effect (about 2 percent of the warming effect of greenhouse gases) because ozone in the stratosphere absorbs heat radiated to space by gases in a lower layer of Earth's atmosphere (the upper troposphere). The loss of ozone means slightly more heat can escape into space from that region.

Global warming is also predicted to have a modest impact on the Antarctic ozone hole. The chlorine gases in the lower stratosphere interact with tiny cloud particles that form at extremely cold temperatures, below -80 degrees Celsius (-112 degrees Fahrenheit). While greenhouse gases absorb heat at relatively low altitudes and warm the surface, they actually cool the stratosphere. Near the South Pole, this cooling of the stratosphere results in an increase in polar stratospheric clouds, increasing the efficiency of chlorine release into reactive forms that can rapidly deplete ozone.

- Allen, Jeannie. (2004, February 10). Tango in the Atmosphere: Ozone and Climate Change. Earth Observatory. Accessed September 14, 2010.
- Baldwin, M.P., Dameris, M., & Shepherd, T.G. (2007, June 15). How will the stratosphere affect climate change? Science, 316(5831), 1576-1577.
- Intergovernmental Panel on Climate Change. (2007). Summary for Policymakers. In Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor, and H.L. Miller (eds.)]. Cambridge, United Kingdom, and New York, New York: Cambridge University Press.
- NASA. Ozone Hole Watch. Accessed September 14, 2010.

Science suggests that to mitigate the human contribution to global warming, we should reduce carbon dioxide and other greenhouse gas emissions. Because some additional warming is inevitable, even if we achieve significant greenhouse gas reductions quickly, we should make plans to adapt to coming climate change. If we are unable to control emissions and/or adapt to unavoidable changes quickly enough, a carefully selected geoengineering strategy could conceivably provide an emergency stopgap to slow global warming. As yet, however, several of the strategies being discussed are very risky and unproven. Controlling emissions is a large, complex, and potentially expensive problem that no single strategy will solve. On the other hand, the costs of uncontrolled global warming will probably also be significant.
Many economists have concluded that putting existing scientific and technological strategies into place and developing new ones may stimulate the economy, and would also generate significant near-term benefits in public health through air pollution reduction. The Carbon Mitigation Initiative, a university and industry partnership based at Princeton University, has identified strategies, based solely on existing technologies, that, used in combination over the next 50 years, would keep the amount of carbon dioxide in the atmosphere from more than doubling the pre-industrial level. (Many scientists believe doubled carbon dioxide levels would constitute dangerous interference with the climate.) These strategies are:

- Increase the energy efficiency of our cars, homes, and power plants while lowering our consumption by adjusting our thermostats and traveling fewer miles;
- Capture the carbon emitted by power plants and store it underground;
- Produce more energy from nuclear, natural gas, and renewable fuels: solar, wind, hydroelectric, and bio-fuels;
- Halt deforestation and soil degradation worldwide, while reforesting more areas.

Some of those strategies will have to be put into place by governments and industry, but individuals can also do a lot on their own. On average, each American emits 19 tons of carbon dioxide annually driving cars and heating homes, more than people in any other country. If Americans reduced their personal emissions by just 5 percent, total U.S. emissions would drop by 300 million tons. That reduction could be easily achieved by replacing appliances and light bulbs with more efficient ones, planning automobile trips more carefully, driving more fuel-efficient cars, taking fewer flights, and so on. By learning about global warming, by communicating with elected officials about the problem, and by making energy-conscious decisions, individuals will play a meaningful role in what must be a global effort to respond to global warming.

Adapting to Climate Change

Climate has been fluctuating throughout Earth's history, and recently, humans have become one of the factors contributing to climate change. Changes related to human activity are already being felt. Even if we were to stop greenhouse gas emissions today, additional climate change from emissions already in the atmosphere would be inevitable. For this reason, many governments and industries are beginning to adapt policies, disaster response plans, or infrastructure to prepare for anticipated changes. While some adaptations are difficult and expensive, many are relatively inexpensive and offer immediate benefits.

Adaptation strategies vary from region to region, depending on the greatest threat posed by climate change locally. For example, coastal regions facing rising sea levels and increased coastal erosion might eliminate incentives to develop high-risk coastlines and encourage a "living buffer" of sand dunes and forest between the ocean and infrastructure. New York City has already integrated climate change into the process it uses to plan future development, reducing the need for expensive retrofitting later. Local governments may adjust disaster response plans to accommodate changes in weather patterns. The city of Philadelphia recently implemented an emergency response plan to limit the health impact of increasingly frequent heat waves on its population. Philadelphia officials estimate that their heat response plan has already reduced heat-related deaths.
More extreme and expensive adaptations may become necessary in some regions. Thawing permafrost and increased storminess, wind, and coastal erosion are now putting at least 166 communities in Alaska at risk. The U.S. Army Corps of Engineers estimates that moving each community to a safer area will cost 30 to 50 million dollars per village. Six communities have already decided to relocate. For individuals, governments, and businesses, adapting to climate change requires understanding and accepting the risks of regional climate change, assessing the immediate and long-term costs and benefits of adaptation strategies, and implementing the adaptations that bring the most benefit relative to the cost and risk.

Though risky and unproven, geoengineering could provide another near-term strategy for slowing global warming until carbon emissions can be reduced enough to prevent catastrophic climate change. In this context, geoengineering means deliberately altering the atmosphere, land, or ocean to counter the effects of global warming. Many geoengineering schemes have been proposed, but all can be reduced to two main strategies: reduce the amount of greenhouse gases in the atmosphere (increasing the amount of infrared radiation escaping to space), or reduce the amount of solar energy the Earth system absorbs. Two of the most common examples of these geoengineering strategies involve removing carbon from the atmosphere by adding fertilizer to selected regions of the ocean to increase phytoplankton growth, and reflecting more sunlight by injecting tiny, non-absorbing particles (aerosols) into the upper atmosphere (stratosphere).

While both of these geoengineering examples might counter global warming for a time, they could also have significant drawbacks. Increased fertilizer and/or phytoplankton growth could have unintended consequences for ocean ecosystems, including more ocean dead zones and toxic blooms. Adding aerosols to the upper atmosphere could modify the chemistry of the upper atmosphere, affecting ozone and thereby having possible unintended impacts on the lower atmosphere. Because the impact of geoengineering on the complex global climate system hasn't been extensively studied, any large-scale geoengineering strategy could have serious unexpected consequences. As a result, most scientists consider geoengineering only as a last-resort, emergency measure.

- America's Climate Choices. (2010, May). Adapting to the Impacts of Climate Change. National Research Council of the National Academies. Accessed July 16, 2010.
- Intergovernmental Panel on Climate Change. (2007). Summary for Policymakers. In Climate Change 2007: Mitigation of Climate Change. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom, and New York, NY, USA: Cambridge University Press.
- Pacala, S., & Socolow, R. (2004). Solving the Climate Problem for the Next 50 Years with Current Technologies. Science, 305(5686), 968-972.
- Parkinson, C. L. (2010). Coming Climate Crisis? Consider the Past, Beware the Big Fix. Lanham, Maryland: Rowman & Littlefield Publishers.
- Robock, A., Marquardt, A., Kravitz, B., & Stenchikov, G. (2009, October 2). Benefits, risks, and costs of stratospheric geoengineering. Geophysical Research Letters, 36, L19703.
- The Carbon Mitigation Initiative is a collaboration between Princeton University, BP, and Ford to find solutions to the global warming problem.
- The Energy Star website, published by the U.S.
Department of Energy and the U.S. Environmental Protection Agency, provides information for individuals and businesses on making energy-conscious choices.

The Intergovernmental Panel on Climate Change stated in its most recent report that global surface temperature at the end of this century will probably be between 1.8 and 4 degrees Celsius warmer than it was at the end of the last century. It's natural to question whether we and future generations will regret our efforts to reduce greenhouse gas emissions if it turns out global warming isn't as bad as predicted. But the best science we have to guide us at this time indicates that the chance that warming will be much larger than the best estimate is greater than the chance that it will be much smaller.

Climate scientists know that there is plenty they don't know about the way the Earth system works. Some of the physical processes that models describe are thoroughly well established: the melting point of ice, for example, and the law of gravity. Other physical processes are less perfectly known: when the air temperature is not far below 0 degrees Celsius, for example, will water vapor condense into liquid or ice? Either is possible, depending on atmospheric conditions.

To understand how uncertainty about the underlying physics of the climate system affects climate predictions, scientists have a common test: they have a model predict what the average surface temperature would be if carbon dioxide concentrations were to double pre-industrial levels. They run this simulation thousands of times, each time changing the starting assumptions of one or more processes. When they put all the predictions from these thousands of simulations onto a single graph, what they get is a picture of the most likely outcomes and the least likely outcomes.

The pattern that emerges from these types of tests is interesting. Few of the simulations result in less than 2 degrees of warming, near the low end of the IPCC estimates, but some result in significantly more than the 4 degrees at the high end of the IPCC estimates. This pattern (statisticians call it a "right-skewed distribution") suggests that if carbon dioxide concentrations double, the probability of very large increases in temperature is greater than the probability of very small increases (a small numerical illustration of this skew follows the references below). Our ability to predict the future climate is far from certain, but this type of research suggests that the question of whether global warming will turn out to be less severe than scientists think may be less relevant than whether it may be far worse.

- Intergovernmental Panel on Climate Change, Core Writing Team. (2007). Chapter 3: Climate change and its impacts in the near and long term under different scenarios. In Pachauri, R., & Reisinger, A. (Eds.), Climate Change 2007: Synthesis Report. Contribution of Working Groups I, II and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Geneva, Switzerland: IPCC.
- Ramanathan, V., & Xu, Y. (2010). The Copenhagen Accord for limiting global warming: Criteria, constraints, and available avenues. Proceedings of the National Academy of Sciences, 107(18), 8055.
- Realclimate.org. (2007, October 26). The certainty of uncertainty. Accessed June 21, 2010.
- Roe, G. H., & Baker, M. B. (2007). Why Is Climate Sensitivity So Unpredictable? Science, 318(5850), 629-632.
- Stainforth, D. A., Aina, T., Christensen, C., Collins, M., Faull, N., Frame, D. J., Kettleborough, J. A., et al. (2005). Uncertainty in predictions of the climate response to rising levels of greenhouse gases.
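As promised above, here is a small numerical illustration of why such distributions come out right-skewed, following the feedback argument of Roe and Baker (2007), cited just above. The specific numbers (a no-feedback warming of 1.2 degrees Celsius for doubled CO2, and a net feedback factor drawn from a normal distribution with mean 0.65 and standard deviation 0.13) are illustrative assumptions, not values taken from the references.

```python
import random

# A symmetric uncertainty in the feedback factor f maps through the
# amplification 1 / (1 - f) into an asymmetric, right-skewed spread
# of warming estimates. Numbers here are assumptions for illustration.
random.seed(1)
samples = sorted(
    1.2 / (1 - f)                                   # warming, deg C
    for f in (random.gauss(0.65, 0.13) for _ in range(100_000))
    if f < 1                                        # drop unphysical runaway cases
)
n = len(samples)
print(f"median warming ~ {samples[n // 2]:.1f} C")
print(f"5th-95th percentile: {samples[int(0.05 * n)]:.1f}"
      f" to {samples[int(0.95 * n)]:.1f} C")
# The tail above the median stretches much farther than the tail below
# it: a right-skewed distribution, matching the pattern described above.
```

With these assumed inputs the median lands near 3.5 degrees, the lower tail stays above roughly 2 degrees, and the upper tail reaches far past 4: few simulations much below the IPCC range, some far above it.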
NASA employs the world's largest concentration of climate scientists. NASA's mission to study Earth involves monitoring atmospheric conditions, global temperatures, land cover and vegetation, ice extent, ocean productivity, and a number of other planetary vital signs with a fleet of space-based sensors. This information is critical in understanding how Earth's climate works and how it is responding to change. In addition to collecting information about the Earth, NASA also builds global and regional climate models to understand the causes and effects of climate change, including global warming. NASA shares its climate data and information with the public and policy leaders freely and in a timely manner. As part of the U.S. Climate Change Science Program, NASA works with other agencies, including the National Oceanic and Atmospheric Administration, the U.S. Geological Survey, the Environmental Protection Agency, the Department of Energy, and many others, to conduct research and to ensure climate science results are available to all users to address a broad range of societal needs.
Give the colours of solution when chlorine, bromine and iodine are in water.
- Chlorine - virtually colourless
- Bromine - yellow/orange
- Iodine - brown

Give the colours of solution when chlorine, bromine and iodine are dissolved in hydrocarbon solvents (eg. hexane).
- Chlorine - virtually colourless
- Bromine - orange/red
- Iodine - pink/violet

What is the physical state (in standard conditions) and colour of chlorine, bromine and iodine?
- Chlorine is a green gas
- Bromine is a red-brown liquid
- Iodine is a grey solid
- (Also fluorine, the halogen before chlorine, is a pale yellow gas)

Explain the trend in reactivity down the halogen group.
- Reactivity decreases down the group (the halogens become less oxidising down the group).
- Because the atom becomes larger as you go down, the outer electrons are further from the nucleus and there is more shielding from inner electrons. This makes it harder for larger atoms to attract the electron needed.

Halogens undergo _______ with alkali solution. Give the products for cold and hot solutions.
- COLD - halate(I), halide and water
- HOT - halate(V), halide and water
- eg. bromine with sodium hydroxide:
- COLD - NaBrO + NaBr + H2O
- HOT - NaBrO3 + 5NaBr + 3H2O

When halogens react with metals, what does it do to the metal? Give examples of how chlorine, bromine and iodine react with hot iron.
- It oxidises metals (and gets reduced itself).
- As halogens become less oxidising down the group:
- Chlorine and bromine react with iron to form iron(III) halides.
- Iodine reacts with iron to form an iron(II) halide - iron is less oxidised.

Chlorine reacts with phosphorus (a non-metal) to produce what? In excess chlorine, what is produced?
- PCl3
- PCl5 (in excess chlorine)

When a halogen (except iodine, which is less oxidising) is added to Fe2+ ions (Fe(II)), what happens?
- Fe2+ is oxidised to Fe3+ ions in solution.
- The solution will change colour from green to orange.

The reducing power of halides ______ down the group. Why?
- Increases down the group.
- Halides reduce by losing an electron from the outer shell.
- Because the ions get bigger, the outer electrons are further away, and there is more shielding from inner electrons.

Describe the reaction of KCl with H2SO4. What is produced and why?
- HCl gas is produced - misty fumes.
- But HCl is not a strong enough reducing agent (remember, reducing power increases down the group), so it won't reduce sulfuric acid. The reaction stops there.
- It is not a redox reaction (oxidation states have not changed).
- Products - KHSO4 + HCl

Describe the reaction of KBr with H2SO4. What is produced and why?
- Br2 fumes - orange - are produced (along with choking fumes of SO2).
- Because HBr is a stronger reducing agent and can reduce H2SO4 in a redox reaction.
- Br2, SO2 and H2O are produced.

Describe the reaction of KI with sulfuric acid.
- Same initial reaction as KBr and KCl, giving HI gas. This gas then reduces H2SO4 (as with KBr).
- But HI further reduces the SO2 to H2S.
- H2S gas is toxic and smells of rotten eggs.
- 2HI + H2SO4 → I2 + SO2 + 2H2O
- 6HI + SO2 → H2S + 3I2 + 2H2O

Give features of hydrogen halides.
- Colourless gases
- Very soluble - dissolving in water to make strong acids (turn blue litmus red)
- React with ammonia gas to give white fumes (eg. NH4Cl)

In a displacement reaction of halide ions, what can displace what?
- Chlorine can displace bromide and iodide ions (it is the most oxidising).
- Br2 (orange solution) is formed, or I2 solution (brown solution) is formed.
- Bromine can displace iodide ions.
- Iodine cannot displace any of the above (it is a weak oxidising agent).
- eg. of an ionic equation - Cl2 + 2Br- → 2Cl- + Br2
- You can make changes easier to see by mixing with an organic solvent like hexane (the halogen will dissolve, but halides won't - 2 layers are formed).
- A halogen will displace a halide from solution if the halide is below it in the periodic table.

How can you test whether it's Cl-, Br- or I-? Results?
- Add dilute nitric acid (to remove ions that might interfere), then add silver nitrate solution. A precipitate will form.
- Chloride - white precipitate, dissolves in dilute NH3
- Bromide - cream precipitate, dissolves in concentrated NH3
- Iodide - yellow precipitate, insoluble in NH3

How do silver halides (eg. AgBr) react with sunlight?
- Silver halides decompose when light shines on them,
- producing silver and the halogen.
- eg. 2AgBr → 2Ag + Br2

Halide ions are?
- Reducing agents.
- More strongly reducing as you go down the group.
- eg. 2Fe3+ + 2I- → 2Fe2+ + I2
- (Iodide can reduce Fe3+, but chloride can't - halides are more reducing as you go down the group - you may need to predict this.)

What equipment would you use to measure out the known solution in a flask during preparation of a titration? And what would you use to add the unknown solution during the titration?
- Pipette - can only measure one volume of solution (therefore used in preparation).
- Burette - can measure different volumes, and lets you add solution drop by drop (therefore used during the titration).

What are the 2 main indicators used for acid/alkali titrations?
- Methyl orange - yellow to red when adding acid to alkali.
- Phenolphthalein - pink to colourless when adding acid to alkali.
- (Used because the colour change occurs very quickly over a very small pH range.)

How do you work out the concentration from titrations?
- Write out the balanced equation.
- Work out the moles for the known solution.
- Use the molar ratio to work out the moles of the unknown solution.
- Use volume and moles to work out the concentration.
- (Remember, concentration is in mol dm-3.)

What is a standard solution?
One whose concentration is known and does not change over time.

What do you need to do to convert mol dm-3 into g dm-3?
- Use the formula - moles = mass / molar mass.
- So, multiply the moles by the molar mass to get grams.
- eg. 0.36 mol dm-3 of NaOH will be 0.36 x 40 = 14.4 g dm-3

What are some uncertainties found when measuring substances during titrations? What is a good measure of uncertainty?
- A good measure of uncertainty is the maximum possible error (eg. the uncertainty from a burette that marks every 0.1 cm3 is a maximum error of 0.05 cm3).
- Uncertainty in weighing substances (eg. to the nearest 0.01 g - the real mass could be 0.005 g smaller or larger).
- Pieces of equipment measuring liquid, such as fixed-volume pipettes and volumetric flasks (manufacturers provide these uncertainty values).

Outline some methods to minimise some uncertainties.
- Buy the most precise equipment available (though this is not easily done).
- Check the accuracy of a pipette by transferring its contents to a weighed beaker and using the mass and density to work out the exact volume delivered.
- For any reading/measurement you can calculate percentage uncertainty using the equation - (uncertainty/reading) x 100.
- As this shows, the larger the volume being measured, the smaller the percentage uncertainty - so plan the titration with larger volumes.
- The same principle can be applied to other measurements, like weighing solids.

Outline the two types of errors.
- Systematic errors: are the same every time the experiment is repeated. May be caused by the set-up or equipment used. (eg. a
10 cm3 pipette might actually be measuring 9.95 cm3, leading to inaccuracies every time).
- Random errors: different with every repeat - sometimes above the real value, sometimes below (eg. random errors in reading the burette).
- Repeating the experiment will deal with random errors (high values cancel out low values) but not with systematic errors. Results get more reliable, but not more accurate.

How do you calculate the total uncertainty in the final result?
- Find the percentage uncertainty for each part of the experiment (mainly volume measurements).
- Add the individual percentage uncertainties together. This gives the percentage uncertainty in the final result.
- Use this to work out the actual total uncertainty in the final result (this will be in mol dm-3 if the final result is a concentration).

Suggest some causes of errors in a titration.
- Air bubbles in the burette
- Contaminated equipment
- Parallax when reading the meniscus line
- Impurities in solution (no substance will be 100% pure)
- Errors in transferring substances (some may be left in the container etc.)
- All equipment is calibrated to be used at 20°C - if the lab is warmer or colder, the burette may not be accurate.
- The balance/equipment will only show to a certain decimal place.
- Judging the end point of the titration

Judgement of the end point of an iodine-sodium thiosulfate titration is by?
Adding a few drops of 1% starch solution when the solution colour has become very pale. (The dark blue colour will suddenly go colourless at the end point.)

What can the iodine-sodium thiosulfate titration be used to calculate?
- The concentration of an oxidising agent (such as potassium iodate(V))
- The percentage purity of potassium iodate(V)

Outline how you could find out the concentration of an oxidising agent from an iodine-sodium thiosulfate titration. HOWEVER CHECK WITH TEACHER!
- React a certain volume of an unknown concentration of potassium iodate(V) (the oxidising agent) with excess potassium iodide solution. [Iodate(V) ions oxidise some of the iodide ions to iodine.]
- Titrate the resulting solution with sodium thiosulfate (known concentration). To "sharpen" the result, add starch solution near the end.
- Use the equation to work out the moles of iodine in the solution.
- Use the original balanced equation and the moles of iodine to find out the concentration of the potassium iodate(V) solution.

Outline what you need to find out in each step of the iodine/thiosulfate titration to find out the percentage purity of potassium iodate(V).
- Moles of sodium thiosulfate used.
- From this, the moles of iodine that reacted with it.
- From this, the moles of iodate(V) ions involved in producing this iodine.
- From this, the mass of potassium iodate used.
- Percentage purity = (mass of KIO3 calculated / mass of crude KIO3) x 100

Give the ionic equation for potassium iodate(V) reacting with acidified potassium iodide solution. And also the ionic equation for the product of this reaction reacting with thiosulfate.
- IO3-(aq) + 5I-(aq) + 6H+(aq) → 3I2(aq) + 3H2O(l)
- I2 + 2S2O32- → 2I- + S4O62-
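The two ionic equations above fix the mole ratios (1 mol of IO3- produces 3 mol of I2, and each mole of I2 consumes 2 mol of S2O3 2-), so the whole calculation outlined in these cards can be written out as a short script. The titre volume, concentrations and crude mass below are made-up practice figures, not data from the cards.

```python
# Worked example of the iodine-thiosulfate calculation outlined above,
# using the two ionic equations just given:
#   IO3- + 5I- + 6H+  ->  3I2 + 3H2O     (1 mol iodate -> 3 mol iodine)
#   I2 + 2S2O3(2-)    ->  2I- + S4O6(2-) (1 mol iodine -> 2 mol thio)
# All figures are made-up practice numbers.

thio_conc = 0.100                   # mol dm-3, known standard solution
thio_vol = 22.50 / 1000             # dm3 (titre from the burette)
iodate_vol = 25.0 / 1000            # dm3 (portion titrated)

mol_thio = thio_conc * thio_vol     # moles of S2O3(2-) used
mol_iodine = mol_thio / 2           # second equation: 2 thio per I2
mol_iodate = mol_iodine / 3         # first equation: 3 I2 per IO3-

conc_iodate = mol_iodate / iodate_vol
print(f"KIO3 concentration = {conc_iodate:.4f} mol dm-3")   # 0.0150

# Percentage purity, assuming (purely for illustration) that this
# 25.0 cm3 portion was made by dissolving 0.0900 g of crude KIO3:
mass_kio3 = mol_iodate * 214.0      # molar mass of KIO3 ~ 214 g mol-1
print(f"purity = {100 * mass_kio3 / 0.0900:.1f}%")          # ~89%
```

The same moles-then-ratio-then-concentration pattern applies to any of the acid/alkali titrations on the earlier cards; only the balanced equation and its mole ratio change.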
Mercury's rocky surface has yielded evidence that two distinct rock types may have originated from massive lava flows across the planet's surface. By analysing the chemical composition of rock features on Mercury, scientists at MIT have shown that about 4.5 billion years ago it may have harbored a massive magma ocean.

Scientists analysed data obtained by MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging), a NASA probe that has been orbiting Mercury since March 2011. In September 2011, another group of scientists analysed X-ray fluorescence data from the probe; it was this data that showed two distinct rock compositions on the planet's surface. Because Mercury takes the brunt of the Sun's rays, the rocks on its surface emit an intense fluorescent spectrum. Scientists can measure the surface rocks' fluorescent spectrum with X-ray spectrometers in order to find out the chemical composition of the surface materials. MESSENGER's onboard X-ray spectrometer measured the X-ray radiation generated by Mercury's surface as the craft orbited the planet. The MESSENGER science team was able to parse these energy spectra into peaks, each of which represents a different chemical element within the rocks. This was how the group identified two main rock types on Mercury's surface, which most resemble terrestrial volcanic rocks known as basaltic komatiites.

To discover what geological processes formed these distinctly different surface compositions, the team from MIT translated the chemical element ratios into the corresponding building blocks that make up rocks (eg magnesium oxide, silicon dioxide and aluminum oxide). They then recreated the rock types in the lab, using the compositional data. These synthetic rocks were melted in a furnace at a variety of temperatures to simulate different geologic processes. Once the samples were cooled, the researchers picked out tiny crystals and melt pockets for analysis. Initially the scientists looked for scenarios in which both rock types could be related; however, the two compositions were found to be so different that they could not have come from the same region. Only one phenomenon seemed to explain the two compositions: an ocean of magma that, as it solidified, formed different compositions of crystals over time. These crystals then remelted into magma that erupted onto Mercury's surface. This magma ocean likely existed within the first 1 million to 10 million years of Mercury's existence, more than 4 billion years ago. The processes of collision and accretion early in the formation of our solar system may have produced enough energy to completely melt the planet, which would make an early magma ocean very feasible.

This image of Mercury was produced using images from the colour base map imaging campaign during MESSENGER's primary mission. The colours do not display what Mercury would look like to the human eye, but they enhance the chemical, mineralogical, and physical differences between the rocks that make up Mercury's surface. Image Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington

Mercury speeds around the sun every 88 days, traveling through space at nearly 50 km (31 miles) per second, faster than any other planet. One Mercury solar day equals 175.97 Earth days. Mercury's elliptical orbit takes the small planet as close as 47 million km (29 million miles) and as far as 70 million km (43 million miles) from the sun.
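The 175.97-day solar day quoted above follows from Mercury's 3:2 spin-orbit resonance (noted in the timeline later in this article): the planet rotates once in about 58.65 days, two-thirds of its 88-day year. Here is a quick sketch of the standard synodic-day formula using those two round numbers as inputs; the exact figures are approximations, not values taken from the text.

```python
# Solar day on Mercury from the standard synodic relation for a
# prograde rotator: 1/solar_day = 1/rotation - 1/orbit.
# Inputs are approximate: rotation is 2/3 of the ~88-day orbit.
rot, orbit = 58.646, 87.969            # Earth days
solar_day = 1 / (1 / rot - 1 / orbit)
print(f"solar day ~ {solar_day:.1f} Earth days")   # ~176, matching 175.97
```

Because the planet creeps around the sun while it spins, the sun takes almost two full Mercury years to return to the same spot in Mercury's sky, which is why the solar day is so much longer than the rotation period.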
If one could stand on the scorching surface of Mercury when it is at its closest point to the sun, our star would appear more than three times as large as it does when viewed from Earth. Because Mercury is so close to the sun, it is hard to observe directly from Earth except during twilight. Mercury makes an appearance indirectly, however: 13 times each century, Earth observers can watch Mercury pass across the face of the sun, an event called a transit. The transits fall within several days of May 8 and November 10. The first two transits of Mercury in the 21st century occurred on 7 May 2003 and 8 November 2006. The next will occur on 9 May 2016.

Temperatures on Mercury's surface can reach 800 degrees Fahrenheit (427 degrees Celsius). Because Mercury's atmosphere is so thin, the surface cannot retain that heat, so nighttime temperatures can drop to -290 degrees Fahrenheit (-179 degrees Celsius). Mercury's thin atmosphere, or exosphere, is made up of atoms blasted off the surface by the solar wind and micrometeoroid impacts. Because of solar radiation pressure, the atoms quickly escape into space and form a tail of neutral particles. Though Mercury's magnetic field has just 1 percent the strength of Earth's, the field is very active. The magnetic field in the solar wind episodically connects to Mercury's field, creating intense magnetic tornadoes that funnel the fast, hot solar wind plasma down to the surface. When these ions strike the surface, they knock off neutral atoms and send them on a loop high into the sky, where other processes may fling them back to the surface or accelerate them away from Mercury.

Mercury's surface resembles that of Earth's Moon, scarred by many impact craters resulting from collisions with meteoroids and comets. While there are areas of smooth terrain, there are also lobe-shaped scarps or cliffs, some hundreds of miles long and soaring up to a mile high, formed by contraction of the crust. The Caloris Basin, one of the largest features on Mercury, is about 1,550 km (960 miles) in diameter. It was the result of an asteroid impact on the planet's surface early in the solar system's history. Over the next several billion years, Mercury shrank in radius about 1 to 2 km (0.6 to 1.2 miles) as the planet cooled after its formation. The outer crust contracted and grew strong enough to prevent magma from reaching the surface, ending the period of volcanic activity.

Mercury is the second densest planet after Earth, with a large metallic core having a radius of 1,800 to 1,900 km (1,100 to 1,200 miles), about 75 percent of the planet's radius. In 2007, researchers using ground-based radars to study the core found evidence that it is molten (liquid). Mercury's outer shell, comparable to Earth's outer shell (called the mantle), is only 500 to 600 km (300 to 400 miles) thick.

The first spacecraft to visit Mercury was Mariner 10, which imaged about 45 percent of the surface. In 1991, astronomers on Earth using radar observations showed that Mercury may have water ice at its north and south poles inside deep craters that are perpetually cold. Infalling comets or meteorites might have brought ice to these regions of Mercury, or water vapor might have outgassed from the interior and frozen out at the poles. In 2008 and 2009, NASA's MESSENGER mission performed two close flybys of Mercury. By the second flyby, the spacecraft had imaged about 80 percent of the surface at useful resolution and made discoveries about the magnetic field and how Mercury's crust was formed.
The flybys employed Mercury's gravity to help ease the spacecraft into orbit in March 2011. The spacecraft is studying and imaging Mercury from orbit and will map nearly the entire planet in color. MESSENGER is the first spacecraft to orbit Mercury.

How Mercury Got its Name

Mercury is appropriately named for the swiftest of the ancient Roman gods. Mercury, the god of commerce, is the Roman counterpart to the ancient Greek god Hermes, the messenger of the gods.

- 1631: Pierre Gassendi uses a telescope to watch from Earth as Mercury crosses the face of the sun.
- 1965: Though it was thought for centuries that the same side of Mercury always faces the sun, astronomers find the planet rotates three times for every two orbits.
- 1974-1975: Mariner 10 photographs roughly half of Mercury's surface in three flybys.
- 1991: Scientists using Earth-based radar find signs of ice locked in permanently shadowed areas of craters in Mercury's polar regions.
- 2008: MESSENGER's first flyby of Mercury initiates the most comprehensive study yet of the innermost planet. The three flybys revealed the side of the planet not seen by Mariner 10 and yielded many more images and discoveries.

It happens three times a year: Mercury goes retrograde and changes all our plans. The Thinking Planet, Mercury the messenger, goes retrograde (out of phase) three to four times a year for around 3 weeks each time. Mercury's movement appears to go backwards; this is called Mercury retrograde, and it can cause all sorts of mayhem! I, personally, have a 'love affair' with this planet… but I'll keep this post straightforward.. 😀

Mercury, known as the planet of intellect, can affect the way you think, communicate and relate. It's known as the celestial messenger of the gods. It has a strong effect on all forms of travel and communication as well as technical correspondence and equipment. This retrograde starts at 6 degrees Aries, from March 12 to April 5. We still have Mars retrograde, and Mars doesn't go direct until April 15, so we get a double retrograde. Mercury will zig-zag through the high-flying stars of Pegasus, including the infamous fixed star Scheat, then reverse over the tail of Cetus the sea-monster. This is quite a daring mission for Mercury, then. Mercury retrograde and Scheat both have travel chaos in common. Obviously the world cannot grind to a halt during this time, but it will be interesting to see if we get more than our fair share of crashes during the time they are conjunct. Mercury on Scheat brings accidents and trouble through writings.

2012 Retrograde Dates
March 12–April 10
July 15–August 8

Mercury is the planet most people seem concerned with when thinking of retrogrades; perhaps it is the one whose effects they most notice. Mercury rules communications, communication devices such as telephones and computers, transportation and our daily routine, so a retrograde phase can cause havoc in our otherwise orderly worlds. The effect of a Mercury retrograde can cause these things to malfunction or instructions to be misinterpreted, and this makes things go awry and makes people change their minds, cancel appointments and forget to turn up when they should. Those with some knowledge of astrology know that the effects of a Mercury retrograde occur a little before and after the dates given. The reason for this is that Mercury will retrograde and travel back to a certain degree before it stations again and goes forward.
It then has to make up ground until it reaches the original degree before it went retrograde.

Mercury Shadow: 27 Feb 2012 ~ 23 Pisces
Mercury Retrograde: 13 Mar 2012 ~ 6 Aries
Mercury Direct: 5 Apr 2012 ~ 23 Pisces
Mercury Shadow: 23 Apr 2012 ~ 6 Aries

Mercury Shadow: 22 Jun 2012 ~ 1 Leo
Mercury Retrograde: 16 Jul 2012 ~ 12 Leo
Mercury Direct: 9 Aug 2012 ~ 1 Leo
Mercury Shadow: 22 Aug 2012 ~ 12 Leo

Mercury Shadow: 19 Oct 2012 ~ 18 Scorpio
Mercury Retrograde: 7 Nov 2012 ~ 4 Sagittarius
Mercury Direct: 27 Nov 2012 ~ 18 Scorpio
Mercury Shadow: 15 Dec 2012 ~ 4 Sagittarius

A Man With a Message

Mercury comes from the Latin word merx, or mercator, which means merchant. Mercury is the name given by the ancient Romans to the Greek mythological god Hermes. Mercury is depicted as a male figure with winged sandals and a winged hat, indicating the ability to travel quickly. He was the official messenger of the ancient gods and goddesses and, as such, governed communications. In 1782, Mercury became the first symbol of the United States' postal service. Today, he is the icon of an international floral delivery service. In astrology, Mercury influences travel, literature, poetry, merchants, and thieves. He is cunning and witty at a moment's notice. But he is also recognized as a trickster and prone to misbehavior.

In Aries: Expect to be frustrated and frazzled. Assertive, impulsive Aries wants to move ahead, and all of the energy is going backwards. Watch what you say and how you say it. Pay attention to what people say to you; you might be pleasantly surprised.

In Taurus: Take time to formulate your thoughts. Taurus, an unhurried sign, slows down the mental processes. He also governs banking, so delay money matters. Review financial matters, and position yourself for growth.

In Gemini: Because Gemini rules communications, be prepared for miscommunications. Expect lots of phone calls or none, and lost or misplaced mail. You may not articulate clearly. At the same time, old friends may reconnect.

In Cancer: Expect annoyances at home with baking, gardening, and household duties under domesticated Cancer. Complete repair projects that weren't finished or done correctly.

In Leo: Avoid speculative investments. It is not a good time to buy or sell or do any trading. Instead, analyze your investment portfolio. Use your know-how and advisory skills to help friends and associates.

In Virgo: Challenging situations arise, especially in the workplace. Expect product delays and equipment breakdowns, as well as crankiness among coworkers under finicky, detailed Virgo. Double-check your work before you call it finished.

In Libra: Accept your physical attributes; do not have a makeover. Indecision reigns, so limit purchases, or risk returning them. Libra, representing beauty, grace, charm, and diplomacy, is out of balance. Refresh, relax, and rejuvenate.

In Scorpio: Emotions rule, not common sense, so beware. Avoid affairs of the heart. Passionate Scorpio is also secretive, and your secrets may seep out. Keep them in a diary.

In Sagittarius: It is not a time to travel, so reschedule or just expect delays, lines, and lost directions. Instead, take care of local affairs. Patience and a sense of humor are needed.

In Capricorn: Avoid buying, selling, or renting real estate under Capricorn, the sign that governs property matters. Expect problems with paperwork, packing, and movers. Reunite with family or vacation at home.

In Aquarius: With Mercury retrograde in Aquarius, the sign that governs relationships, friendships are put at risk.
Petty squabbles, misunderstandings, and miscommunications abound. Know who your friends are.

In Pisces: Foggy thinking, daydreams, and escapism are the norm; day-to-day realities confound otherwise clear heads when Mercury, the planet that rules logic, is in Pisces, which governs illusion. Practice creative pursuits: writing, dancing, photography, film, or painting.

- Aries New Moon – Break Through and Break Free (astroartisan.com)
Hundreds of planets have now been detected outside our solar system. So far, very little is known about the physical conditions on these fascinating objects. Kevin Heng from the Institute for Astronomy at ETH Zurich is attempting to close this gap with sophisticated model simulations.

In 1992, American scientists discovered the first planets outside our solar system, a scientific sensation at the time. Well over 500 objects have now been confirmed by astronomers as exoplanets. There are also several hundred other objects identified as possible exoplanets by the Kepler space telescope, according to a recent NASA statement. With this abundance of newly discovered planets, the question naturally arises as to the exact structure of these objects and the conditions prevailing on them. This is a field of research still in its infancy and shrouded in speculation, since the information that can be derived directly from measurements is severely limited. For example, astronomers can only indirectly infer whether a particular exoplanet has a solid core and how thick its atmosphere is.

The main problem when studying exoplanets is that they emit only very little light compared to the star around which they orbit. The low intensity of light makes it exceedingly difficult to carry out the spectral analyses that yield information about the chemical structure of the object. Nevertheless, a few things can now be said about these exoplanets, as Kevin Heng, Zwicky Fellow at the Institute for Astronomy at ETH Zurich, showed in an article published recently in the scientific journal Monthly Notices of the Royal Astronomical Society.

Based on observations up to now, Kevin Heng used simulation models to reconstruct the climatic conditions that might prevail on various exoplanets. For example, he calculated that violent winds with speeds of several kilometres per second are present on one of the biggest of the exoplanets discovered so far. This exoplanet studied by Heng orbits relatively close to its star. Experts call such objects hot Jupiters. Astronomers conclude by theoretical inference that the orientation of such planets relative to their star is fixed, i.e. the light from their star always illuminates them on the same side, similar to the way the same side of the Moon always faces towards the Earth. Thus, hot Jupiters are expected to have permanent day and night sides.

In the case of a hot Jupiter, one expects to measure the highest temperature when the illuminated face of the planet is fully visible. However, measurements by other groups show that the maximum temperature is shifted with respect to the point where starlight is at its most intense. Kevin Heng can now explain this surprising finding with his models: the shift occurs because strong winds in the exoplanet's atmosphere carry part of the heat from the day side to the night side. The exoplanet is basically trying to reduce the temperature difference between its day and night sides.

Solid concordance between the models

Kevin Heng bases his statements on three-dimensional model simulations, which he runs on the Brutus computing cluster at ETH Zurich. He explains: "We are in a difficult situation. We cannot use direct measurements to verify the calculations we make, nor do we know per se whether our modelling strategies are correct at all." To gain slightly more certainty, at least with regard to the models, he compared two models based on different methods of solution.
Both are able to replicate the conditions in the Earth's atmosphere correctly, and produce very similar results for the exoplanets that were studied. Kevin Heng explains: "The aim of this comparison was to create a standard against which other models can be validated." There is now also concrete evidence that Kevin Heng's calculations may be correct, at least in their order of magnitude: based on frequency shifts in the absorption lines associated with an exoplanet, another group of researchers has also concluded that winds with a speed of two kilometres per second exist on this planet. This represents a high level of concordance, considering how little tangible knowledge exists about exoplanets.

Heng K, et al. Atmospheric circulation of tidally-locked exoplanets: a suite of benchmark tests for dynamical solvers. Mon. Not. R. Astron. Soc., in press (2011). arxiv.org/abs/1010.1257
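For a sense of scale of the wind measurement just mentioned: a 2 km/s flow Doppler-shifts a spectral line by the fraction v/c. The sketch below assumes a line near 2300 nm purely for illustration; the article does not specify which absorption lines that research group actually used.

```python
# Size of the Doppler shift produced by a 2 km/s atmospheric wind.
# The 2300 nm line wavelength is an assumed, illustrative value.
c = 299_792.458          # speed of light, km/s
v_wind = 2.0             # wind speed, km/s
lam = 2300.0             # assumed line wavelength, nm
shift = lam * v_wind / c # delta_lambda = lambda * v / c
print(f"Doppler shift ~ {shift * 1000:.1f} pm on a {lam:.0f} nm line")
```

The answer is roughly 15 picometres, a shift of about one part in 150,000, which conveys why such detections demand very high spectral resolution.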
Genetic Hearing Loss in Children - Symptoms to Watch For

Fifty percent of hearing loss present at birth, also known as congenital hearing loss, is caused by a genetic defect. Genes are instructions inside the body's cells that tell it how to work and grow. They tell the body how to make specific proteins that affect everything from blood sugar to collagen production to hair and eye color. Scientists have identified eight different genes that are the most likely causes of genetic hearing loss in children. Genetic hearing loss is typically autosomal dominant, autosomal recessive or X-linked.

Autosomal dominant hearing loss occurs when one parent has the dominant gene for hearing loss and usually suffers from hearing loss themselves. This gene is then passed on to the child. If one parent has the hearing loss gene, there is at least a 50% chance the child will have it as well. If both parents, or both grandparents on one side, have the gene, there is an even greater chance the child will experience hearing loss. However, just because a person has a dominant gene for hearing loss does not mean they will experience hearing loss in their lifetime. That is why it is possible for two deaf parents to give birth to a child who can hear.

Autosomal recessive hearing loss occurs when both parents carry a recessive gene for hearing loss. Because the gene is recessive, neither parent will suffer from hearing loss, so they may not know they even have the gene. When this occurs, the child born has a 25% chance of experiencing genetic hearing loss. However, as long as neither parent nor any family member suffers from hearing loss, there is no expectation that the child will experience hearing loss.

In some cases the mother can carry the recessive gene for hearing loss on the sex chromosome. The X and Y chromosomes determine the sex of the child. Men carry both X and Y chromosomes while women carry two X chromosomes. The hearing loss gene is recessive and occurs on the X chromosome from the mother. If she passes this chromosome on to her son, he will experience hearing loss. If she passes it on to her daughter, the daughter will be a carrier for the trait but will not experience it herself. Only 3% of genetic hearing loss occurs because of a recessive gene on the X chromosome.

Of the 50% of hearing loss cases that are genetic, genetic syndromes cause 30%. These can range from Down syndrome, Usher syndrome, Treacher Collins syndrome and Crouzon syndrome to Alport syndrome.

Today, most hospitals perform a hearing test a few hours after a child is born, which has allowed for early diagnosis of and intervention for genetic hearing loss. Signs of hearing loss are speech delays, poor performance in school, frequent ear infections, responding inconsistently to sounds, talking in a very soft or very loud voice and continuously studying a person's face while they are talking. If any of these symptoms are present, parents should raise their concerns with their child's pediatrician. Many city health departments offer free screenings a few times a year for those who do not have a regular family doctor. The earlier a diagnosis is made, the more quickly parents can begin to help their children adapt to hearing loss. This can include teaching them sign language, enrolling them in special programs or joining support groups for themselves.
Living with hearing loss can be difficult, especially for young children; however, with the proper support and education from family, friends and teachers, these children can and do thrive and live full, productive lives.
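The 50% and 25% inheritance odds quoted in this article follow from simple Punnett-square counting. The sketch below enumerates the four equally likely parent-allele combinations for each case; the allele letters are arbitrary labels, and real inheritance is more complicated than this single-gene model.

```python
from itertools import product

# Punnett-square counting for the inheritance odds quoted above.
# Each parent passes one of two alleles, giving four equally likely
# child genotypes per cross.
def chance_affected(parent1, parent2, affected):
    combos = list(product(parent1, parent2))
    return sum(affected(a + b) for a, b in combos) / len(combos)

# Autosomal dominant: one parent Dd (D = dominant hearing-loss allele),
# the other dd; the child is affected if it inherits any D.
print(chance_affected("Dd", "dd", lambda g: "D" in g))   # 0.5

# Autosomal recessive: both parents are carriers Rr (r = recessive
# hearing-loss allele); the child is affected only as rr.
print(chance_affected("Rr", "Rr", lambda g: g == "rr"))  # 0.25
```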
Age of Realism
Keith Tankard, The Time Traveller
Updated: 14 December 2009

THE RISE OF PRUSSIA, 800 TO 1815

Germany was a powerful empire in 800, when Charlemagne took the crown, and again in 962 when Otto the Great became emperor. However, from a unitary state that came to be known as the Holy Roman Empire (962 A.D.), rivalry between the feudal warlords caused a constant fragmentation of the area. By the time of Martin Luther (1517), there were well over 300 states, some small (like Hesse-Cassel), some large (like Bavaria, Saxony and Brandenburg).

During the 16th century, at about the time of Luther, the German Emperor embarked upon a campaign to regain the lost states. Successive emperors would try to subdue the independent warlords in much the same way as the French kings would do in France. The German warlords would therefore grasp at any straw which might save them from such domination. Martin Luther provided such a straw. The German warlords seized on his rebellion against the Catholic Church to highlight their own struggle against the Emperor. Ultimately a series of so-called Religious Wars would be fought, culminating in the final destruction of the German Empire. The Treaty of Westphalia (1648) would guarantee that Germany remained fragmented for another 200 years.

Westphalia would also ensure that the German Empire was an entity only in name. The Holy Roman Empire would no longer exist in any practical terms. Instead, the Emperor would concentrate his attention on his home domain, namely Austria. Politically and economically, therefore, Germany was disunited. Tariff barriers netted both land and rivers. There was no monetary standard, the lower Rhenish circle alone having over 60 mints.

In the meantime, one of the larger states in north-east Germany, namely Prussia, would begin a period of rapid expansion. Originally the Mark of Brandenburg, a German buffer state against the Slavs, the territory was gradually added to. Great expansion occurred in 1618 when the Hohenzollern prince inherited East Prussia. Until the time of Frederick William, who inherited the throne during the 30 Years War, Brandenburg-Prussia was ravaged by friend and foe alike. As a result, the Elector decided to build up an adequate army to protect his territories. By the end of his reign, the army numbered some 27 000 soldiers and Brandenburg-Prussia was second only to Austria as the strongest power in Germany.

Frederick III (1688-1713) lacked both statesmanship and a talent for rigid economy and, as a result, Brandenburg-Prussia slowly regressed. In 1700, however, during the War of the Spanish Succession, he lent aid to Austria on condition that he be recognised as King. His wish was granted and he became known simply as King in Prussia. He quickly changed the title, however, to become known as Frederick I, King of Prussia.

Frederick William I (1713-1740), who succeeded him, was both eccentric and uncouth. Nevertheless, he worked with great energy and continued to centralize the Prussian state, encouraging commerce and industry. He kept strict control over the economy and increased the army to 83 000 men, which made it the fourth largest in Europe. The army was in fact not used during his lifetime but went into operation during the War of the Austrian Succession (1740-1748), in the reign of Frederick the Great. This war was quickly followed by the Seven Years War (1756-1763).
During this time Prussia found herself up against all her enemies in one great alliance: Austria, France, Russia, Sweden, and Saxony. Their express purpose was to dismember Prussia, to cut her down to size. It is to Prussia's credit that she withstood the multiple attack, and the war ended in a virtual stalemate. The Treaty of Hubertusburg restored Europe to the pre-war situation, which meant that Prussia remained virtually intact. Although Prussia had made no territorial advances during the war, she did make substantial political gains, most notably in terms of political and military prestige. The state was now recognised as one of the Big Five in Europe. During the 19th century, Prussia would be able to dispute with Austria for the leadership of Germany itself. By 1863 Prussia had become the dominant nation in central Europe. By 1871 she was the centre of a new German Empire. It is possible that Prussian militarism was founded on the example of its ancestral rulers, the Teutonic Knights. The territory of Prussia had been colonized during the High Middle Ages (about the 13th century) by Teutonic Knights who had been sent in by the German Emperor to put down unrest on the eastern borders of his Empire. As rulers of that territory, they established a reputation for ruthlessness. The Teutonic Knights, having conquered the traditional Prussian nobility, thereupon turned them into serfs (akin to slaves) and established themselves as a new, although alien, ruling class. Even though fairly small in number, they were not absorbed by the Prussian people but rather tended to absorb the Prussians into their military culture. The Prussian language was forbidden in speech between ruler and peasants, with the result that German slowly became the official language of the region. The quality of the land probably also went a long way to developing the military character of the people. Prussia was a barren land with a harsh climate. This caused the evolution of a dour people, austere and lacking in emotionalism and artistic awareness. The Renaissance, for instance, completely passed them by. With the passing of time, moreover, the ruling Teutonic class lost its ties with the rest of Germany so that, by the 15th century, they had only the German language in common with the rest of Germany. During the 30 Years War, Prussia was overrun by invading armies, so that it became necessary to build up a strong army for defensive purposes. The militaristic philosophy remained long after it was no longer necessary. It was in any case the day of small armies. Great distances to be crossed on foot meant that large and unwieldy armies made little sense and created large logistical problems. Armies were therefore fairly small, or at least operated in small units. It was therefore possible for a small state to have a powerful and efficient army. Prussia was unique in that it had far more troops in proportion to its total population than countries such as France, Austria and Russia. It also developed a philosophy, as well as a political and economic foundation, which was highly militaristic. Almost its entire state budget was earmarked for military expenses, even when there was no warfare. In the days of Frederick William, the Great Elector, all his personal taxation was devoted to the maintenance of the army, even though Prussia never went to war during his rule. He also lived a most frugal life, avoiding the lavish expenditure of the other European rulers, most notably Louis XIV.
Furthermore, the army consisted almost solely of the landed aristocracy, the Junkers. In fact, army and Junkers became practically synonymous, with no place in the military for any other group of people, such as the bourgeoisie. So much were the Junkers identified with the army that most members of Junker families wore military uniform as standard everyday dress. A sense of service to the king and state was seen as the supreme human virtue. It was this that forged national unity out of the amalgam of its population. Stress was therefore laid on duty, obedience, service and sacrifice. The Junkers gave absolute service to the king in return for absolute authority over the serfs. Serfdom in Prussia sank lower than almost anywhere else in Europe: the peasant had almost no rights and was severely oppressed. Furthermore, the class structure was virtually frozen. It was almost impossible for a serf to move up into the middle class. It was equally impossible for the small band of bourgeoisie to move into the aristocracy by buying landed property, which the Junkers, in any case, were forbidden to sell. There was also little by way of wealth in Prussia. Most of the bourgeoisie were government officials. The merchants made most of their money by supplying food, clothing and weapons to the army. The economy was therefore almost entirely military-centred, and the military was entirely Junker-centred.

THE 1848 REVOLUTIONS: TOWARDS A UNITED GERMANY

The years of Napoleonic rule in south Germany had seen re-organisation and domestic reform. These ideas permeated into Prussia as well but were not really accepted in the echelons of power, which remained autocratic. The result was an intellectual ferment which moved in various directions. Up until the 18th century the intellectuals were generally happy to accept the status quo. After the French Revolution, however, the new ideas emanating from France set the intellectuals off in a different direction. Two essential political philosophies took root: nationalism and democracy. Socialism would also begin to emerge in the cities to the west, particularly as the industrial revolution spread during the early 19th century. The chief problem, however, lay in the fact that the philosophies were seldom isolated. It was not the case that one person was a nationalist, another a democrat, another a socialist. All ideas were absorbed together, but in different proportions and dilutions. This was not a problem for a country like France (which was already a unit) or Italy (which could become a unit). The German Confederation was different. In the first place it contained states like Austria and Prussia which had substantial minority groups. It also contained two states (namely Austria and Prussia) which would almost certainly compete with each other if one united Germany were created. Metternich perceived this when he hosted the Congress of Vienna in 1815. He realised that it was crucially important to maintain a loose federation in Germany which neither encouraged nationalism nor allowed Prussia to compete against Austria. For the following two decades he also attempted to ensure that reformist policies were suppressed. In one sense the Congress of Vienna did Germany and Europe a disservice. Prussia, by a centuries-old tradition of the Teutonic Knighthood, looked eastward for expansion. The smaller German states to the west were not attractive.
Indeed, because they had been ruled by Napoleon, they had become centres for "modern" ideas like liberalism, socialism and democracy. These were the very "vices" which the Prussian aristocracy held as anathema. The Congress of Vienna, however, changed all that. In order to create a strong buffer against further French aggression, the Congress gave Prussia extensive new territories in western Germany. This forced the Prussians to look westward, and the desire began to emerge to consolidate Prussian territory across northern Germany. In essence it caused Prussia to desire expansion into Germany. Note the subtle difference between this desire and that of the German nationalists, who wished for a unification of the German people. Unification was a cultural and philosophical stance, while expansion of Prussia into Germany was a militaristic one. On the other hand, economic development became noticeable during the years between 1815 and 1850. In 1818 Prussia began removing the oppressive tariff barriers which divided the German provinces, a process that grew into a customs union, or Zollverein. By 1834 the whole of Germany, except Austria and some of the smaller states, had joined. The Zollverein, together with rapid industrialisation based on Germany's natural resources of coal and iron, gave the Prussian economy a strong boost and laid the foundations for later German unity. Moreover, the domination of Prussia over the Zollverein laid the foundation for Prussian political hegemony. It thereby also increasingly weakened Austria's hold on Germany. By 1852 economic resources, industrial development and the financial position in many of the German states were strong. Extensive railways made Germany capable of using its mineral resources and extending its industrialisation. Moreover, because of its advanced industrial strength based on the coal and iron reserves of its western territories (the Saar Basin especially), Prussia was capable of developing its railway system as well as manufacturing superior instruments of war. The Zollverein had therefore given Germany an economic unity which became the foundation of a political unity. There was a strong demand for further free trade, a unified system of coinage and a unified legal system. These demands, supported by the powerful Prussian economic structure, made Prussian nationalism a potent force by the 1860s. Although eventual unification was brought about by a combination of diplomacy and warfare, the achievement could never have been so rapid had it not been for the powerful Prussian economy. German unity, said the British economist Keynes, was built not on Bismarck's dictum of "blood and iron" but more likely on Prussia's "coal and iron". The accession of Frederick William IV to the Prussian throne in 1840 encouraged the liberal-minded to press for liberal reforms. They failed to realise, however, that although the king was willing to make some liberal reforms, he was not prepared to initiate a constitutional monarchy. He sought rather the recreation of the German or Holy Roman Empire, but one centred on Berlin rather than Vienna. The revolutionaries in their midst also failed to take note of Prussian militarism and the sheer strength of the Prussian army. When rioting broke out in Berlin in 1848 and the news of Metternich's flight from Austria caused the king to make constitutional concessions, the Prussian military leaders held firm in the rest of Prussia.
In September 1848, therefore, with the news of the success of Austria's crushing of the revolutions there, Frederick William's hand was strengthened and his army quickly put down the uprising in Berlin. In the rest of Germany the liberal leaders made use of the uncertainty in Prussia and Austria to set up their Vorparliament in Frankfurt. Its aim was to draft a constitution for the whole of Germany. It failed, however, because its representatives simply could not agree on objectives. Ultimate success or failure hinged on the question of German nationalism. There were those who wanted a united Greater Germany which would include Austria but exclude its non-German territories. There were those who wanted a Lesser Germany which would omit Austria altogether. Since both views contained major political implications, and since the Vorparliament was essentially without teeth, it all eventually broke down. Yet King Frederick William offered an alternative to the Vorparliament, namely the creation of a union of any German states which wished to join Prussia. Many states took advantage of the offer. A constitution was drawn up, elections were held and a Parliament was established, but at the critical moment the King bowed to opposition from Austria and abandoned the idea. At Olmutz in 1851, the old German Confederation was revived and Austria was again the dominant power in Germany. Most states in Germany looked to the powerful Austria in the south for leadership. Already, in 1848, the Germans had realised the difficulty of uniting Germany because of the presence of Austria. When Bismarck aimed at Prussian domination of northern Germany, his aim was the exclusion of Austria. Such a policy, however, could only mean war, because Austria would not willingly be excluded or allow herself to play second fiddle.

PRUSSIA'S 1852 CONSTITUTION

Although it seemed that the Olmutz agreement had put the clock back to 1815, this was not quite so. The Prussian King did not tear up his constitution but left the state with a parliamentary system which was seen as a first step to greater things in the future. Austria was not quite what it had been before, because Metternich had gone and his place was taken by Prince Schwarzenberg. The new Prussian constitution provided for monarchical sovereignty but with a legislature consisting of two houses. The Lower House was an elected assembly, based on universal adult male suffrage split along class lines. The male population was divided into three groups, with the upper group receiving four votes to the one of the lower group. Parliament was, however, a toothless dog. It could debate but not control. Nevertheless, it suited the Prussian liberals, who wished to see an evolutionary system of government, and not revolutionary chaos as had happened in France. They saw themselves as advisers to the King. In the future they would gain more, little by little. Parliament, however, was also deeply divided. There were 352 deputies. By 1862, when Otto von Bismarck became Chancellor, the government could only count on the support of about 11 conservatives. Then there were the Catholic Centre Party and the Polish deputies (totalling about 56), who tended to be neutral. To the left of these were the Liberals, numbering about 80% of the deputies. The Liberals themselves, however, were split. There were left-wing Liberals, centrist Liberals and right-wing Liberals. Even the Progressive Party, easily the largest German party at the time, was split down the middle.
In short, an astute politician like Bismarck could manipulate them for his own purposes. The Schwarzenberg system also made a difference. While Metternich had been intent on maintaining a weak Confederation which would not lead to a Prussian challenge for leadership, Schwarzenberg saw things differently. He wanted his government to challenge Prussia actively. This would quickly change Prussia's view of Austria. When Bismarck appeared on the scene as Prussia's delegate at the Confederal Diet in 1851, he was essentially a staunch supporter of the Old Order. He accepted that Austria would remain supreme in Germany. His attitude quickly changed, however, when he saw Schwarzenberg's antagonistic posture. Soon thereafter Bismarck took up the challenge and was advising his government on the need to expand into Germany.

OTTO VON BISMARCK: PRUSSIAN NATIONALIST

A few words about Bismarck. He was born in 1815 into a wealthy Junker family and was brought up in Berlin in contact with the court of the Hohenzollerns. In 1847 he was a member of the Prussian Diet at Berlin and showed himself a total reactionary, but also a Prussian nationalist. He served for a time as Prussian envoy to Russia in St Petersburg. While there, he won the friendship and respect of the Tsar, a situation he would later be able to utilise to good effect in his wars against Austria and France. He became Chancellor of Prussia in 1862. That Bismarck was a Junker was important, for he espoused Junker tradition and philosophy. He did not believe fundamentally in German unity but in an expanded Prussia which would ultimately dwarf Austria as the powerhouse of the region. That could not be achieved by means of a democratic and parliamentary decision but only by victories on the battlefield. He was at heart, therefore, a Prussian militarist. One may probably safely describe Bismarck as a ruthless politician and a believer in Realpolitik. It would not be safe, however, to claim that he had a plan for the unification of Germany which he put into action bit by bit. Such a claim comes from Bismarck himself, in his old age, when he was attempting to be remembered as the greatest statesman Germany had ever produced. What has become clearer with recent research is that Bismarck believed in the flow of history. History, according to Bismarck, is like a great river, with rapids and waterfalls. The state is like a boat floating on that river. As such, it must always flow with the drift of the water. It is, however, the task of the astute statesman to guide the state down the better and safer channels. What is important, said Bismarck, is NOT to attempt to change the course of history, which is impossible, but to GUIDE the state in that course. To be successful, the good statesman must therefore keep several options open to him at all times, several irons in the fire, so as to choose the best option only when it becomes obvious that it is the best. The Schleswig-Holstein question was such an option. It was not a case of planned war with Austria, as Bismarck later claimed; it was merely an opportunity for enhanced Prussian dominance in that region of northern Germany. There were several reasons for Bismarck desiring to go to war in alliance with Austria against Denmark. First, it would be a testing point for the strength of the Prussian army. Furthermore, Prussia could not hope to fight Denmark alone without the threat of Austrian interference.
To fight as an ally of Austria, on the other hand, would mean that when the war was over, the peace settlement could be used as another possible option to gain further hegemony in the region. Because Bismarck saw in the Schleswig-Holstein question a possibility for war with Austria, he set about isolating the latter state. He already had Russian support because of the moral backing Prussia had given to Russia during the Polish uprising of 1863. He gained the support of Italy in return for the promise of Venetia in the event of an Austrian defeat. He gained a promise of neutrality from Louis Napoleon by offering France territorial compensation in the event of a victory over Austria. The ensuing war lasted only seven weeks and Austria was defeated. Prussia had mobilised quickly, had superior armaments and had good railways with which to rush troops to the frontier. The alliance with Italy forced Austria to fight on two fronts. Austria, on the other hand, had been completely isolated and could look nowhere for support. Bismarck was nevertheless anxious that the war should not escalate into a major European confrontation, because of the upset to the balance of power. He was determined to bring the war to an end as quickly as possible and therefore could not allow the humiliation of Austria. Besides which, he did not want Austria to regard Germany as her permanent enemy. He also feared a future war with France, and needed Austria's neutrality in that. The Treaty of Prague, signed with Austria in August 1866, was therefore extremely lenient. He had, after all, gained what he had wanted: the expulsion of Austria from German affairs. Napoleon III had totally underestimated the power of the Prussian army. Instead of the war lasting for several years, as he had expected, it was over in only seven weeks. Napoleon realised his mistake only when it was too late. By failing to help Austria, he had succeeded in alienating both Austria and his own Catholic population, as well as helping to create a powerful and almost unified Germany on his own doorstep. He had to gain something from the war so as to restore some of his lost prestige. He therefore demanded his territorial compensation. Bismarck manipulated his demands into a cause of war, knowing that France would be totally isolated. The German forces proved far superior to the French, who were badly commanded and armed with inferior weaponry, and the war was quickly over. In September 1870 Napoleon III surrendered to the German army. In Paris a revolution proclaimed another republic (the Third Republic) and continued the war under siege until January 1871. As a consequence of the war, German unification was completed, since the South German states chose to become united with the north during the war. An empire was now proclaimed, with William I as Emperor (the Second Reich). Louis Napoleon, on the other hand, had the anguish of losing his empire in the midst of revolution and humiliation. David Thomson makes the point that the annexation of the French territories of Alsace and Lorraine was not part of Bismarck's original scheme for the unification of Germany. Bismarck was a nationalist, albeit a Prussian one. The majority of the population in these two provinces were French, and they would therefore be an embarrassing minority within the German Reich. He nevertheless succumbed to pressure from his military generals, who wanted the provinces for strategic purposes.
Acceptance of their advice was possibly the greatest error which Bismarck made in his distinguished career. The annexation ensured that the German government would have to support a policy of strict alliances if it wished to maintain French isolation. That would be fine in the short term for a diplomat of Bismarck's capacity. What, however, would happen if a lesser mortal were in control of the German state?
Are you stuck not knowing how to draw a linear equation without using a calculator? Luckily, drawing the graph of a linear equation is pretty simple once you know how. All you need to know is a couple of things about your equation and you're good to go. Let's get started.

1. Make sure the linear equation is in the form y = mx + b. This is called the y-intercept form, and it's probably the easiest form to use to graph linear equations. The values in the equation do not need to be whole numbers. Often you'll see an equation that looks like this: y = 1/4x + 5, where 1/4 is m and 5 is b.
- m is called the "slope," or sometimes "gradient." Slope is defined as rise over run, or the change in y over the change in x.
- b is defined as the "y-intercept." The y-intercept is the point at which the line crosses the Y-axis.
- x and y are both variables. You can solve for a specific value of x, for example, if you have a y point and know the m and b values. x, however, is never merely one value: its value changes as you go up or down the line.

2. Plot the b number on the Y-axis. Whatever number b is, find its equivalent on the Y-axis, and mark that spot on the vertical axis.
- For example, let's take the equation y = 1/4x + 5. Since the last number is b, we know that b equals 5. Go 5 points up on the Y-axis and mark the point. This is where your straight line will pass through the Y-axis.

3. Convert m into a fraction. Often, the number in front of x is already a fraction, so you won't have to convert it. But if it isn't, convert it by simply placing the value of m over 1.
- The first number (numerator) is the rise in rise over run. It's how far the line travels up, or vertically.
- The second number (denominator) is the run in rise over run. It's how far the line travels to the side, or horizontally.
- For example:
- A 4/1 slope travels 4 points up for every 1 point over.
- A -2/1 slope travels 2 points down for every 1 point over.
- A 1/5 slope travels 1 point up for every 5 points over.

4. Start extending the line from b using the slope, or rise over run. Start at your b value: we know that the line passes through this point. Extend the line by taking your slope and using its values to get new points on the line.
- For example, with the equation y = 1/4x + 5, for every 1 point the line rises up, it travels 4 to the right. That's because the slope of the line is 1/4. You extend the line indefinitely along both sides, continuing to use rise over run to plot further points.
- Whereas positive-value slopes travel upward, negative-value slopes travel downward. A slope of -1/4, for example, would travel down 1 point for every 4 points it travels side to side.

5. Continue extending the line, using a ruler and being sure to use the slope, m, as a guide. Extend the line indefinitely and you're done graphing your linear equation. Pretty easy, isn't it?
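The same steps translate directly into code. Below is a minimal sketch in Python, assuming numpy and matplotlib are available (neither is part of the original how-to); it plots the example line y = 1/4x + 5, with the variable names m and b mirroring the notation above.

```python
# Minimal sketch: graphing a linear equation y = m*x + b.
# Assumes numpy and matplotlib are installed; uses the example y = 1/4 x + 5.
import numpy as np
import matplotlib.pyplot as plt

m = 1 / 4  # slope: rise over run (up 1 for every 4 over)
b = 5      # y-intercept: where the line crosses the Y-axis

x = np.linspace(-10, 10, 100)  # a range of x values to draw over
y = m * x + b                  # compute y for each x

plt.plot(x, y, label="y = 1/4 x + 5")
plt.scatter([0], [b], color="red", zorder=3, label="y-intercept (0, 5)")
plt.axhline(0, color="gray", linewidth=0.5)  # draw the X-axis
plt.axvline(0, color="gray", linewidth=0.5)  # draw the Y-axis
plt.legend()
plt.show()
```

Checking two points by hand (at x = 0, y = 5; at x = 4, y = 6) confirms the rise-over-run reading of the slope: up 1 for every 4 over.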
History of Finland

The land area that now makes up Finland was settled immediately after the last ice age, which ended in 9000 BC. Most of the region was a part of the Kingdom of Sweden from the 13th century to 1809, when the vast majority of the Finnish-speaking areas of Sweden were ceded to the Russian Empire (excluding the Finnish-speaking areas of modern-day Northern Sweden), making this area the autonomous Grand Duchy of Finland. The Lutheran religion dominated. Finnish nationalism emerged, focused on Finnish cultural traditions, including music and—especially—the highly distinctive language and lyrics associated with it. The catastrophic Finnish famine of 1866–1868 was followed by eased economic regulations and extensive emigration. In 1917, Finland declared independence. A civil war between the Finnish Red Guards and the White Guard ensued a few months later, with the "Whites" gaining the upper hand during the springtime of 1918. After internal affairs stabilized, the still mainly agrarian economy grew relatively fast. Relations with the West, especially Sweden and Britain, were strong, but tensions remained with the Soviet Union. During the Second World War, Finland fought twice against the Soviet Union and defended its independence, though in the 1947 peace settlement it ended up ceding a large part of Karelia and some other areas to the Soviet Union. However, Finland remained an independent democracy in Northern Europe. In the latter half of its independent history, Finland has maintained a mixed economy. Since its post-World War II economic boom in the 1970s, Finland's GDP per capita has been among the world's highest. The expansion of the welfare state between 1970 and 1990 increased public-sector employment and spending and the tax burden imposed on the citizens. In 1992, Finland simultaneously faced economic overheating and depressed Western, Russian, and local markets. Finland joined the European Union in 1995, and replaced the Finnish markka with the euro in 2002. According to a 2005 poll, most Finns at that point were reluctant to join NATO.

Prehistory

If confirmed, the oldest archeological site in Finland would be the Wolf Cave in Kristinestad, in Ostrobothnia. Excavations are currently underway, and if the estimates presented so far hold true, the site would be the only pre-glacial (Neanderthal) site so far discovered in the Nordic countries; it is approximately 125,000 years old. The last ice age in the area of modern-day Finland ended c. 9000 BC. Starting about that time, people migrated to the area of Finland from the Kunda and—possibly—Swiderian cultures, and they are believed to be ancestors of today's Finnish and Sami people. The oldest confirmed evidence of post-glacial human settlement in Finland is from the area of Ristola in Lahti and from Orimattila, from c. 8900 BC. Finland has been continuously inhabited at least since the end of the last ice age, up to the present.
The earliest post-glacial inhabitants of the present-day area of Finland were probably mainly seasonal hunter-gatherers. Their discovered artifacts are known to represent the Suomusjärvi and Kunda cultures. Among the finds is the net of Antrea, the oldest fishing net ever known to have been excavated (calibrated carbon dating: c. 8300 BC). By 5300 BC, pottery was present in Finland. The earliest samples belong to the Comb Ceramic cultures, known for their distinctive decorating patterns. This marks the beginning of the Neolithic period for Finland, although subsistence was still based on hunting and fishing. Extensive networks of exchange existed across Finland and northeastern Europe during the 5th millennium BC. For example, flint from Scandinavia and the Valdai Hills, amber from Scandinavia and the Baltic region, and slate from Scandinavia and Lake Onega found their way into Finnish archaeological sites, while asbestos and soapstone from Finland (e.g. the area of Saimaa) were found in other regions. Rock paintings—apparently related to shamanistic and totemistic belief systems—have been found, especially in Eastern Finland, e.g. Astuvansalmi. Between 3500 and 2000 BC, monumental stone enclosures colloquially known as Giant's Churches (Finnish: Jätinkirkko) were constructed in the Ostrobothnia region. The purpose of the enclosures is unknown. From 3200 BC onwards, either immigrants or a strong cultural influence from south of the Gulf of Finland settled in southwestern Finland. This culture was a part of the European Battle Axe cultures, which have often been associated with the movement of Indo-European speakers. The Battle Axe, or Cord Ceramic, culture seems to have practiced agriculture and animal husbandry outside of Finland, but the earliest confirmed traces of agriculture in Finland date later, approximately to the 2nd millennium BC. Further inland, the societies retained their hunting-gathering lifestyles for the time being. The Battle Axe and Comb Ceramic cultures eventually merged, giving rise to the Kiukainen culture that existed between 2300 BC and 1500 BC, and was fundamentally a Comb Ceramic tradition with Cord Ceramic characteristics. The Bronze Age began some time after 1500 BC. The coastal regions of Finland were a part of the Nordic Bronze Culture, whereas in the inland regions the influences came from the bronze-using cultures of northern and eastern Russia. The Iron Age in Finland is considered to have lasted from c. 500 BC until c. 1300 AD, when official and written records of Finland become more common owing to the Swedish invasions as part of the Northern Crusades in the 13th century. As the Finnish Iron Age lasted almost two millennia, it is further divided into six sub-periods (the Viking period, missing from the source list, is supplied here from the standard periodization):
- Pre-Roman period: 500 BC – 1 BC
- Roman period: 1 AD – 400 AD
- Migration period: 400 AD – 575 AD
- Merovingian period: 575 AD – 800 AD
- Viking period: 800 AD – 1025 AD
- Crusade period: 1033 AD – 1300 AD
Very few written records of Finland or its people remain in any language of the era, and this is especially true of the Finnic languages, which were only phonetically transliterated at the time. Primary written sources are thus mostly of foreign origin, the most informative of which include Tacitus' description of the Fenni in his Germania, the sagas written down by Snorri Sturluson, and the 12th- and 13th-century ecclesiastical letters written for Finns.
Numerous other sources from the Roman period onwards contain brief mentions of ancient (and probably also mythological) Finnish kings and place names, as such defining Finland as a kingdom and noting the culture of its people. The oldest currently known Scandinavian documents mentioning a "land of the Finns" are two runestones: Söderby, Sweden, with the inscription finlont (U 582 †), and Gotland, with the inscription finlandi (G 319 M), dating from the 11th century. However, as the long continuum of the Finnish Iron Age into the historical medieval period of Europe suggests, the primary source of information about the era in Finland is archaeological findings, together with modern applications of natural scientific methods such as DNA analysis and computational linguistics. Production of iron during the Finnish Iron Age was adopted from the neighboring cultures in the east, west and south at about the same time as the first imported iron artifacts appear. This happened almost simultaneously in various parts of the country.

Pre-Roman period: 500 BC – 1 BC

The Pre-Roman period of the Finnish Iron Age is the scarcest in findings, but the known ones suggest that cultural connections to other Baltic cultures were already established; the findings of Pernaja and Savukoski provide a solid argument for this. Many of the era's dwelling sites are the same as those of the Neolithic. Most of the iron of the era was produced on site.

Roman period: 1 AD – 400 AD

The Roman period brought along an influx of imported iron (and other) artifacts, like Roman wine glasses and dippers as well as various coins of the Empire. During this period the (proto-)Finnish culture stabilized in the coastal regions, and larger graveyards became commonplace. The prosperity of the Finns rose to such a level that the vast majority of gold treasures found within Finland date back to this period.

Migration period: 400 AD – 575 AD

The Migration period saw the expansion of land cultivation inland, especially in Southern Bothnia, and the growing influence of Germanic cultures, both in artifacts like swords and other weapons and in burial customs. However, most iron, as well as its forging, was of domestic origin, probably from bog iron.

Merovingian period: 575 AD – 800 AD

The Merovingian period in Finland gave rise to a distinctive fine crafts culture of its own, visible in the original decorations of domestically produced weapons and jewelry. The finest luxury weapons were, however, imported from Western Europe. The very first Christian burials are from the latter part of this era as well. The Leväluhta burial findings suggest that the average height of a man was 158 cm and that of a woman was 147 cm. Recent findings suggest that Finnish trade connections became more active already during the 8th century, bringing an influx of silver onto Finnish markets. The opening of the eastern route to Constantinople via the archipelago of Finland's southern coastline brought Arabic and Byzantine artifacts into the excavation findings of the era. The earliest findings of imported iron blades and local ironworking appear in 500 BC. From about 50 AD, there are indications of a more intense long-distance exchange of goods in coastal Finland. Inhabitants exchanged their products, presumably mostly furs, for weapons and ornaments with the Balts and the Scandinavians as well as with the peoples along the traditional eastern trade routes.
The existence of richly furnished burials, usually with weapons, suggests that there was a chiefly elite in the southern and western parts of the country. Hillforts spread over most of southern Finland at the end of the Iron Age and in the early medieval age. There is no commonly accepted evidence of early state formations in Finland, and the presumably Iron Age origins of urbanization are contested.

Chronology of languages in Finland

The question of the timelines for the evolution and spreading of the current Finnish languages is controversial, and new theories challenging older ones have been introduced continuously. It is widely believed that Finno-Ugric (or Uralic) languages were first spoken in Finland and the adjacent areas during the Comb Ceramic period, around 4000 BC at the latest. During the 2nd millennium BCE these evolved—possibly under an Indo-European (most likely Baltic) influence—into Proto-Sami (inland) and Proto-Finnic (coastland). However, this theory has been increasingly contested among comparative linguists. It has been suggested instead that the Finno-Ugric languages arrived in the Gulf of Finland area much later, perhaps around 2000 BCE or later in the Bronze Age, as a result of an early Bronze Age Uralic language expansion possibly connected to the Seima-Turbino phenomenon. This would also imply that Finno-Ugric languages in Finland were preceded by a Northwestern Indo-European language, at least to the extent that the latter can be associated with the Cord Ceramic culture, as well as by hitherto unknown Paleo-European languages. The center of expansion for the Proto-Finnic language is posited to have been located on the southern coast of the Gulf of Finland. The Finnish language is thought to have started to differentiate during the Iron Age, starting from the earliest centuries of the Common Era. Cultural influences from a variety of places are visible in the Finnish archaeological finds from the very first settlements onwards. For example, archaeological finds from Finnish Lapland suggest the presence of the Komsa culture from Norway. The Sujala finds, which are equal in age to the earliest Komsa artifacts, may also suggest a connection to the Swiderian culture. Southwestern Finland belonged to the Nordic Bronze Age, which may be associated with Indo-European languages and, according to the Finnish Germanist Jorma Koivulehto, with speakers of the Proto-Germanic language in particular. Artifacts and place names found in Kalanti and the province of Satakunta, areas which have long been monolingually Finnish, have led several scholars to argue for the existence of a Proto-Germanic-speaking population component a little later, during the Early and Middle Iron Age. An Old Norse-speaking population settled parts of Finland's coastal areas in the 12th and 13th centuries. The Swedish language differentiated from the eastern Norse dialects by the 13th century. During the subsequent Swedish reign over Finland, the coastal areas in particular witnessed waves of settlement from Sweden. Contact between Sweden and what is now Finland was considerable even during pre-Christian times; the Vikings were known to the Finns owing to their participation in both commerce and plundering. There is possible evidence of Viking settlement on the Finnish mainland. The Åland Islands probably had Swedish settlement during the Viking period. However, some scholars claim that the archipelago was deserted during the 11th century.
Middle Ages

According to the archaeological finds, Christianity gained a foothold in Finland during the 11th century. According to the very few written documents that have survived, the church in Finland was still in its early development in the 12th century. Later medieval legends describe Swedish attempts to conquer and Christianize Finland sometime in the mid-1150s. In the early 13th century, Bishop Thomas became the first bishop of Finland. There were several secular powers who aimed to bring the Finns under their rule: Sweden, Denmark, the Republic of Novgorod in northwestern Russia, and probably the German crusading orders as well. Finns had their own chiefs, but most probably no central authority. Russian chronicles indicate there were conflicts between Novgorod and the Finnic tribes from the 11th or 12th century to the early 13th century. The name "Finland" originally signified only the southwestern province that has been known as "Finland Proper" since the 18th century. Österland (lit. Eastern Land) was the original name for the eastern part of the Swedish realm, but already in the 15th century "Finland" began to be used synonymously with Österland. The concept of a Finnish "country" in the modern sense developed slowly from the 15th to the 18th centuries. It was the Swedish regent, Birger Jarl, who allegedly established Swedish rule in Finland through the Second Swedish Crusade, most often dated to 1249. It has been suggested that the "crusade" was aimed at Tavastians who had stopped being Christian and returned to their old ethnic faith. Novgorod gained control in Karelia, the region inhabited by speakers of Eastern Finnish dialects. Sweden, however, gained control of Western Karelia with the Third Finnish Crusade in 1293. Western Karelians were from then on viewed as part of the western cultural sphere, while eastern Karelians turned culturally to Russia and Orthodoxy. While eastern Karelians remain linguistically and ethnically closely related to the Finns, they are considered a people of their own by most. Thus, with the Treaty of Nöteborg in 1323, the northern border between Catholic and Orthodox Christendom came to lie at the eastern border of what would become Finland. During the 13th century, Finland was integrated into medieval European civilization. The Dominican order arrived in Finland around 1249 and came to exercise great influence there. In the early 14th century, the first documents of Finnish students at the Sorbonne appear. In the southwestern part of the country, an urban settlement evolved in Turku. Turku was one of the biggest towns in the Kingdom of Sweden, and its population included German merchants and craftsmen. Otherwise the degree of urbanization was very low in medieval Finland. Southern Finland and the long coastal zone of the Bothnian Gulf had sparse farming settlements, organized as parishes and castellanies. In the other parts of the country lived a small population of Sami hunters, fishermen and small-scale farmers, who were exploited by the Finnish and Karelian tax collectors. During the 12th and 13th centuries, great numbers of Swedish settlers moved to the southern and northwestern coasts of Finland, to the Åland Islands, and to the archipelago between Turku and the Åland Islands. In these regions, the Swedish language is widely spoken even today. Swedish also came to be the language of the upper class in many other parts of Finland. During the 13th century, the bishopric of Turku was established.
The cathedral of Turku was the center of the cult of Saint Henry, and naturally the cultural center of the bishopric. The bishop had ecclesiastical authority over much of today's Finland and was usually the most powerful man there. Bishops were often Finns, whereas the commanders of the castles were more often Scandinavian or German noblemen. In 1362, representatives from Finland were called to participate in the election of the king of Sweden. As such, that year is often considered the year when Finland was incorporated into the Kingdom of Sweden. As in the Scandinavian part of the kingdom, the gentry or (lower) nobility consisted of magnates and yeomen who could afford armament for a man and a horse; these were concentrated in the southern part of Finland. The strong fortress of Viborg (Finnish: Viipuri, Russian: Vyborg) guarded the eastern border of Finland. Sweden and Novgorod had signed the Treaty of Nöteborg (Pähkinäsaari in Finnish) in 1323, but it would not last long. In 1348 the Swedish king Magnus Eriksson staged a failed crusade against the Orthodox "heretics," managing only to alienate his supporters and ultimately lose his crown. The bones of contention between Sweden and Novgorod were the northern coastline of the Bothnian Gulf and the wilderness regions of Savo in Eastern Finland. Novgorod considered these the hunting and fishing grounds of its Karelian subjects, and protested against the slow infiltration of Catholic settlers from the west. Occasional raids and clashes between Swedes and Novgorodians occurred during the late 14th and 15th centuries, but for most of the time an uneasy peace prevailed. During the 1380s, a civil war in the Scandinavian part of Sweden brought unrest to Finland as well. The victor of this struggle was Queen Margaret I of Denmark, who brought the three Scandinavian kingdoms of Sweden, Denmark and Norway under her rule (the "Kalmar Union") in 1389. The next 130 years or so were characterized by attempts of different Swedish factions to break out of the Union. Finland was sometimes involved in these struggles, but in general the 15th century seems to have been a relatively prosperous time, characterized by population growth and economic development. Towards the end of the 15th century, however, the situation on the eastern border became more tense. The Principality of Moscow conquered Novgorod, preparing the way for a unified Russia, and in 1495–1497 a war was fought between Sweden and Russia. The fortress-town of Viborg stood against a Russian siege; according to a contemporary legend, it was saved by a miracle.

16th century

In 1521 the Kalmar Union collapsed and Gustav Vasa became the King of Sweden. During his rule, the Swedish church was reformed (1527). The state administration underwent extensive reforms and development too, giving it a much stronger grip on the life of local communities—and the ability to collect higher taxes. Following the policies of the Reformation, in 1551 Mikael Agricola, bishop of Turku, published his translation of the New Testament into the Finnish language. King Gustav Vasa died in 1560 and his crown passed to his three sons in separate turns. King Erik XIV started an era of expansion when the Swedish crown took the city of Tallinn in Estonia under its protection in 1561. The Livonian War was the beginning of a warlike era which lasted for 160 years. In its first phase, Sweden fought for the lordship of Estonia and Latvia against Denmark, Poland and Russia.
The common people of Finland suffered because of drafts, high taxes, and abuse by military personnel. This resulted in the Cudgel War of 1596–1597, a desperate peasant rebellion, which was suppressed brutally and bloodily. A peace treaty with Russia (the Treaty of Teusina) in 1595 moved the border of Finland further to the east and north, very roughly to where the modern border lies. An important part of the 16th-century history of Finland was the growth of the area settled by the farming population. The crown encouraged farmers from the province of Savonia to settle the vast wilderness regions of Middle Finland. This was done, and the original Sami population often had to leave. Some of the settled wilderness was the traditional hunting and fishing territory of Karelian hunters. During the 1580s, this resulted in bloody guerrilla warfare between the Finnish settlers and Karelians in some regions, especially in Ostrobothnia.

17th century – the Swedish Empire

In 1611–1632 Sweden was ruled by King Gustavus Adolphus, whose military reforms transformed the Swedish army from a peasant militia into an efficient fighting machine, possibly the best in Europe. The conquest of Livonia was now completed, and some territories were taken from an internally divided Russia in the Treaty of Stolbova. In 1630, the Swedish (and Finnish) armies marched into Central Europe, as Sweden had decided to take part in the great struggle between Protestant and Catholic forces in Germany, known as the Thirty Years' War. The Finnish light cavalry was known as the Hakkapeliitat.
- In 1637–1640 and 1648–1654, Count Per Brahe served as governor-general of Finland. Many important reforms were made and many towns were founded. His period of administration is generally considered very beneficial to the development of Finland.
- In 1640, Finland's first university, the Academy of Åbo, was founded in Turku at the proposal of Count Per Brahe by Queen Christina of Sweden.
- In 1642, the whole Bible was published in Finnish.
However, the high taxation, continuing wars and the cold climate (the Little Ice Age) made the Imperial era of Sweden rather gloomy times for Finnish peasants. In 1655–1660, the Northern Wars were fought, taking Finnish soldiers to the battlefields of Livonia, Poland and Denmark. In 1676, the political system of Sweden was transformed into an absolute monarchy. In Middle and Eastern Finland, great amounts of tar were produced for export. European nations needed this material for the maintenance of their fleets. According to some theories, the spirit of early capitalism in the tar-producing province of Ostrobothnia may have been the reason for the witch-hunt wave that happened in this region during the late 17th century. The people were developing greater expectations and plans for the future, and when these were not realized, they were quick to blame witches—according to a belief system the Lutheran church had imported from Germany. The Empire had a colony in the New World in the modern-day Delaware-Pennsylvania area between 1638 and 1655. At least half of the immigrants were of Finnish origin. The 17th century was an era of very strict Lutheran orthodoxy. In 1608, the law of Moses was declared the law of the land, in addition to secular legislation. Every subject of the realm was required to confess the Lutheran faith, and church attendance was mandatory. Ecclesiastical penalties were widely used.
The rigorous requirements of orthodoxy were revealed in the dismissal of the Bishop of Turku, Johan Terserus, who wrote a catechism which was decreed heretical in 1664 by the theologians of the Academy of Åbo. On the other hand, the Lutheran requirement of individual study of the Bible prompted the first attempts at wide-scale education. The church required from each person a degree of literacy sufficient to read the basic texts of the Lutheran faith. Although the requirements could be fulfilled by learning the texts by heart, the skill of reading also became known among the population. In 1696–1699, a famine caused by exceptionally cold weather decimated Finland. A combination of an early frost, the freezing temperatures preventing grain from reaching Finnish ports, and a lackluster response from the Swedish government saw about one-third of the population die. Soon afterwards, another war determining Finland's fate began: the Great Northern War of 1700–21.

18th century – the Age of Enlightenment

The Great Northern War (1700–1721) was devastating, as Sweden and Russia fought for control of the Baltic. Harsh conditions—worsening poverty and repeated crop failures—among the peasants undermined support for the war, leading to Sweden's defeat. Finland was a battleground as both armies ravaged the countryside, leading to famine, epidemics, social disruption and the loss of nearly half the population. By 1721 only 250,000 remained. Landowners had to pay higher wages to keep their peasants. Russia was the winner, annexing the south-eastern part of the country, including the town of Viborg, under the Treaty of Nystad. The border with Russia came to lie roughly where it would return to after World War II. Sweden's status as a European great power was forfeited, and Russia was now the leading power in the North. The absolute monarchy ended in Sweden. During this Age of Liberty, the Parliament ruled the country, and the two parties of the Hats and Caps struggled for control, leaving the lesser Court party, i.e. parliamentarians with close connections to the royal court, with little to no influence. The Caps wanted a peaceful relationship with Russia and were supported by many Finns, while other Finns longed for revenge and supported the Hats. Finland by this time was depopulated, with a population in 1749 of 427,000. However, with peace the population grew rapidly, and doubled before 1800. Ninety percent of the population were typically classified as "peasants," most being free taxed yeomen. Society was divided into four Estates: peasants (free taxed yeomen), clergy, nobility and burghers. A minority, mostly cottagers, were estateless and had no political representation. Forty-five percent of the male population were enfranchised with full political representation in the legislature—although clerics, nobles and townsfolk had their own chambers in the parliament, boosting their political influence and excluding the peasantry on matters of foreign policy. The mid-18th century was a relatively good time, partly because life was now more peaceful. However, during the Lesser Wrath (1741–1742), Finland was again occupied by the Russians after the government, during a period of Hat party dominance, had made a botched attempt to reconquer the lost provinces. Instead, the result of the Treaty of Åbo was that the Russian border moved further to the west. During this time, Russian propaganda hinted at the possibility of creating a separate Finnish kingdom.
Both the ascending Russian Empire and pre-revolutionary France aspired to have Sweden as a client state. Parliamentarians and others with influence were susceptible to taking bribes, which they did their best to increase. The integrity and credibility of the political system waned, and in 1772 the young and charismatic king Gustav III staged a coup d'état, abolished parliamentarism and reinstated royal power in Sweden—more or less with the support of the parliament. In 1788, he started a new war against Russia. Despite a couple of victorious battles, the war was fruitless, managing only to bring disturbance to the economic life of Finland. The popularity of King Gustav III waned considerably. During the war, a group of officers made the famous Anjala declaration, demanding peace negotiations and the calling of the Riksdag (Parliament). An interesting sideline to this process was the conspiracy of some Finnish officers, who attempted to create an independent Finnish state with Russian support. After an initial shock, Gustav III crushed this opposition. In 1789, the new constitution of Sweden strengthened the royal power further, as well as improving the status of the peasantry. However, the continuing war had to be finished without conquests—and many Swedes now considered the king a tyrant. Apart from the interruption of the war (1788–1790), the last decades of the 18th century were an era of development in Finland. New things were changing even everyday life, such as the start of potato farming after the 1750s. New scientific and technical inventions were seen. The first hot air balloon in Finland (and in the whole Swedish kingdom) was made in Oulu (Uleåborg) in 1784, only a year after it was invented in France. Trade increased and the peasantry was growing more affluent and self-conscious. The Age of Enlightenment's climate of broadened debate in society on issues of politics, religion and morals would in due time highlight the problem that the overwhelming majority of Finns spoke only Finnish, but the cascade of newspapers, belles-lettres and political leaflets was almost exclusively in Swedish—when not in French. The two Russian occupations had been harsh and were not easily forgotten. These occupations were a seed of a feeling of separateness and otherness, which in a narrow circle of scholars and intellectuals at the university in Turku was forming into a sense of a separate Finnish identity representing the eastern part of the realm. The shining influence of the Russian imperial capital Saint Petersburg was also much stronger in southern Finland than in other parts of Sweden, and contacts across the new border dispersed the worst fears for the fate of the educated and trading classes under a Russian régime. At the turn of the 19th century, the Swedish-speaking educated classes of officers, clerics and civil servants were mentally well prepared for a shift of allegiance to the strong Russian Empire. King Gustav III was assassinated in 1792, and his son Gustav IV Adolf assumed the crown after a period of regency. The new king was not a particularly talented ruler; at least not talented enough to steer his kingdom through the dangerous era of the French Revolution and the Napoleonic wars. Meanwhile, the Finnish areas belonging to Russia after the peace treaties of 1721 and 1743 (not including Ingria), called "Old Finland," were initially governed with the old Swedish laws (a not uncommon practice in the expanding Russian Empire in the 18th century).
However, the rulers of Russia gradually granted large estates of land to their non-Finnish favorites, ignoring the traditional landownership and peasant freedom laws of Old Finland. There were even cases where the noblemen punished peasants corporally, for example by flogging. The overall situation caused a decline in the economy and morale of Old Finland, worsened after 1797, when the area was forced to send men to the Imperial Army. The construction of military installations in the area brought thousands of non-Finnish people to the region. In 1812, after the Russian conquest of Finland, "Old Finland" was rejoined to the rest of the country, but the landownership question remained a serious problem until the 1870s.

Peasants

While the king of Sweden sent in his governor to rule Finland, in day-to-day reality the villagers ran their own affairs using traditional local assemblies (called the ting), which selected a local "lagman," or lawman, to enforce the norms. The Swedes used the parish system to collect taxes. The socken (local parish) was at once a community religious organization and a judicial district that administered the king's law. The ting participated in the taxation process; taxes were collected by the bailiff, a royal appointee. In contrast to serfdom in Germany and Russia, the Finnish peasant was typically a freeholder who owned and controlled his small plot of land. There was no serfdom in which peasants were permanently attached to specific lands and ruled by the owners of that land. In Finland (and Sweden) the peasants formed one of the four estates and were represented in the parliament. Outside the political sphere, however, the peasants were considered at the bottom of the social order—just above vagabonds. The upper classes looked down on them as excessively prone to drunkenness and laziness, as clannish and untrustworthy, and especially as lacking honor and a sense of national spirit. This disdain changed dramatically in the 19th century, when everyone idealised the peasant as the true carrier of Finnishness and the national ethos, as opposed to the Swedish-speaking elites. The peasants were not passive; they were proud of their traditions and would band together and fight to uphold their traditional rights in the face of burdensome taxes from the king or new demands from the landowning nobility. The great "Club War" in the south in 1596–1597 attacked the nobles and their new system of state feudalism; this bloody revolt was similar to other contemporary peasant wars in Europe. In the north, there was less tension between nobles and peasants and more equality among peasants, owing to the practice of subdividing farms among heirs, to non-farm economic activities, and to the small numbers of nobility and gentry. Often the nobles and landowners were paternalistic and helpful. The Crown usually sided with the nobles, but after the "restitution" of the 1680s it ended the practice of the nobility extracting labor from the peasants and instead began a new tax system whereby royal bureaucrats collected taxes directly from the peasants, who disliked the efficient new system. After 1800, growing population pressure resulted in larger numbers of poor crofters and landless laborers and in the impoverishment of small farmers.
Historical population of Finland:
- 1150: 20,000–40,000
- 1550: 300,000
- 1750: 428,000
- 1770: 561,000
- 1790: 706,000
- 1810: 863,000
- 1830: 1,372,000
- 1850: 1,637,000
- 1870: 1,769,000
- 1890: 2,380,000
- 1910: 2,943,000
- 1930: 3,463,000
- 1950: 4,030,000
- 1970: 4,598,000
- 1990: 4,977,000
- 2010: 5,375,000
- 2015: 5,500,000
Russian Grand Duchy
During the Finnish War between Sweden and Russia, Finland was again conquered by the armies of Tsar Alexander I. The four Estates of occupied Finland were assembled at the Diet of Porvoo on March 29, 1809 to pledge allegiance to Alexander I of Russia. Following the Swedish defeat in the war and the signing of the Treaty of Fredrikshamn on September 17, 1809, Finland remained a Grand Duchy in the Russian Empire until the end of 1917, with the czar as Grand Duke. Russia assigned Karelia ("Old Finland") to the Grand Duchy in 1812. During the years of Russian rule the degree of autonomy varied. Periods of censorship and political persecution occurred, particularly in the last two decades of Russian control, but the Finnish peasantry remained free (unlike the Russian serfs) as the old Swedish law remained effective (including the relevant parts of Gustav III's Constitution of 1772). The old four-chamber Diet was reactivated in the 1860s, agreeing to supplementary new legislation concerning internal affairs. Before 1860 overseas merchant firms and the owners of landed estates had accumulated wealth that became available for industrial investments. After 1860 the government liberalized economic laws and began to build a suitable physical infrastructure of ports, railroads and telegraph lines. The domestic market was small, but rapid growth took place after 1860 in export industries drawing on forest resources and mobile rural laborers. Industrialization began in the mid-19th century, spreading from forestry into mining and machinery, and laid the foundation of Finland's present-day prosperity, even though agriculture employed a relatively large part of the population until the post–World War II era. The beginnings of industrialism took place in Helsinki. Alfred Kihlman (1825–1904) began as a Lutheran priest and director of the elite Helsingfors boys' school, the Swedish Normal Lyceum. He became a financier and a member of the Diet. There was little precedent in Finland in the 1850s for raising venture capital; Kihlman, who was well connected, enlisted businessmen and capitalists to invest in new enterprises. In 1869, he organized a limited partnership that supported two years of developmental activities that led to the founding of the Nokia company in 1871. After 1890 industrial productivity stagnated because entrepreneurs were unable to keep up with technological innovations made by competitors in Germany, Britain and the United States. However, Russification also opened up a large Russian market, especially for machinery. The Finnish national awakening in the mid-19th century was the result of members of the Swedish-speaking upper classes deliberately choosing to promote Finnish culture and language as a means of nation building, i.e. to establish a feeling of unity among all people in Finland, including (and not least) between the ruling elite and the ruled peasantry.
The publication in 1835 of the Finnish national epic, the Kalevala, a collection of traditional myths and legends drawn from the folklore of the Karelian people (the Finnic, Eastern Orthodox people who inhabit the Lake Ladoga region of eastern Finland and present-day northwestern Russia), stirred the nationalism that later led to Finland's independence from Russia. Particularly following Finland's incorporation into the Swedish central administration during the 16th and 17th centuries, Swedish was spoken by about 15% of the population, especially the upper and middle classes. Swedish was the language of administration, public institutions, education and cultural life; only the peasants spoke Finnish. The emergence of Finnish to predominance resulted from a 19th-century surge of Finnish nationalism, aided by Russian bureaucrats attempting to separate Finns from Sweden and to ensure the Finns' loyalty. In 1863, the Finnish language gained an official position in administration. In 1892 Finnish finally became an equal official language and gained a status comparable to that of Swedish. Nevertheless, the Swedish language continued to be the language of culture, arts and business all the way to the 1920s. Movements toward Finnish national pride, as well as liberalism in politics and economics, involved ethnic and class dimensions. The nationalist movement against Russia began with the Fennoman movement led by the Hegelian philosopher Johan Vilhelm Snellman in the 1830s. Snellman sought to apply philosophy to social action and moved the basis of Finnish nationalism to the establishment of the language in the schools, while remaining loyal to the czar. Fennomania became the Finnish Party in the 1860s. Liberalism was the central issue of the 1860s to 1880s. The language issue overlapped both liberalism and nationalism, and showed a class dimension as well, with the peasants pitted against the conservative Swedish-speaking landowners and nobles. To complicate matters, the Finnish activists divided into "Old Finns" (no compromise on the language question, and conservative nationalism) and "Young Finns" (liberation from Russia). The leading liberals were Swedish-speaking intellectuals who called for more democracy; they became the radical leaders after 1880. The liberals organized for social democracy, labor unions, farmer cooperatives, and women's rights. Nationalism was contested by the pro-Russian element and by the internationalism of the labor movement. The result was a tendency toward class conflict over nationalism, and in the early 1900s the working classes split into a Valpas wing (emphasizing class struggle) and a Mäkelin wing (emphasizing nationalism). While the vast majority of Finns were Lutheran, there were two strains of Lutheranism that eventually merged to form the modern Finnish church. On the one hand was the high-church emphasis on ritual, with its roots in traditional peasant collective society. On the other hand, Paavo Ruotsalainen (1777–1852) was a leader of the new pietism, with its subjectivity, revivalism, emphasis on personal morality, lay participation, and the social gospel. Pietism appealed to the emerging middle class. The Ecclesiastical Law of 1869 combined the two strains. Finland's political and Lutheran leaders considered both Eastern Orthodoxy and Roman Catholicism to be threats to the emerging nation. Orthodoxy was rejected as a weapon of Russification, while anti-Catholicism was long-standing. Anti-Semitism was also a factor, so the Dissenter Law of 1889 upgraded the status only of the minor Protestant sects.
Before 1790 music was found in Lutheran churches and in folk traditions. In 1790 music lovers founded the Åbo Musical Society; it gave the first major stimulus to serious music by Finnish composers. In the 1880s, new institutions, especially the Helsinki Music Institute (since 1939 called the Sibelius Academy), the Institute of Music of Helsinki University and the Helsinki Philharmonic Orchestra, integrated Finland into the mainstream of European music. By far the most influential composer was Jean Sibelius (1865–1957); he composed nearly all his music before 1930. In April 1892 Sibelius presented his new symphony 'Kullervo' in Helsinki. It featured poetry from the Kalevala, and was celebrated by critics as truly Finnish music. Upper- and upper-middle-class women took the lead in the deaconess movement in Finland. Coordinated by the Lutheran church, the women undertook local charitable social work to ameliorate the harsh living conditions faced by peasants adjusting to city life. They promoted nursing as a suitable profession for respectable women. Their efforts helped redefine the complex relationship between private charities and the traditional state and church responsibility for social welfare. Because they volunteered without pay and emphasized motherhood and nurturing as moral values for women, they contributed to the entrenchment of what in the 20th century became widespread gender roles. The policy of Russification of Finland (1899–1905 and 1908–1917, called sortokaudet/sortovuodet, "times/years of oppression", in Finnish) was the policy of the Russian czars designed to limit the special status of the Grand Duchy of Finland and more fully integrate it politically, militarily, and culturally into the empire. Finns were strongly opposed and fought back with passive resistance and a strengthening of Finnish cultural identity. Key provisions were, first, the February Manifesto of 1899, which asserted the imperial government's right to rule Finland without the consent of local legislative bodies; second, the Language Manifesto of 1900, which made Russian the language of administration of Finland; and third, the conscription law of 1901, which incorporated the Finnish army into the imperial army and sent conscripts away to Russian training camps. In 1906, as a result of the Russian Revolution of 1905 and the associated Finnish general strike, the old four-chamber Diet was replaced by a unicameral Parliament of Finland (the "Eduskunta"). For the first time in Europe, universal suffrage and eligibility for office were implemented, including for women: Finnish women were the first in Europe to gain full voting rights, and membership in an estate, land ownership, or inherited titles were no longer required. However, on the local level things were different: in municipal elections the number of votes was tied to the amount of tax paid, so rich people could cast several votes while the poor perhaps had none at all. The municipal voting system was changed to universal suffrage in 1917, when a left-wing majority was elected to Parliament. Emigration was especially important in 1890–1914, with many young men and some families headed to Finnish settlements in the United States, and also to Canada. They typically worked in lumbering and mining, and many were active in Marxist causes on the one hand, or the Finnish Evangelical Lutheran Church of America on the other. In the 21st century about 700,000 Americans and 110,000 Canadians claim Finnish ancestry.
Emigrants by decade:
- 1880s: 26,000
- 1890s: 59,000
- 1900s: 159,000
- 1910s: 67,000
- 1920s: 73,000
- 1930s: 3,000
- 1940s: 7,000
- 1950s: 32,000
By 2000 about 6% of the population, or some 300,000 people, spoke Swedish as their first language. However, since the late 20th century there has been a steady migration of older, better educated Swedish speakers to Sweden.
Independence and Civil War
In the aftermath of the February Revolution in Russia, Finland received a new Senate, a coalition cabinet with the same power distribution as the Finnish Parliament. Based on the general election of 1916, the Social Democrats had a small majority, and the Social Democrat Oskari Tokoi became prime minister. The new Senate was willing to cooperate with the Provisional Government of Russia, but no agreement was reached. Finland considered the personal union with Russia to be over after the dethroning of the Tsar—although the Finns had de facto recognized the Provisional Government as the Tsar's successor by accepting its authority to appoint a new Governor General and Senate. They expected the Tsar's authority to be transferred to Finland's Parliament, which the Provisional Government refused, suggesting instead that the question should be settled by the Russian Constituent Assembly. For the Finnish Social Democrats it seemed as though the bourgeoisie was an obstacle on Finland's road to independence as well as on the proletariat's road to power. The non-Socialists in Tokoi's Senate were, however, more confident. They, and most of the non-Socialists in the Parliament, rejected the Social Democrats' proposal on parliamentarism (the so-called "Power Act") as too far-reaching and provocative. The act restricted Russia's influence on domestic Finnish matters but did not touch the Russian government's power over matters of defence and foreign affairs. For the Russian Provisional Government, however, it was far too radical, exceeding the Parliament's authority, and so the Provisional Government dissolved the Parliament. The minority of the Parliament, and of the Senate, was content: new elections promised a chance to gain a majority, which they were convinced would improve the chances of reaching an understanding with Russia. The non-Socialists were also inclined to cooperate with the Russian Provisional Government because they feared the Social Democrats' power would grow, resulting in radical reforms such as equal suffrage in municipal elections or a land reform. The majority held the completely opposite opinion and did not accept the Provisional Government's right to dissolve the Parliament. The Social Democrats held on to the Power Act and opposed the promulgation of the decree dissolving the Parliament, whereas the non-Socialists voted for promulgating it. The disagreement over the Power Act led to the Social Democrats leaving the Senate. When the Parliament met again after the summer recess in August 1917, only the groups supporting the Power Act were present. Russian troops took possession of the chamber, the Parliament was dissolved, and new elections were held. The result was a (small) non-Socialist majority and a purely non-Socialist Senate. The suppression of the Power Act and the cooperation between Finnish non-Socialists and Russia provoked great bitterness among the Socialists and resulted in dozens of politically motivated attacks and murders. The October Revolution of 1917 turned Finnish politics upside down.
Now the new non-Socialist majority of the Parliament desired total independence, and the Socialists came gradually to view Soviet Russia as an example to follow. On November 15, 1917, the Bolsheviks declared a general right of self-determination "for the Peoples of Russia", including the right of complete secession. On the same day the Finnish Parliament issued a declaration by which it temporarily took power in Finland. Worried by developments in Russia and Finland, the non-Socialist Senate proposed that Parliament declare Finland's independence, which the Parliament approved on December 6, 1917. On December 18 (December 31 N.S.) the Soviet government issued a decree recognizing Finland's independence, and on December 22 (January 4, 1918 N.S.) the recognition was approved by the highest Soviet executive body (VTsIK). Germany and the Scandinavian countries followed without delay. Finland after 1917 was bitterly divided along social lines. The Whites consisted of the Swedish-speaking middle and upper classes and the farmers and peasantry who dominated the northern two-thirds of the land; they had a conservative outlook and rejected socialism. The Socialist-Communist Reds comprised the Finnish-speaking urban workers and the landless rural cottagers; they had a radical outlook and rejected capitalism. From January to May 1918, Finland experienced the brief but bitter Finnish Civil War. On one side were the White civil guards, who fought for the anti-Socialists; on the other were the Red Guards, consisting of workers and tenant farmers, who proclaimed a Finnish Socialist Workers' Republic. World War I was still underway, and the defeat of the Red Guards was achieved with support from Imperial Germany, while Sweden remained neutral and Russia withdrew its forces. The Reds lost the war, and the White peasantry rose to political leadership in the 1920s–1930s. About 37,000 men died, most of them in prison camps ravaged by influenza and other diseases.
Finland in the inter-war era
After the civil war the parliament, controlled by the Whites, voted to establish a constitutional monarchy to be called the Kingdom of Finland, with a German prince as king. However, Germany's defeat in November 1918 made the plan impossible, and Finland instead became a republic, with Kaarlo Juho Ståhlberg elected as its first President in 1919. Despite the bitter civil war and repeated threats from fascist movements, Finland became and remained a capitalist democracy under the rule of law. By contrast, nearby Estonia, in similar circumstances but without a civil war, started as a democracy and was turned into a dictatorship in 1934. Large-scale agrarian reform in the 1920s involved breaking up the large estates controlled by the old nobility and selling the land to ambitious peasants; the farmers became strong supporters of the government. The new republic faced a dispute over the Åland Islands, which were overwhelmingly Swedish-speaking and sought retrocession to Sweden. However, as Finland was not willing to cede the islands, they were offered autonomous status. The residents did not accept the offer, and the dispute over the islands was submitted to the League of Nations. The League decided that Finland should retain sovereignty over the Åland Islands but that they should be made an autonomous province. Thus Finland was under an obligation to ensure the residents of the Åland Islands a right to maintain the Swedish language, as well as their own culture and local traditions.
At the same time, an international treaty was concluded on the neutral status of Åland, under which it was prohibited to place military headquarters or forces on the islands. Alcohol abuse had a long history, especially regarding binge drinking and public intoxication, which became a crime in 1733. In the 19th century the punishments became stiffer and stiffer, but the problem persisted. A strong abstinence movement emerged that cut consumption in half from the 1880s to the 1910s and gave Finland the lowest drinking rate in Europe. Four attempts at instituting prohibition of alcohol during the Grand Duchy period were rejected by the czar; with the czar gone, Finland enacted prohibition in 1919. Smuggling emerged and enforcement was slipshod: criminal convictions for drunkenness went up by 500%, and violence and crime rates soared. Public opinion turned against the law, and a national plebiscite went 70% for repeal, so prohibition was ended in early 1932. Nationalist sentiment remaining from the Civil War developed into the proto-fascist Lapua Movement in 1929. Initially the movement gained widespread support among anti-Communist Finns, but following a failed coup attempt in 1932 it was banned and its leaders imprisoned.
Relations with the Soviet Union
In the wake of the Civil War there were many incidents along the border between Finland and Soviet Russia, such as the Aunus expedition and the Pork mutiny. Relations with the Soviets improved after the Treaty of Tartu in 1920, in which Finland gained Petsamo but gave up its claims on East Karelia. Tens of thousands of radical Finns—from Finland, the United States and Canada—took up Stalin's 1923 appeal to create a new Soviet society in the Karelian Autonomous Soviet Socialist Republic (KASSR), a part of Russia. Most were executed in the purges of the 1930s. The Soviet Union started to tighten its policy against Finland in the 1930s, limiting the navigation of Finnish merchant ships between Lake Ladoga and the Gulf of Finland and blocking it totally in 1937.
Finland in the Second World War
During the Second World War, Finland fought two wars against the Soviet Union: the Winter War of 1939–1940, which resulted in the loss of Finnish Karelia, and the Continuation War of 1941–1944, fought with considerable support from Nazi Germany and opening with a swift Finnish invasion of neighboring areas of the Soviet Union, which eventually led to the loss of Finland's only ice-free winter harbour, Petsamo. The Continuation War was, in accordance with the armistice conditions, immediately followed by the Lapland War of 1944–1945, in which Finland fought the Germans to force them to withdraw from northern Finland back into Norway (then under German occupation). Finland was not occupied; its army of over 600,000 soldiers saw only 3,500 taken prisoner of war. About 96,000 Finns lost their lives, or 2.5% of a population of 3.8 million; civilian casualties were under 2,500. In August 1939 Nazi Germany and the Soviet Union signed the Molotov–Ribbentrop Pact, in which Finland and the Baltic states were assigned to the Soviet "sphere of influence". After the invasion of Poland, the Soviet Union sent ultimatums to the Baltic countries demanding military bases on their soil. The Baltic states accepted the Soviet demands and lost their independence in the summer of 1940. In October 1939, the Soviet Union sent the same kind of demand to Finland, but the Finns refused to cede any territory or grant military bases for the use of the Red Army.
This led the Soviet Union to invade Finland on 30 November 1939. Soviet leaders predicted that Finland would be conquered in a couple of weeks, but even though the Red Army had huge superiority in men, tanks, guns and airplanes, the Finns were able to defend their country for about 3.5 months and avoid conquest. The Winter War ended on 13 March 1940 with the Moscow Peace Treaty, under which Finland lost the Karelian Isthmus to the Soviet Union. The Winter War was a major loss of prestige for the Soviet Union, which was expelled from the League of Nations because of the illegal attack; Finland received much international goodwill and material help from many countries during the war. After the Winter War the Finnish army was exhausted and needed recovery and support as soon as possible. The British declined to help, but in autumn 1940 Nazi Germany offered weapons deals to Finland if the Finnish government would allow German troops to travel through Finland to occupied Norway. Finland accepted, weapons deals were made, and military co-operation began in December 1940. Finland's support from, and coordination with, Nazi Germany began during the winter of 1940–41 and made other countries considerably less sympathetic to the Finnish cause, particularly since the Continuation War brought a Finnish invasion of the Soviet Union designed not only to recover lost territory but also to answer the irredentist sentiment of a Greater Finland by incorporating East Karelia, whose inhabitants were culturally related to the Finnish people although Eastern Orthodox by religion. This invasion caused Britain to declare war on Finland on 6 December 1941. Finland managed to defend its democracy, contrary to most other countries within the Soviet sphere of influence, and suffered comparably limited losses in terms of civilian lives and property. It was, however, punished more harshly than other German co-belligerents and allies, having to pay large reparations and resettle an eighth of its population after losing an eighth of its territory, including one of its industrial heartlands and its second-largest city, Viipuri. After the war, the Soviet government settled these ceded territories with people from many different regions of the USSR, for instance from Ukraine. The Finnish government did not participate in the systematic killing of Jews, although the country remained a co-belligerent, a de facto ally of Germany, until 1944. In total, eight German Jewish refugees were handed over to the German authorities. At the Tehran Conference of 1943, the leaders of the Allies agreed that Finland was fighting a separate war against the Soviet Union and was in no way hostile to the Western Allies. The Soviet Union was the only Allied country against which Finland had conducted military operations. Unlike any of the Axis nations, Finland was a parliamentary democracy throughout the 1939–1945 period. The commander of the Finnish armed forces during the Winter War and the Continuation War, Carl Gustaf Emil Mannerheim, became the President of Finland after the war. Finland concluded a separate armistice with the Soviet Union on 19 September 1944 and was the only European country bordering the USSR that kept its independence after the war. During and between the wars, approximately 80,000 Finnish war children were evacuated abroad: 5% went to Norway, 10% to Denmark, and the rest to Sweden.
Most of the children were sent back by 1948, but 15–20% remained abroad. The Moscow Armistice was signed between Finland on one side and the Soviet Union and Britain on the other on September 19, 1944, ending the Continuation War. The armistice compelled Finland to drive German troops from its territory, leading to the Lapland War of 1944–1945. In 1947, Finland reluctantly declined Marshall Plan aid in order to preserve good relations with the Soviets, ensuring Finnish autonomy. Nevertheless, the United States shipped secret development aid and financial aid to the non-communist SDP. Establishing trade with the Western powers, such as Britain, and paying reparations to the Soviet Union caused Finland to transform itself from a primarily agrarian economy into an industrialised one. After the reparations had been paid off, Finland continued to trade with the Soviet Union in the framework of bilateral trade. Finland's role in the Second World War was in many ways unusual. First, the Soviet Union invaded Finland in 1939–1940 and, even with massive superiority in military strength, was unable to conquer it. Then, in late 1940, German–Finnish co-operation began; it took a form unique among Germany's wartime relationships. Finland signed the Anti-Comintern Pact, which made Finland an ally of Germany in the war against the Soviet Union; but, unlike the other Axis-aligned states, Finland never signed the Tripartite Pact and so was never de jure an Axis nation. Although Finland lost two wars against the Soviets, the memory of the war is sharply inscribed in the national consciousness. It is remembered as a victory for the national spirit, which survived against long odds. Not just the fallen soldiers and the veterans but many others are commemorated, including the orphans, the evacuees from Karelia, the children who were evacuated to Sweden, the women who worked at home or in factories, and the veterans of the women's defence unit Lotta Svärd. Civilian losses of the partisan war were passed over in silence during the wars and for long afterwards; only at the demand of the victims themselves did the Parliament finally pass compensation bills for them (Tyyne Martikainen, Partisaanisodan siviiliuhrit, 2002).
Neutrality in the Cold War
Finland retained a democratic constitution and a free economy during the Cold War era. Treaties signed in 1947 and 1948 with the Soviet Union included obligations and restraints on Finland, as well as territorial concessions. Both treaties have been abrogated by Finland since the 1991 dissolution of the Soviet Union, while leaving the borders untouched. Even though being a neighbor of the Soviet Union sometimes resulted in overcautious concern in foreign policy ("Finlandization"), Finland developed closer co-operation with the other Nordic countries and declared itself neutral in superpower politics. In 1952, Finland and the countries of the Nordic Council entered into a passport union, allowing their citizens to cross borders without passports and soon also to apply for jobs and claim social security benefits in the other countries. Many Finns used this opportunity to secure better-paying jobs in Sweden in the 1950s and 1960s, dominating Sweden's first wave of post-war labour immigrants. Although Finnish wages and living standards could not compete with wealthy Sweden's until the 1970s, the Finnish economy rose remarkably from the ashes of World War II, resulting in the buildup of another Nordic-style welfare state.
Despite the passport union with Sweden, Norway, Denmark, and Iceland, Finland could not join the Nordic Council until 1955 because of Soviet fears that Finland might become too close to the West; at that time the Soviet Union saw the Nordic Council as part of NATO, of which Denmark, Norway and Iceland were members. That same year Finland joined the United Nations, though it had already been associated with a number of UN specialized organisations. The first Finnish ambassador to the UN was G.A. Gripenberg (1956–1959), followed by Ralph Enckell (1959–1965), Max Jakobson (1965–1972), Aarno Karhilo (1972–1977), Ilkka Pastinen (1977–1983), Keijo Korhonen (1983–1988), Klaus Törnudd (1988–1991), Wilhelm Breitenstein (1991–1998) and Marjatta Rasi (1998–2005). In 1972 Max Jakobson was a candidate for Secretary-General of the UN. In another remarkable event of 1955, the Soviet Union decided to return the Porkkala peninsula to Finland; it had been leased to the Soviet Union in 1944 for 50 years as a military base, a situation that had somewhat endangered Finnish sovereignty and neutrality. Finland became an associate member of the European Free Trade Association in 1961 and a full member in 1986. A trade agreement with the EEC was complemented by another with the Soviet bloc. The first Conference on Security and Co-operation in Europe (CSCE), which led to the creation of the OSCE, was held in Finland in 1972–1973. The CSCE was widely seen in Finland as a means of reducing Cold War tensions, and as a personal triumph for President Urho Kekkonen. Officially claiming to be neutral, Finland lay in the grey zone between the Western countries and the Soviet Union. The "YYA Treaty" (Finno-Soviet Pact of Friendship, Cooperation, and Mutual Assistance) gave the Soviet Union some leverage in Finnish domestic politics. However, unlike most other countries bordering the Soviet Union, Finland maintained capitalism, and property rights were strong. While nationalization committees were set up in France and the UK, Finland avoided nationalizations. After failed experiments with protectionism in the 1950s, Finland eased restrictions and concluded a free trade agreement with the European Community in 1973, making its markets more competitive. Credit and investment cooperation between the state and corporations was quite common and pragmatic, though it was regarded with some suspicion. Support for capitalism was widespread, and the savings rate hovered among the world's highest, at around 8%, until the 1980s. Local education markets expanded, and an increasing number of Finns went abroad to study in the United States or Western Europe, bringing back advanced skills. At the beginning of the 1970s, Finland's GDP per capita reached the level of Japan and the UK, and Finland's economic development shared many aspects with the export-led Asian countries.
Society and the welfare state
Before 1940 Finland was a poor nation of urban and rural workers and independent farmers. There was a small middle class, employed chiefly as civil servants and in small local businesses. As late as 1950 half of the workers were in agriculture, and only a third lived in towns. The new jobs in manufacturing, services and trade quickly attracted people to the towns and cities. The average number of births per woman declined from a baby-boom peak of 3.5 in 1947 to 1.5 in 1973.
When the baby boomers entered the workforce, the economy did not generate jobs fast enough, and hundreds of thousands emigrated to the more industrialized Sweden, with migration peaking in 1969 and 1970 (today 4.7 percent of Swedes speak Finnish). By the 1990s farm laborers had nearly all moved on, leaving owners of small farms. By 2000 the social structure included a politically active working class, a primarily clerical middle class, and an upper bracket consisting of managers, entrepreneurs, and professionals. The social boundaries between these groups were not distinct. Causes of change included the growth of a mass culture, international standards, social mobility, and acceptance of democracy and equality as typified by the welfare state. The generous system of welfare benefits emerged from a long process of debate, negotiations and maneuvers between efficiency-oriented modernizers on the one hand and Social Democrats and labor unions on the other. A compulsory system provides old-age and disability insurance, financed mostly by taxes on employers. The national government provides unemployment insurance, maternity benefits, family allowances, and day-care centers. Health insurance covers most of the cost of outpatient care, and the national health act of 1972 provided for the establishment of free health centers in every municipality. There were major cutbacks in the early 1990s, but they were distributed so as to minimize the harm to the vast majority of voters. The post-war period was a time of rapid economic growth and increasing social and political stability for Finland. The five decades after the Second World War saw Finland turn from a war-ravaged agrarian society into one of the most technologically advanced countries in the world, with a sophisticated market economy and a high standard of living. In 1991 Finland fell into a depression caused by a combination of economic overheating, a fixed currency, and depressed Western, Soviet, and local markets. Stock market and housing prices declined by 50%. The growth of the 1980s had been based on debt, and defaults started rolling in. GDP declined by 15%, and unemployment increased from virtually full employment to one-fifth of the workforce. The crisis was amplified by the trade unions' initial opposition to any reforms. Politicians struggled to cut spending, and the public debt doubled to around 60% of GDP. Some 7–8% of GDP was needed to bail out failing banks and force banking-sector consolidation. After devaluations the depression bottomed out in 1993. The GDP growth rate has since been one of the highest of the OECD countries, and Finland has topped many indicators of national performance. Until 1991, President Mauno Koivisto and two of the three major parties, the Center Party and the Social Democrats, opposed the idea of European Union membership, preferring instead the European Economic Area treaty. However, after Sweden submitted its membership application in 1991 and the Soviet Union was dissolved at the end of that year, Finland submitted its own application to the EU in March 1992. The accession process was marked by heavy public debate in which the differences of opinion did not follow party lines. Officially, all three major parties supported Union membership, but members of all parties participated in the campaign against it. Before the parliamentary decision to join the EU, a consultative referendum was held on April 16, 1994, in which 56.9% of the votes were in favour of joining.
The process of accession was completed on January 1, 1995, when Finland joined the European Union along with Austria and Sweden. Leading Finland into the EU is regarded as the main achievement of the Centrist–Conservative government of Esko Aho, then in power. In economic policy, EU membership brought many large changes. While politicians had previously been involved in setting interest rates, the central bank was given an inflation-targeting mandate until Finland joined the eurozone. During Prime Minister Paavo Lipponen's two successive governments of 1995–2003, several large state companies were privatized fully or partially. Matti Vanhanen's two cabinets followed suit until autumn 2008, when the state became a major shareholder in the Finnish telecom company Elisa with the intention of securing Finnish ownership of a strategically important industry. In addition to fast integration with the European Union, safety against Russian leverage has been increased by building a fully NATO-compatible military: about 1,000 troops (a high number per capita) are committed to NATO and UN operations at any one time. Finland has also opposed energy projects that increase dependency on Russian imports. At the same time, Finland remains one of the last non-NATO members in Europe, and there appears to be insufficient support for full membership unless Sweden joins first. The population is aging, with a birth rate of 10.42 births per 1,000 population and a fertility rate of 1.8; with a median age of 41.6 years, Finland is among the countries with the highest average age of citizens.
In geometry, curvilinear coordinates are a coordinate system for Euclidean space in which the coordinate lines may be curved. These coordinates may be derived from a set of Cartesian coordinates by using a transformation that is locally invertible (a one-to-one map) at each point. This means that one can convert a point given in a Cartesian coordinate system to its curvilinear coordinates and back. The name curvilinear coordinates, coined by the French mathematician Lamé, derives from the fact that the coordinate surfaces of the curvilinear systems are curved. Well-known examples of curvilinear coordinate systems in three-dimensional Euclidean space (R3) are cylindrical and spherical coordinates. A Cartesian coordinate surface in this space is a coordinate plane; for example z = 0 defines the x-y plane. In the same space, the coordinate surface r = 1 in spherical coordinates is the surface of a unit sphere, which is curved. The formalism of curvilinear coordinates provides a unified and general description of the standard coordinate systems. Curvilinear coordinates are often used to define the location or distribution of physical quantities which may be, for example, scalars, vectors, or tensors. Mathematical expressions involving these quantities in vector calculus and tensor analysis (such as the gradient, divergence, curl, and Laplacian) can be transformed from one coordinate system to another, according to transformation rules for scalars, vectors, and tensors. Such expressions then become valid for any curvilinear coordinate system. A curvilinear coordinate system may be simpler to use than the Cartesian coordinate system for some applications. The motion of particles under the influence of central forces is usually easier to solve in spherical coordinates than in Cartesian coordinates; this is true of many physical problems with spherical symmetry defined in R3. Equations with boundary conditions that follow coordinate surfaces for a particular curvilinear coordinate system may be easier to solve in that system. While one might describe the motion of a particle in a rectangular box using Cartesian coordinates, it's easier to describe the motion in a sphere with spherical coordinates. Spherical coordinates are the most common curvilinear coordinate systems and are used in Earth sciences, cartography, quantum mechanics, relativity, and engineering.
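As a concrete instance of such a locally invertible transformation, take the spherical coordinates (r, θ, φ) just mentioned. The map to Cartesian coordinates and its Jacobian determinant are the standard ones, stated here for illustration:

\[
\begin{aligned}
x &= r\sin\theta\cos\varphi, \\
y &= r\sin\theta\sin\varphi, \\
z &= r\cos\theta,
\end{aligned}
\qquad
\det\frac{\partial(x,y,z)}{\partial(r,\theta,\varphi)} = r^{2}\sin\theta .
\]

The determinant is nonzero for r > 0 and 0 < θ < π, so the transformation is locally invertible away from the origin and the polar axis, with inverse r = √(x² + y² + z²), θ = arccos(z/r), φ = atan2(y, x). The degeneracy on the axis is precisely why a single spherical coordinate chart cannot cover all of R3 one-to-one.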
A sample survey is characterized by:
- a clearly specified population
- a sample selected by a random process from that population
- the goal of estimating some population parameters
In a sample survey, randomization is used to reduce bias and to allow the results of the sample to be generalized to the population from which the sample was drawn.
Element: An element is an object on which a measurement is made. This could be a voter in a precinct, a product as it comes off the assembly line, or a plant in a field that has either bloomed or not.
Population: A population is a collection of elements about which we wish to make an inference. The population must be clearly defined before the sample is taken.
Sampling units: Sampling units are non-overlapping collections of elements from the population that together cover the entire population; that is, the sampling units partition the population of interest. The sampling units could be households or individual voters.
Frame: A frame is a list of sampling units.
Sample: A sample is a collection of sampling units drawn from a frame or frames. Data are obtained from the sample and are used to describe characteristics of the population.
Example 1: Suppose we are interested in what students in a particular high school think about drilling for oil in our national wildlife preserves. The elements are the high school students, and the population is the students who attend this high school. The sampling units could be the students as individuals, with the frame an alphabetical listing of all students enrolled in the school. Alternatively, the sampling units could be homerooms, since each student has one and only one homeroom, with the frame the class lists for the homerooms.
Example 2: Suppose we are interested in what voters in a particular precinct think about drilling for oil in our national wildlife preserves. The elements are the registered voters in the precinct. The population is the collection of registered voters. The sampling units will likely be households, in which there may be several registered voters. The frame is a list of households in the precinct.
When the population is the residents of a city, the frame will commonly be the city phone book. However, not everyone in the city has a phone listed in the phone book; in this situation the frame does not match the population, and a survey conducted from the phone-book frame would likely suffer from undercoverage bias.
Sample designs that utilize planned randomness are called probability samples.
Simple random sample: The most fundamental probability sample is the simple random sample. In a simple random sample, a sample of n sampling units is selected in such a way that each possible sample of size n has the same chance of being selected.
Stratified random sample: A stratified random sample is one obtained by separating the population elements into non-overlapping groups, called strata, and then selecting a simple random sample from each stratum.
Systematic sample: A systematic sample is obtained by selecting at random one element from the first k elements in the frame and every kth element thereafter.
Cluster sample: A cluster sample is a probability sample in which each sampling unit is a collection, or cluster, of elements. (A minimal sketch of these four designs appears below.)
Sources of Errors in Surveys
Sampling error: Sampling error is a part of any sampling process. If the sampling process were repeated a number of times, the results would differ each time, producing variation in the estimates of the population parameters.
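The four probability designs just defined can be illustrated with a minimal sketch in Python, using only the standard library; the frame of twenty labelled units, the two strata, and the clusters of four below are hypothetical stand-ins for real sampling units:

import random

frame = [f"unit{i:02d}" for i in range(20)]  # hypothetical frame of 20 sampling units

# Simple random sample: every possible subset of size n is equally likely.
srs = random.sample(frame, 5)

# Stratified random sample: partition the frame into strata,
# then take a simple random sample within each stratum.
strata = {"first_half": frame[:10], "second_half": frame[10:]}  # hypothetical strata
stratified = {name: random.sample(units, 2) for name, units in strata.items()}

# Systematic sample: choose a random start among the first k units,
# then take every kth unit thereafter.
k = 4
start = random.randrange(k)
systematic = frame[start::k]

# Cluster sample: the sampling units are whole clusters of elements;
# sample clusters at random, then observe every element in the chosen clusters.
clusters = [frame[i:i + 4] for i in range(0, len(frame), 4)]  # hypothetical clusters
cluster_sample = [element for cluster in random.sample(clusters, 2) for element in cluster]

print(srs, stratified, systematic, cluster_sample, sep="\n")

Randomization of this kind controls sampling error, but the error sources discussed next arise outside the selection mechanism and must be handled through frame construction, questionnaire design, and careful fieldwork.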
Coverage error results when the frame does not match the population. For example, if the frame is the town phone book, then people with unlisted numbers and those without phones will be missing from the frame.
Non-response error results from elements in the frame that have died, moved away, refused to participate, or are otherwise missing from the sample.
Observation errors include interviewer error, respondent error, measurement error, and errors in data collection.
- Interviewer error is a result of the interaction between the interviewer and the subject being interviewed. Most people who agree to an interview do not want to appear disagreeable and will tend to side with the view apparently favoured by the interviewer, especially on questions for which the respondent does not have a strong opinion. Reading a question with inappropriate emphasis or intonation can bias a response in one direction or another. Interviewers of the same gender, racial, and ethnic groups as those being interviewed are, in general, slightly more successful.
- Respondent error is a result of the differing abilities of the respondents in a sample to answer the questions correctly. Most respondent errors are unintentional and are due to either recall bias (the respondent does not remember correctly) or prestige bias (the respondent exaggerates). At times, respondent error may be due to intentional deception (the respondent will not admit to breaking a law, or has a particular gripe against an agency).
- Measurement error occurs when inaccurate responses are caused by errors of definition in survey questions. For example, what does the term unemployed mean? Should the unemployed include those who have given up looking for work, teenagers who cannot find summer jobs, and those who lost part-time jobs? Does education include only formal schooling, or technical training, on-the-job classes and summer institutes as well? Items to be measured must be precisely defined and unambiguously measurable.
- Errors in data collection occur in all surveys.
Problems with telephone surveys
A major problem with telephone surveys is the establishment of a frame that closely corresponds to the population. Telephone directories contain many numbers that do not belong to households, and many households have unlisted numbers. A technique that avoids the problem of unlisted numbers is random digit dialling. In this method, a telephone exchange number (the first three digits of the seven-digit number) is selected, and then the last four digits are dialled randomly until a fixed number of households of a specified type is reached (a minimal sketch of this procedure appears after the planning steps below). A mailed questionnaire sent to a specific group of interested persons can achieve good results, but response rates for this type of data collection are generally so low that all reported results are suspect. Non-response can be a problem in any form of data collection, but since we have the least contact with respondents in a mailed questionnaire, it frequently has the lowest rate of response. The low response rate can introduce bias into the sample, because the people who answer questionnaires may not be representative of the population of interest. To eliminate some of this bias, investigators frequently contact the non-respondents through follow-up letters, telephone interviews, or personal interviews.
Steps in Planning a Survey
1. Statement of objectives
2. Target population
3. The frame
4. Sample design
5. Method of measurement
6. Measurement instrument
7. Selection and training of field-workers
8. The pre-test
9. Organization of fieldwork
10. Organization of data management
11. Data analysis
12. Final report

Section B, Group 6: Rohan Kr. Jha (13FPM004), Apurva Ramteke (13PGP068), Chandan Parsad (13FPM002), Komal Suchak (13PGP086), Silpa Bahera (13PGP107), Sushil Kumar (13FPM010), Vivek Roy (12FPM005), Vaneet Bhatia (13FPM008)
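As a small illustration of the random digit dialling technique described above, here is a toy Python sketch; the exchange number, the 40% contact rate, and the is_household stand-in are made-up assumptions for the example, not part of the notes:

```python
import random

def random_digit_dial(exchange, is_household, needed):
    """Dial random four-digit suffixes on a fixed exchange until
    the required number of households is reached."""
    reached = []
    while len(reached) < needed:
        number = f"{exchange}-{random.randint(0, 9999):04d}"
        if is_household(number):  # stand-in for actually placing the call
            reached.append(number)
    return reached

# Pretend 40% of random suffixes reach a household of the specified type.
print(random_digit_dial("555", lambda n: random.random() < 0.4, needed=5))
```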
An orbital spaceflight (or orbital flight) is a spaceflight in which a spacecraft is placed on a trajectory where it could remain in space for at least one orbit. To do this around the Earth, it must be on a free trajectory with an altitude at perigee (the altitude at closest approach) above 100 kilometers (62 mi), which is, by at least one convention, the boundary of space. To remain in orbit at this altitude requires an orbital speed of ~7.8 km/s. Orbital speed is slower for higher orbits, but attaining them requires higher delta-v.

Orbital spaceflight from Earth has only been achieved by launch vehicles that use rocket engines for propulsion. To reach orbit, the rocket must impart to the payload a delta-v of about 9.3–10 km/s. This figure is mainly (~7.8 km/s) for the horizontal acceleration needed to reach orbital speed, but it also allows for atmospheric drag (approximately 300 m/s with the ballistic coefficient of a 20 m long, densely fueled vehicle), gravity losses (depending on burn time and the details of the trajectory and launch vehicle), and gaining altitude.

The main proven technique involves launching nearly vertically for a few kilometers while performing a gravity turn, then progressively flattening the trajectory out at an altitude of 170+ km and accelerating on a horizontal trajectory (with the rocket angled upwards to fight gravity and maintain altitude) for a 5–8 minute burn until orbital velocity is achieved. Currently, 2–4 stages are needed to achieve the required delta-v. Most launches are by expendable launch systems. The Pegasus rocket for small satellites instead launches from an aircraft at an altitude of 12 km.

There have been many proposed methods for achieving orbital spaceflight that have the potential of being much more affordable than rockets. Some of these ideas, such as the space elevator and the rotovator, require new materials much stronger than any currently known. One tether concept that can be built with existing materials and technology is the non-rotating skyhook. Other proposed ideas include ground accelerators such as launch loops, rocket-assisted aircraft/spaceplanes such as Reaction Engines' Skylon, scramjet-powered spaceplanes, and RBCC-powered spaceplanes. Gun launch has been proposed for cargo.

An object in orbit at an altitude of less than roughly 200 km is considered unstable due to atmospheric drag. For a satellite to be in a stable orbit (i.e., sustainable for more than a few months), 350 km is a more standard altitude for low Earth orbit. For example, on February 1, 1958, the Explorer 1 satellite was launched into an orbit with a perigee of 358 kilometers (222 mi). It remained in orbit for more than 12 years before its atmospheric reentry over the Pacific Ocean on March 31, 1970.

Because of orbital mechanics, an orbit lies in a particular, largely fixed plane that passes through the center of the Earth and may be tilted with respect to the equator. The Earth rotates about its axis within this orbit, and the relative motion of the spacecraft and the Earth's surface determines the position at which the spacecraft appears in the sky from the ground, and which parts of the Earth are visible from the spacecraft. By dropping a vertical line down to the Earth's surface it is possible to calculate a ground track, which shows which part of the Earth a spacecraft is immediately above; this is useful for helping to visualise the orbit.
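To make the ~7.8 km/s figure concrete, here is a minimal sketch (my own illustration, not from the article) computing circular orbital speed as v = sqrt(mu/r); it also shows that speed falls for higher orbits:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean Earth radius, m

def circular_orbit_speed(altitude_m):
    """Speed needed for a circular orbit at the given altitude."""
    r = R_EARTH + altitude_m
    return math.sqrt(MU_EARTH / r)

for alt_km in (200, 350, 35_786):  # LEO altitudes and geostationary altitude
    v = circular_orbit_speed(alt_km * 1000)
    print(f"{alt_km:>6} km: {v/1000:.2f} km/s")
```

At 200 km this prints about 7.79 km/s, matching the figure quoted above, while at geostationary altitude the required speed drops to roughly 3.1 km/s.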
NASA provides real-time tracking of the over 500 artificial satellites maintained in orbit around Earth. For the position of these satellites see NASA satellite tracking.

In spaceflight, an orbital maneuver is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth—for example those in orbits around the Sun—an orbital maneuver is called a deep-space maneuver (DSM).

Deorbit and re-entry

Returning spacecraft (including all potentially manned craft) have to find a way of slowing down as much as possible while still in higher atmospheric layers and avoid hitting the ground (lithobraking) or burning up. For many orbital spaceflights, initial deceleration is provided by the retrofiring of the craft's rocket engines, perturbing the orbit (by lowering perigee down into the atmosphere) onto a suborbital trajectory. Many spacecraft in low Earth orbit (e.g., nanosatellites or spacecraft that have run out of station-keeping fuel or are otherwise non-functional) instead rely on atmospheric drag (aerobraking) to provide the initial deceleration. In all cases, once initial deceleration has lowered the orbital perigee into the mesosphere, all spacecraft lose most of their remaining speed, and therefore kinetic energy, through the atmospheric drag effect of aerobraking.

Intentional aerobraking is achieved by orienting the returning spacecraft so that its heat shield faces forward into the atmosphere, protecting it against the high temperatures generated by atmospheric compression and friction caused by passing through the atmosphere at hypersonic speeds. The thermal energy is dissipated mainly by compression-heating the air in a shockwave ahead of the vehicle, using a blunt heat-shield shape, with the aim of minimising the heat entering the vehicle. Sub-orbital spaceflights, being at a much lower speed, do not generate anywhere near as much heat upon re-entry.

Even if the vehicle is a satellite that is ultimately expendable, most space authorities are pushing towards controlled re-entry techniques to avoid issues of space debris reaching the ground and causing a hazard to lives and property. In addition, this minimises the creation of orbital space junk.

- Sputnik 1 was successfully launched on October 4, 1957 by the Soviet Union.
- Vostok 1, launched by the Soviet Union on April 12, 1961 and carrying Yuri Gagarin, was the first successful human spaceflight to reach Earth orbit.
- List of orbits
- Ground track
- Orbital mechanics
- Project HARP was a failed attempt to launch an object into orbit with a gun; a ram accelerator is another such design.
- Rocket launch
- Non-rocket spacelaunch
- Spaceport, including a list of sites for orbital launches
- Sarmont, E., "Affordable to the Individual Spaceflight", accessed February 6, 2014.
- "Explorer 1 - NSSDC ID: 1958-001A". NASA.
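The retro-burn described above can be sized with the vis-viva equation. The following is a rough sketch (my own illustration, with illustrative altitudes), not a reproduction of any mission procedure:

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
RE = 6_371_000       # mean Earth radius, m

def deorbit_delta_v(orbit_alt_m, target_perigee_m):
    """Retro-burn delta-v to lower perigee from a circular orbit,
    using the vis-viva equation."""
    r = RE + orbit_alt_m
    rp = RE + target_perigee_m
    v_circ = math.sqrt(MU / r)               # speed in the circular orbit
    a = (r + rp) / 2                         # semi-major axis of new ellipse
    v_apo = math.sqrt(MU * (2 / r - 1 / a))  # speed at apogee of new ellipse
    return v_circ - v_apo

# e.g. from a 350 km circular orbit down to a 60 km perigee
print(f"{deorbit_delta_v(350_000, 60_000):.0f} m/s")
```

This prints on the order of 90 m/s, a small fraction of orbital speed, which is why atmospheric drag does the rest of the work once perigee dips into the atmosphere.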
In economics, physical capital or just capital is a factor of production (or input into the process of production), consisting of machinery, buildings, computers, etc. It is divided into two categories: working and fixed capital. The production function takes the general form Y = f(K, L, N), where Y is the amount of output produced, K is the amount of capital stock used, L is the amount of labor used, and N is the amount of natural resources used. In economic theory, physical capital is one of the three primary factors of production; the others are natural resources (including land) and labor—the stock of competences embodied in the labor force. Physical capital is distinct from human capital (a result of investment in the human agent), circulating capital, and financial capital. Physical capital is fixed capital, which is any kind of real physical asset that is not used up in the production of a product. Usually the value of land is not included in physical capital, as it is not a reproducible product of human activities.

Economics is the social science that studies the production, distribution, and consumption of goods and services. In economics, capital consists of an asset that can enhance one's power to perform economically useful work. For example, in a fundamental sense a stone or an arrow is capital for a caveman who can use it as a hunting instrument, while roads are capital for inhabitants of a city.

In economics, a production function gives the technological relation between quantities of physical inputs and quantities of output of goods. The production function is one of the key concepts of mainstream neoclassical theories, used to define marginal product and to distinguish allocative efficiency, a key focus of economics. One important purpose of the production function is to address allocative efficiency in the use of factor inputs in production and the resulting distribution of income to those factors, while abstracting away from the technological problems of achieving technical efficiency, as an engineer or professional manager might understand it.

In economics, factors of production, resources, or inputs are what is used in the production process to produce output—that is, finished goods and services. The utilized amounts of the various inputs determine the quantity of output according to the relationship called the production function. There are three basic resources or factors of production: land, labor, and capital. The factors are also frequently labeled "producer goods or services" to distinguish them from the goods or services purchased by consumers, which are frequently labeled "consumer goods".

Labour economics seeks to understand the functioning and dynamics of the markets for wage labour.

In economics and sociology, the means of production are physical and non-financial inputs used in the production of economic value. These include raw materials, facilities, machinery and tools used in the production of goods and services. In the terminology of classical economics, the means of production are the "factors of production" minus financial and human capital.
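As a concrete instance of the production function Y = f(K, L, N), here is a toy sketch; the Cobb-Douglas form and the exponents are my own illustrative assumptions, not values given in the article:

```python
# A toy Cobb-Douglas instance of Y = f(K, L, N); exponents are
# illustrative assumptions, not values from the text.
def output(K, L, N, A=1.0, alpha=0.3, beta=0.6, gamma=0.1):
    return A * K**alpha * L**beta * N**gamma

# Doubling every input doubles output here, because
# alpha + beta + gamma = 1 (constant returns to scale).
print(output(100, 200, 50))
print(output(200, 400, 100))
```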
Growth accounting is a procedure used in economics to measure the contribution of different factors to economic growth and to indirectly compute the rate of technological progress, measured as a residual, in an economy. Growth accounting decomposes the growth rate of an economy's total output into that which is due to increases in the contributing amount of the factors used—usually the increase in the amount of capital and labor—and that which cannot be accounted for by observable changes in factor utilization. The unexplained part of growth in GDP is then taken to represent increases in productivity or a measure of broadly defined technological progress.

In economics, and in particular neoclassical economics, the marginal product or marginal physical productivity of an input is the change in output resulting from employing one more unit of a particular input, assuming that the quantities of other inputs are kept constant.

James Edward Meade was a British economist and winner of the 1977 Nobel Memorial Prize in Economic Sciences jointly with the Swedish economist Bertil Ohlin for their "pathbreaking contribution to the theory of international trade and international capital movements."

In economics, total-factor productivity (TFP), also called multi-factor productivity, is usually measured as the ratio of aggregate output to aggregate inputs. Under some simplifications about the production technology, growth in TFP becomes the portion of growth in output not explained by growth in traditionally measured inputs of labour and capital used in production. TFP is calculated by dividing output by the weighted average of labour and capital input, with the standard weighting of 0.7 for labour and 0.3 for capital. Total factor productivity is a measure of economic efficiency and accounts for part of the differences in cross-country per-capita income. The rate of TFP growth is calculated by subtracting the growth rates of labor and capital inputs from the growth rate of output.

The Solow–Swan model is an economic model of long-run economic growth set within the framework of neoclassical economics. It attempts to explain long-run economic growth by looking at capital accumulation, labor or population growth, and increases in productivity, commonly referred to as technological progress. At its core is a neoclassical (aggregate) production function, often specified to be of Cobb–Douglas type, which enables the model "to make contact with microeconomics". The model was developed independently by Robert Solow and Trevor Swan in 1956, and superseded the Keynesian Harrod–Domar model.

Articles in economics journals are usually classified according to the JEL classification codes, a system originated by the Journal of Economic Literature. The JEL is published quarterly by the American Economic Association (AEA) and contains survey articles and information on recently published books and dissertations. The AEA maintains EconLit, a searchable database of citations for articles, books, reviews, dissertations, and working papers classified by JEL codes for the years from 1969 onward. A recent addition to EconLit is indexing of economics-journal articles from 1886 to 1968, parallel to the print series Index of Economic Articles.
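Using the standard weights quoted above (0.7 for labour, 0.3 for capital), the growth-accounting residual can be computed directly; the growth rates in this sketch are made-up numbers for illustration:

```python
# Growth-accounting sketch with the standard factor weights quoted above.
def tfp_growth(g_output, g_labour, g_capital,
               w_labour=0.7, w_capital=0.3):
    """Solow residual: output growth not explained by weighted input growth."""
    return g_output - (w_labour * g_labour + w_capital * g_capital)

# e.g. 4% output growth, 1% labour growth, 5% capital growth
print(f"TFP growth: {tfp_growth(0.04, 0.01, 0.05):.3%}")  # -> 1.800%
```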
In economics, total cost (TC) is the total economic cost of production. It is made up of variable cost, which varies according to the quantity of a good produced and includes inputs such as labour and raw materials, plus fixed cost, which is independent of the quantity of a good produced and includes inputs that cannot be varied in the short term, such as buildings and machinery (including sunk costs, if any). Since cost is measured per unit of time, it is a flow variable.

In economics, distribution is the way total output, income, or wealth is distributed among individuals or among the factors of production. In general theory and the national income and product accounts, each unit of output corresponds to a unit of income. One use of national accounts is for classifying factor incomes and measuring their respective shares, as in national income. But where the focus is on income of persons or households, adjustments to the national accounts or other data sources are frequently used. Here, interest is often in the fraction of income going to the top x percent of households, the next x percent, and so forth, and in the factors that might affect them.

In economics, factor payments are the income people receive for supplying the factors of production: land, labor, capital, or entrepreneurship.

In economics, the marginal product of labor (MPL) is the change in output that results from employing an added unit of labor. It is a feature of the production function, and depends on the amounts of physical capital and labor already in use.

The AK model of economic growth is an endogenous growth model used in the theory of economic growth, a subfield of modern macroeconomics. In the 1980s it became progressively clearer that the standard neoclassical exogenous growth models were theoretically unsatisfactory as tools to explore long-run growth, as these models predicted economies without technological change that would eventually converge to a steady state with zero per capita growth. A fundamental reason for this is the diminishing returns to capital; the key property of the AK endogenous-growth model is the absence of diminishing returns to capital. In lieu of the diminishing returns to capital implied by the usual parameterizations of a Cobb–Douglas production function, the AK model uses a linear model where output is a linear function of capital. It appears in most textbooks as an introduction to endogenous growth theory.

In the technological theory of social production, the growth of output, measured in money units, is related to achievements in the technological consumption of labour and energy. This theory is based on concepts of classical political economy and neo-classical economics and appears to be a generalisation of known economic models, such as the neo-classical model of economic growth and the input-output model.

The Cambridge capital controversy, sometimes called "the capital controversy" or "the two Cambridges debate", was a dispute between proponents of two differing theoretical and mathematical positions in economics that started in the 1950s and lasted well into the 1960s. The debate concerned the nature and role of capital goods and a critique of the neoclassical vision of aggregate production and distribution.
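The MPL definition above lends itself to a one-line finite difference. This sketch reuses a toy Cobb-Douglas production function (my own illustrative assumption) and shows the diminishing returns to capital's counterpart for labour, the very property whose absence defines the AK model:

```python
# Marginal product of labour as a finite difference on a toy
# Cobb-Douglas production function (an illustrative assumption).
def production(K, L, A=1.0, alpha=0.3):
    return A * K**alpha * L**(1 - alpha)

def marginal_product_of_labour(K, L, dL=1.0):
    """Extra output from employing one more unit of labour,
    holding capital fixed."""
    return production(K, L + dL) - production(K, L)

# MPL falls as more labour is added to a fixed capital stock.
for L in (10, 50, 100):
    print(L, round(marginal_product_of_labour(100, L), 3))
```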
The name arises from the location of the principals involved in the controversy: the debate was largely between economists such as Joan Robinson and Piero Sraffa at the University of Cambridge in England and economists such as Paul Samuelson and Robert Solow at the Massachusetts Institute of Technology, in Cambridge, Massachusetts.

Constant capital (c) is a concept created by Karl Marx and used in Marxian political economy. It refers to one of the forms of capital invested in production, which contrasts with variable capital (v). The distinction between constant and variable refers to an aspect of the economic role of factors of production in creating new value.

Econodynamics is an empirical science that studies the emergence, motion, and disappearance of value—a specific concept that is used for describing the processes of production and distribution of wealth. Econodynamics is based on the achievements of classical political economy and neo-classical economics and uses the methods of phenomenological science to investigate the evolution of economic systems. Econodynamics proposes methods of analysis and forecasting of economic processes. A comprehensive review of the problems of econodynamics was given recently by Vladimir Pokrovskii.
Unveiling the mysteries of the world around us, Newton's laws of physics articulate the fundamental principles of motion, helping us to comprehend the Universe's inherent mechanical patterns. This article will be a fine overview, shedding light on each of Newton's laws in detail and their extensive applications in everyday scenarios.

Unveiling the Framework of Newton's Laws of Physics

Sir Isaac Newton, an eminent physicist and mathematician, formulated the laws of physics, which remain pivotal in understanding the complex interrelation between force and motion. There are three fundamental laws:

- Newton's First Law (The Law of Inertia)
- Newton's Second Law (The Law of Acceleration)
- Newton's Third Law (The Law of Action and Reaction)

Newton's First Law: The Law of Inertia

The law of inertia, Newton's first law, explains that an object at rest stays at rest, and an object in motion continues moving in a straight line at constant speed, unless acted upon by an external force. This principle denotes the inherent inertia of objects, which resist a change in their state of motion without the influence of an external entity.

Practical Application of the Law of Inertia

This law shines in everyday life. When you're in a moving car that suddenly stops, you feel a jerk forward due to your body's inertia, which tries to maintain the state of motion it was in. Also, a book resting on a table will stay at rest until someone pushes it, providing the external force that changes its state.

Newton's Second Law: The Law of Acceleration

Newton's second law details the relationship between force, mass, and acceleration. It asserts that the force (F) acting on an object is equal to its mass (m) multiplied by its acceleration (a), mathematically represented as F = ma.

Manifestation of the Law of Acceleration in Real Life

You experience Newton's second law when you push a shopping cart. A heavier cart (more mass) requires more force to move (accelerate) than a lighter one. Thus, force and mass directly influence an object's acceleration.

Newton's Third Law: The Law of Action and Reaction

Newton's third law states that for every action, there is an equal and opposite reaction. This principle reflects the nature of forces, which exist in pairs with the same magnitude but opposite directions.

Examples of the Law of Action and Reaction

This law allows rockets to blast off. The rocket's engines push gases downwards (action), and in return, the gases exert an upward force (reaction) propelling the rocket upwards.

In-depth Analysis of Newton's Laws in Everyday Physics

Newton's laws form the backbone of classical mechanics, explaining a range of phenomena from the motion of heavenly bodies to the deformation of solid bodies. The laws also play an essential role in modern fields like quantum mechanics and cosmology.

Newton's Laws and Astronautic Applications

Newton's laws are fundamentally essential in spacecraft navigation. The Hohmann transfer orbit, a spacecraft maneuver to move between two orbits, is devised based on these laws. Satellites maintain constant motion due to their inertia (Newton's first law), while rockets overcome Earth's gravitational force using the action-reaction principle (Newton's third law).

Newton's Laws in Architectural Physics

Architects apply Newton's laws when designing buildings to counteract gravitational force. For instance, the base of a skyscraper is wider to distribute the building's mass more evenly, minimizing toppling risks.
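As a tiny illustration of F = ma from the shopping-cart example, the sketch below (my own, with made-up masses and force) shows how the same push accelerates heavier carts less:

```python
# Newton's second law rearranged: a = F / m.
def acceleration(force_newtons, mass_kg):
    return force_newtons / mass_kg

for mass in (10, 20, 40):  # kg, illustrative cart masses
    print(f"{mass} kg cart: {acceleration(50, mass):.2f} m/s^2 from a 50 N push")
```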
In summary, Newton's laws of physics not only enable us to understand the Universe but are also critical in engineering, architectural design, space exploration, and more. These laws enhance our understanding of natural phenomena, allowing us to harness physical forces to our advantage, proving that physics is, indeed, the language of the Universe.
The course is part of this learning path

This course explores Python's For Loop and is part of a series of content designed to help you learn to program with the Python programming language.

- Explain the purpose of For Loops
- How to use For Loops

This course was designed for first-time developers wanting to learn Python. This is an introductory course and doesn't require any prior programming knowledge.

Hello, and welcome! My name is Ben Lambert, and I'll be your instructor for this course. This course is part of a series of content designed to help you learn to program with the Python programming language. Should you wish to ask me a specific question, you can do that with the contact details on screen. You can also reach support by using the email address: firstname.lastname@example.org. And one of our cloud experts will reply.

In this lesson we'll explore Python's for loop. Recall that loops are used to repeat a block of code. Python provides different types of loops which are used in different circumstances. Two of those loop types are: while loops and for loops. Both are used to repeat a block of code. While loops are used to repeat a code block based on some condition remaining True. And they stop when the condition becomes False, or the loop is broken. For loops are used to bind a name to the next object in a collection. Such as a list, set, tuple, or dictionary. Each time the loop progresses it binds a name to the next object and then we can interact with that object.

Imagine that we want to send a welcome email to a list of email addresses. Pretend that we created a function that will send an email. The how in this case isn't important. Just know that it accepts an email address as input. Using a for loop we can bind the name email_address to each email address in the list. The first time through the loop this name is bound to the first address. Then it can be passed to the send welcome email function. This function will be called once for each object in the list. This is the purpose of a for loop. To repeat a code block for some limited number of times.

Let's review the syntax used to define a for loop. Recall that we're using for loops to bind a name to a value from a collection. For loops start with the lowercase word for and end with a colon and a new line. After the word for is where we bind names. Then the keyword in, followed by a collection. The code inside a for loop is indented using the standard indentation. This is the basic structure of a for loop: for each object in a collection, repeat some code.

This example starts by binding the name letters to a list of strings containing the letters a, b, c, and d. Then the for loop loops through each of the letters and binds the name letter to the next object. The built-in print function is used to display each letter in the console. Since lists are ordered, when this loops through, the name letter will first be bound to a, then b, then c, then d. And then the loop will naturally stop.

Recall that while loops continue to repeat while some condition is True. For loops have an implied stopping point, which is at the end of the collection. However, there are times where we may want to stop a for loop, or move to the next object. For this we can use the break and continue keywords. In this example the loop will start and bind to the letter a. The conditional is checked, and since the current bound object of a doesn't equal b, it will move to this print function and print the letter a. Next time around it will be bound to b.
The condition is checked and, since it's True, this line is interpreted. The keyword break will stop the current loop entirely, returning the flow to the code following the loop. Since we break out of the loop, we don't see any other letters printed. The break keyword can be used when you know that you don't require the loop to continue.

Here's a similar example, except we use the keyword continue. When this condition is True, this continue statement will be interpreted and the loop will move to the next object in the collection. When this runs it will print a, c, and d, skipping b.

Looping through a collection is a common task for developers.

- For each user in some database perform some task.
- For each invoice perform some calculation.
- For each email in the inbox mark as read.

Sometimes you'll find that you need to repeat a block of code for some number of times, as opposed to for each object in a collection. Python provides a built-in function named range which enables us to repeat a for loop some number of times. The built-in range function accepts arguments to control the start and stop numbers. Each time this loop advances it will bind to an integer.

There's another common use case with for loops, which is the need to loop through a list and to know which loop index we're on. Python provides a built-in function named enumerate that can accept a collection as input. This function is a bit interesting because conceptually it returns a collection of tuples. Recall that Python's language syntax includes a feature called tuple unpacking. Tuple unpacking enables us to bind multiple names to corresponding objects inside a tuple. Unpacking also works with the name binding of for loops. Using this with the built-in enumerate function enables us to bind one name to the loop's counter and another to the value. This example will print out the index and its corresponding value.

These built-in functions have their use cases. However, those are specific to the problems being solved by the code. So don't worry if you can't imagine use cases. As you begin reading other people's code, you'll see more real world examples.

Okay, this seems like a natural stopping point. Here are your key takeaways for this lesson:

- The purpose of a for loop is to repeat a code block for some limited amount of times. For example: for each object in a list.
- For loops are defined with the for and in keywords. Where the first portion of the definition specifies the name bindings, the second portion specifies the collection.
- For loops will naturally stop on their own when all objects have been looped through. They can also be stopped using the break keyword.
- The built-in range function can be used to loop for a specific number of times.
- The built-in enumerate function can be used to loop through a collection, producing the value and the loop index. (The constructs covered in this lesson are collected in the code sketch below.)

That's all for this lesson. Thanks so much for watching. And I'll see you in another lesson!

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben's experience includes building highly available web and mobile apps. When he's not building software, he's hiking, camping, or creating video games.
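Since the on-screen examples aren't included in this transcript, here is a small sketch reconstructing them as described in the lesson (the letters list, break, continue, range, and enumerate):

```python
letters = ["a", "b", "c", "d"]

# Basic for loop: bind the name letter to each object in the list.
for letter in letters:
    print(letter)          # a, b, c, d

# break: stop the loop entirely when the condition is met.
for letter in letters:
    if letter == "b":
        break
    print(letter)          # only a

# continue: skip to the next object in the collection.
for letter in letters:
    if letter == "b":
        continue
    print(letter)          # a, c, d

# range: repeat a block a specific number of times.
for i in range(3):
    print(i)               # 0, 1, 2

# enumerate: bind one name to the loop counter and another to the
# value, via tuple unpacking.
for index, letter in enumerate(letters):
    print(index, letter)   # 0 a, 1 b, 2 c, 3 d
```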
Interior and Exterior Angles Worksheets

What Are Interior & Exterior Angles?

Sometimes all you need to put the pieces together is just a little knowledge of basic mathematical angles. Interior and exterior angles are among the basic angles that students need to know to develop their knowledge of geometry. An angle that is inside a shape is known as an interior angle. Since triangles have three sides, there can only be three interior angles. It is important to note that the sum of the three interior angles a + b + c is always equal to 180 degrees. Let's say a triangle gives two values with the third one unknown: a = 40 degrees, b = 60 degrees, and c = ? You can easily find the value by forming the equation a + b + c = 180; 40 + 60 + c = 180, so c = 80 degrees. On the other hand, an exterior angle lies outside the shape: it is the angle formed between any side of the shape and a line extended from the adjacent side. An interior angle and its adjacent exterior angle are supplementary, which means that when you add them, they are equal to 180 degrees.

Philosophy is a game with objectives and no rules. Mathematics is a game with rules and no objectives.
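The worksheet example can be checked with a couple of lines of Python, a small sketch of the two 180-degree relationships above:

```python
# Interior angles of a triangle sum to 180 degrees,
# so the unknown third angle is 180 minus the two known ones.
def third_interior_angle(a, b):
    return 180 - (a + b)

def exterior_angle(interior):
    """An interior angle and its adjacent exterior angle are supplementary."""
    return 180 - interior

c = third_interior_angle(40, 60)
print(c)                   # 80
print(exterior_angle(c))   # 100
```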
When you set up an experiment, you have to have one setup where you introduce the independent variable (the experimental setup) and one where you do not (the control). That way, you can compare the two and see the impact of the independent variable. As the website I have linked to says:

The scientist must contrast an "experimental group" with a "control group". The two groups are treated EXACTLY alike except for the ONE variable being tested. Sometimes several experimental groups may be used.

For example, if you want to determine the impact of adding a kind of fertilizer to your crops, you have to do it like this: You need one plot of plants that gets the fertilizer, but you also need one plot that does not get fertilized. Then you see how much each produces and compare. By having the control group, you can see how much of an effect the fertilizer (the independent variable) had. If you had no control, you would not know how much impact the fertilizer had.

When using the scientific method to solve a problem or question, one formulates an educated guess, or hypothesis, which is a possible answer to the problem or question. In order to test your hypothesis, you perform an experiment. An experiment is set up to test the effect of a variable on an outcome. The variable that the experimenter manipulates is known as the independent variable. If some change occurs as a result of the independent variable, this change is the dependent variable. If you set up an experiment, you need a standard of comparison to see if any change occurs as a result of your independent variable. Therefore, you divide your experiment into two groups: the experimental group and the control group. The experimental group, or the manipulated group, gets the variable you are testing. However, the control group does not get the variable. At the end of the experiment, you compare the data from both groups to see if your hypothesis was correct or not.

For example, if I want to test the hypothesis that plants grow best if given a new plant food, then, if I have 500 plants for instance, I would divide my plants into two groups of 250 plants. The first 250 plants, the experimental group, would be given the plant food. The presence of this plant food is the independent variable I am manipulating. The other 250 plants are the control group. They will not receive any plant food. I will collect data for several days or weeks. This data would be measurements of the height of my plants in both groups. All other variables must be the same in both groups: amount of light, water, type of pots, soil, etc. The only variable that is different would be the plant food given to the experimental group. At the end of the study, I will analyze the data and see if I will accept or reject my hypothesis. A good experiment should be re-tested several times to make sure the results are accurate!
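A toy sketch of the plant-food experiment described above, comparing mean growth in the experimental group against the control group; the heights are made-up illustrative numbers, not real data:

```python
# Compare group means: the difference estimates the effect of the
# independent variable (here, the plant food).
experimental = [12.1, 13.4, 12.8, 14.0, 13.2]  # cm, with plant food
control = [10.9, 11.5, 11.2, 10.8, 11.6]       # cm, without

def mean(xs):
    return sum(xs) / len(xs)

effect = mean(experimental) - mean(control)
print(f"Estimated effect of the plant food: {effect:.2f} cm")
```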
Chemical reactions are the backbone of modern chemistry. These reactions are responsible for everything from the creation of life to the explosiveness of dynamite. At their core, chemical reactions are simply the rearrangement of atoms between molecules to make new molecules. One of the most important concepts in chemical reactions is stoichiometry. This is the study of the relationships between the reactants and products in a chemical reaction. By understanding these relationships, chemists can predict how much of each reactant is needed to produce a given amount of product. Stoichiometry is based on the principle of conservation of mass. This means that in any chemical reaction, the total mass of the reactants must be equal to the total mass of the products. This principle is what allows us to balance chemical equations, ensuring that the same number of atoms of each element are present on both sides of the equation. The study of stoichiometry is vital in many fields, including medicine, agriculture, and engineering. In medicine, for example, stoichiometry is used to determine the correct dosage of a drug to ensure that it is effective without being toxic to the patient. In agriculture, stoichiometry is used to optimize fertilizer application to crops, ensuring that they receive the correct balance of nutrients. Chemical reactions and stoichiometry can be incredibly fascinating and rewarding fields to study. They offer insights into the fundamental workings of the world around us and have countless practical applications that improve our daily lives. Whether you’re interested in chemical engineering, medicine, or simply want to understand the world on a deeper level, the study of chemical reactions and stoichiometry is an excellent place to start.
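A small stoichiometry sketch (my own example, not from the article) for the balanced reaction 2 H2 + O2 -> 2 H2O: given a mass of hydrogen, find the mass of oxygen required and water produced via moles and molar masses, and confirm conservation of mass:

```python
# Molar masses in g/mol.
M_H2, M_O2, M_H2O = 2.016, 31.998, 18.015

def water_from_hydrogen(grams_h2):
    moles_h2 = grams_h2 / M_H2
    moles_o2 = moles_h2 / 2    # 2 mol H2 react with 1 mol O2
    moles_h2o = moles_h2       # 2 mol H2 yield 2 mol H2O
    return moles_o2 * M_O2, moles_h2o * M_H2O

o2_g, h2o_g = water_from_hydrogen(10.0)
print(f"O2 needed: {o2_g:.1f} g, H2O produced: {h2o_g:.1f} g")
# Conservation of mass: 10.0 g H2 + ~79.4 g O2 == ~89.4 g H2O
```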
Social research is a systematic and empirical investigation that seeks to understand and explain various aspects of human behavior, interactions, and social phenomena. It involves the application of scientific methods to collect and analyze data in order to generate knowledge about social issues and patterns.

For example, let's consider a research project on the effects of social media on self-esteem among teenagers. The researchers may begin by formulating research questions, such as "How does the use of social media impact the self-esteem of teenagers?" To gather data, they could design surveys or conduct interviews to measure teenagers' social media usage, self-esteem levels, and any associated factors such as comparison with peers or exposure to idealized body images.

Next, the researchers would collect the data from a representative sample of teenagers, ensuring diversity in age, gender, and socioeconomic backgrounds. They would then analyze the data using statistical techniques to identify patterns, correlations, and potential causal relationships between social media use and self-esteem.

In the analysis phase, researchers would interpret the findings, considering limitations and potential biases. They may discover that excessive social media use is negatively correlated with self-esteem, particularly in cases where teenagers engage in upward social comparisons or experience cyberbullying. These insights could help identify potential interventions, such as promoting media literacy, fostering positive online environments, or encouraging offline activities to improve self-esteem among teenagers.

Social research, in this case, provides evidence-based knowledge about the effects of social media on teenagers' self-esteem, which can inform policies, interventions, and educational programs aimed at promoting healthy social media use and well-being among adolescents.

Characteristics of Social Research

Systematic Approach: Social research follows a systematic and structured process, involving clear steps and procedures to ensure the collection, analysis, and interpretation of data are conducted in a rigorous and organized manner.

Empirical Investigation: Social research is based on empirical evidence, relying on the collection of data from real-world observations or experiences rather than relying solely on theoretical or speculative assumptions.

Objectivity: Social research strives to maintain objectivity by minimizing bias and personal opinions in the collection, analysis, and interpretation of data. Researchers aim to gather and analyze data in an impartial and unbiased manner to ensure the findings are reliable and valid.

Ethical Considerations: Social research adheres to ethical principles and guidelines to protect the rights, privacy, and well-being of research participants. Researchers obtain informed consent, ensure confidentiality, and minimize any potential harm or risks to participants during the research process.

Replicability: Social research should be replicable, meaning that other researchers should be able to follow the same procedures and obtain similar results when conducting similar studies. This allows for the verification and validation of research findings, strengthening the overall body of knowledge.

Use of Scientific Methods: Social research employs scientific methods and techniques, such as surveys, experiments, interviews, observations, and statistical analysis, to collect and analyze data.
These methods help ensure rigor, validity, and reliability in the research process.

Theory and Hypothesis Testing: Social research often involves testing theories or hypotheses derived from existing knowledge or previous research. Researchers formulate specific hypotheses that can be tested using empirical data, allowing for the confirmation, modification, or rejection of theoretical explanations.

Contextual Understanding: Social research aims to understand social phenomena within their specific social, cultural, and historical contexts. It recognizes the influence of various social factors, such as culture, gender, class, and power dynamics, in shaping human behavior and interactions.

Practical Application: Social research is often conducted with the intention of generating knowledge that can be applied to real-world problems and inform decision-making processes. Research findings may be used to develop policies, interventions, or programs aimed at addressing social issues and improving societal well-being.

Continuous Learning and Improvement: Social research is an iterative process that allows for continuous learning and improvement. Researchers critically evaluate their own methods, assumptions, and findings, contributing to the advancement and refinement of social research methodologies and theories.

Sampling: Social research often involves selecting a representative sample from a larger population. The sample should reflect the characteristics of the population under study to ensure generalizability of findings. Careful consideration is given to sampling techniques, sample size, and potential biases in order to obtain reliable and valid results.

Data Collection Methods: Social research employs various methods to collect data, such as surveys, interviews, focus groups, participant observation, and archival research. Researchers carefully select the most appropriate methods based on the research questions, objectives, and the nature of the phenomenon being studied.

Data Analysis: Social research involves the systematic analysis of collected data to identify patterns, relationships, and trends. Statistical techniques, qualitative analysis, or a combination of both may be used to make sense of the data and draw meaningful conclusions.

Iterative Nature: Social research often involves an iterative process, with the possibility of refining research questions, modifying research methods, or adjusting hypotheses based on initial findings. This iterative approach allows for deeper exploration and greater insights into the research topic.

Interdisciplinary Approach: Social research often draws upon knowledge and theories from multiple disciplines, such as sociology, psychology, anthropology, economics, and political science. By integrating insights from various fields, researchers can develop a more comprehensive understanding of complex social phenomena.

Contextual Sensitivity: Social research recognizes the importance of understanding social phenomena within their specific contexts. Researchers consider the social, cultural, economic, and historical factors that shape the behaviors and experiences of individuals and groups.

Peer Review: Social research undergoes a rigorous process of peer review, where experts in the field critically evaluate the research methods, analysis, and conclusions. This process ensures the quality and validity of research findings and helps maintain high standards within the scientific community.
Cumulative Knowledge: Social research contributes to the accumulation of knowledge in a particular field or topic. Research findings are built upon previous studies, theories, and empirical evidence, fostering an ongoing dialogue and advancement of understanding within the social sciences.

Reflexivity: Social research acknowledges the potential influence of the researcher's own background, perspectives, and biases on the research process and findings. Researchers engage in reflexivity, critically reflecting on their own assumptions and values to minimize potential bias and enhance the objectivity of the study.

Communication and Dissemination: Social research involves effectively communicating research findings to relevant stakeholders, such as policymakers, practitioners, and the general public. Dissemination can take various forms, including academic publications, reports, presentations, and public engagement activities, to ensure the accessibility and applicability of research findings.

Purposes of Social Research

The purposes of social research can be broadly categorized into the following:

Exploration: Social research aims to explore and gain a deeper understanding of various social phenomena, issues, and trends. It seeks to uncover new knowledge and insights by examining topics that have not been extensively studied before or by approaching existing topics from different perspectives.

Description: Social research is conducted to provide a comprehensive and accurate description of social phenomena, behaviors, and characteristics. It involves systematically collecting data to portray the characteristics, patterns, and distributions of variables or factors related to the research topic.

Explanation: Social research seeks to explain the reasons, causes, and relationships underlying social phenomena. It aims to uncover the mechanisms, processes, and factors that contribute to specific outcomes or behaviors, helping researchers understand why certain social patterns or behaviors occur.

Prediction: Social research strives to develop predictive models or theories that can forecast future trends or outcomes. By analyzing historical data and identifying patterns, researchers can make informed projections about future social phenomena, behaviors, or events.

Evaluation: Social research is often used to assess the effectiveness or impact of social policies, programs, interventions, or initiatives. It involves gathering data to evaluate the outcomes, effects, strengths, weaknesses, and unintended consequences of social interventions, enabling decision-makers to make informed judgments about their implementation and effectiveness.

Social Change: Social research can contribute to social change by informing and advocating for policy reforms, social justice, and improvements in social systems. By highlighting social inequalities, discrimination, or systemic issues, research can raise awareness and provide evidence to support efforts aimed at addressing social problems and promoting positive societal transformations.

Criteria of Good Research

A good research study typically exhibits the following criteria.

Validity

Validity in research refers to the extent to which a study measures or captures what it intends to measure or capture. It ensures that the data collected and the findings accurately reflect the research objectives, concepts, and constructs under investigation.
Validity is crucial because it determines the accuracy and soundness of the study's conclusions and the degree to which they can be generalized to the broader population or context. There are different types of validity that researchers consider when assessing the validity of a study:

- Construct validity refers to the extent to which the operationalized variables or measures in a study accurately represent the underlying theoretical constructs or concepts. It involves ensuring that the measurements or indicators used in the study align with the intended theoretical concepts and accurately capture the phenomena being studied.

- Internal validity pertains to the degree to which a research study establishes a causal relationship between variables. It focuses on the extent to which the observed effects can be attributed to the manipulated independent variable, while controlling for other potential factors or confounding variables that may influence the results.

- External validity concerns the generalizability of research findings beyond the specific study sample or context. It assesses the extent to which the results can be applied or generalized to a broader population or real-world settings. External validity is enhanced when the study sample and conditions closely resemble the target population or context of interest.

- Content validity is relevant for studies that use surveys, questionnaires, or other measurement instruments. It refers to the extent to which the items or questions in the instrument adequately cover the full range of the construct being measured and represent the relevant content domain.

- Face validity refers to the subjective judgment of whether a measurement or research design appears to be valid based on its face value. It involves a preliminary assessment of whether the study's measures or methods seem appropriate for capturing the intended phenomenon.

To ensure validity in a research study, researchers employ various strategies such as using established measurement scales or instruments with demonstrated validity, conducting pilot studies to refine measurement procedures, employing appropriate research designs and statistical analyses, and critically examining the logical and theoretical connections between variables and constructs. By attending to validity, researchers increase the confidence in the accuracy and meaningfulness of their findings, strengthening the overall quality and impact of the research.

Reliability

Reliability in research refers to the consistency and stability of research findings. A good research study demonstrates reliability by producing consistent results when the study is repeated under similar conditions or using similar methods. Reliability is essential because it ensures that the measurements, data collection procedures, and analysis techniques are dependable and can be trusted. There are several types of reliability that researchers consider when assessing the reliability of a study:

- Test-retest reliability assesses the consistency of measurements over time. It involves administering the same measure or test to the same group of participants on two separate occasions and examining the degree of correlation between the scores obtained. A high correlation indicates a high level of test-retest reliability.

- Inter-rater reliability pertains to the consistency of measurements when different raters or observers are involved. It is relevant in studies where multiple researchers independently assess the same phenomena or rate the same behaviors.
Inter-rater reliability is determined by examining the agreement or correlation between the ratings provided by different observers.

- Internal consistency reliability assesses the degree of agreement or consistency among the items or questions within a measurement instrument. It is particularly relevant in studies that use scales or questionnaires. Internal consistency reliability is often measured using techniques such as Cronbach's alpha, which indicates how closely related the items are within the instrument (see the sketch below).

- Parallel forms reliability examines the consistency of measurements obtained from different but equivalent forms of a measurement instrument. Researchers administer two versions of the instrument to the same group of participants and assess the degree of correlation between the scores obtained from each version.

To ensure reliability in a research study, researchers employ various strategies such as using standardized and validated measurement instruments, implementing clear and consistent data collection procedures, providing detailed instructions to participants and raters, conducting pilot studies to identify and address any sources of inconsistency or ambiguity, and using appropriate statistical techniques to assess reliability. By attending to reliability, researchers increase the confidence in the stability and consistency of their findings, allowing for more robust conclusions and generalizability of the research results.

Objectivity

Objectivity in research is the principle of minimizing bias or personal influence in all aspects of the research process, including study design, data collection, analysis, and interpretation. A good research study aims to maintain objectivity by adhering to rigorous methods and practices that prioritize the use of solid evidence and logical reasoning rather than personal opinions or preconceived notions.

Study Design: Objectivity begins with the design of the research study. Researchers should strive to develop a study design that minimizes potential biases and ensures the collection of unbiased data. This may involve selecting appropriate research methods, sampling techniques, and control groups to reduce confounding factors and increase the validity of the study.

Data Collection: Objectivity in data collection involves implementing standardized procedures, protocols, and guidelines to ensure consistency and minimize personal biases. Researchers should clearly define data collection methods, train data collectors to follow standardized procedures, and use reliable and valid measurement instruments. Objective data collection helps to ensure that the data accurately reflect the phenomenon being studied.

Analysis: Objectivity in data analysis involves employing rigorous and transparent analytical techniques that minimize the influence of personal biases. Researchers should use appropriate statistical methods and software to analyze the data objectively. It is crucial to document the analysis process, including any decisions or assumptions made, to enhance transparency and reproducibility.

Interpretation: Objectivity in interpretation entails avoiding personal biases or preconceived notions when interpreting the research findings. Researchers should critically examine the data and objectively analyze the results in the context of existing knowledge and theoretical frameworks. The interpretation should be based on the evidence provided by the data and supported by logical reasoning.
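For the internal-consistency measure mentioned above, here is a minimal sketch of Cronbach's alpha computed from scratch; the respondent-by-item scores are made-up illustrative numbers:

```python
# Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / total variance).
# Rows are respondents, columns are questionnaire items.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    k = len(scores[0])  # number of items
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # high alpha: items move together
```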
Peer Review: Peer review plays a vital role in maintaining objectivity in research. By submitting research findings to the scrutiny of other experts in the field, researchers ensure that their work undergoes critical evaluation and validation. Peer reviewers provide valuable feedback, challenge potential biases, and contribute to the objectivity and integrity of the research.

Transparency and Documentation: Objectivity is supported by transparency and thorough documentation of the research process. Researchers should provide detailed descriptions of the research methods, procedures, and analytical techniques used. By making the research process transparent and documenting all steps taken, researchers enable others to assess the objectivity of their work and replicate the study if needed.

By maintaining objectivity throughout the research process, researchers enhance the credibility, reliability, and validity of their findings. Objectivity ensures that research is based on solid evidence, contributes to the body of knowledge, and has practical applications. It fosters trust in the research community and enables evidence-based decision-making in various fields.

Generalizability

Generalizability in research refers to the extent to which the findings of a study can be validly applied or generalized to a larger population or broader context beyond the specific study sample. A good research study strives to provide findings that are representative and applicable to a wider range of individuals, groups, or settings.

Sampling: Generalizability begins with the selection of an appropriate study sample. Researchers should aim to use a sample that is representative of the target population of interest. This involves employing sampling techniques that minimize bias and increase the likelihood of selecting a diverse and representative sample. Random sampling or stratified sampling methods are commonly used to enhance the generalizability of the findings.

Sample Size: The size of the study sample also influences generalizability. A larger sample size increases the statistical power of the study and allows for more accurate estimates of population characteristics. Researchers should aim to have a sample size that is adequate to make meaningful inferences and generalize the findings to the target population.

Study Context: Generalizability also depends on the similarity between the study context and the broader population or settings of interest. Researchers should carefully consider the context in which the study was conducted and assess its relevance to other contexts. This includes examining the characteristics, demographics, and relevant contextual factors of the study sample and determining the extent to which they align with the larger population or relevant settings.

Replication: The replication of research findings in different studies or across different contexts strengthens generalizability. When multiple independent studies produce consistent results, it increases confidence in the applicability of the findings beyond the original study. Researchers should encourage replication studies to further validate and enhance the generalizability of their findings.

Contextual Factors: Generalizability is also influenced by the contextual factors that may impact the applicability of the findings. Researchers should identify and acknowledge the specific contextual factors that may limit or enhance the generalizability of their study.
These factors may include cultural differences, geographical variations, temporal changes, or specific characteristics of the target population.

External Validation: External validation involves comparing the findings of a study with other existing studies or established theories to assess their compatibility and generalizability. Researchers should examine how well their findings align with previous research and theories in the field, and ensure that their study contributes to the existing body of knowledge.

While complete generalizability to all populations or contexts is often challenging, a good research study aims to provide findings that have a high degree of generalizability by employing appropriate sampling techniques, considering the study context, conducting replications, and acknowledging the limitations and contextual factors that may affect generalizability. Enhancing generalizability allows for a broader application of the research findings, increasing their relevance and impact.

Methodological Rigor

Methodological rigor in research refers to the thoroughness, accuracy, and precision with which the research study is designed, conducted, and analyzed. A good research study demonstrates methodological rigor by employing appropriate and robust research methods and techniques, ensuring the reliability and validity of the study's findings.

Research Design: Methodological rigor begins with a clear and well-designed research plan. Researchers should carefully select the most appropriate research design based on the research objectives and the nature of the research question. Common research designs include experimental, quasi-experimental, correlational, or qualitative designs. The research design should be aligned with the research goals and allow for the appropriate testing of hypotheses or exploration of phenomena.

Sampling Techniques: Methodological rigor involves the use of appropriate sampling techniques to select participants or cases for the study. Researchers should select a sample that is representative of the target population or relevant group. Random sampling, stratified sampling, or purposive sampling methods are commonly employed to ensure a diverse and representative sample.

Data Collection Procedures: Methodological rigor requires rigorous and standardized data collection procedures. Researchers should clearly define and document the data collection methods, ensuring that they are aligned with the research objectives and the research design. This includes developing reliable measurement instruments, establishing clear protocols for data collection, and training data collectors to maintain consistency and minimize biases.

Data Quality and Validity: Methodological rigor entails ensuring the quality and validity of the data collected. Researchers should implement strategies to enhance data quality, such as conducting pilot studies to refine data collection procedures, ensuring appropriate data coding and entry, and employing quality checks and validation measures. Researchers should also consider the validity of the measurements and use established and validated instruments or develop rigorous measures to capture the variables of interest.

Data Analysis Techniques: Methodological rigor involves the use of appropriate and sound data analysis techniques. Researchers should employ statistical or qualitative analysis methods that are suitable for the research design and the nature of the data collected.
The chosen analysis techniques should align with the research objectives and allow for the appropriate testing of hypotheses or exploration of research questions. Internal and External Validity: Methodological rigor includes considerations of internal and external validity. Internal validity relates to the extent to which the study provides evidence of a causal relationship between variables, while external validity concerns the generalizability of the findings beyond the specific study sample. Researchers should address potential threats to internal validity, such as confounding variables or selection biases, and consider the factors that may influence the external validity of the findings. Research Ethics: Methodological rigor also encompasses adherence to ethical principles and guidelines in conducting the research. Researchers should obtain informed consent from participants, protect their privacy and confidentiality, minimize potential harm or discomfort, and ensure the study is conducted ethically and responsibly. By maintaining methodological rigor, researchers increase the trustworthiness and credibility of their findings. Methodologically rigorous studies provide robust evidence, contribute to the advancement of knowledge, and enable confident conclusions and practical implications. Contribution to Knowledge Contribution to knowledge in research refers to the extent to which a study adds value and expands the existing understanding of a particular research topic. A good research study makes a meaningful contribution by addressing a significant research gap, advancing theoretical understanding, challenging existing assumptions, or providing practical implications for real-world applications. Research Gap: A strong contribution to knowledge begins with identifying and addressing a significant research gap. Researchers should critically review the existing literature and identify areas where knowledge is lacking or incomplete. By identifying a research gap, the study can provide new insights or fill existing knowledge voids. Novelty and Originality: A good research study adds value by introducing novel and original ideas, concepts, or approaches. It goes beyond replicating existing studies and brings new perspectives to the research topic. By presenting unique findings or proposing innovative methodologies, the study contributes to the expansion of knowledge. Theoretical Advancement: Contribution to knowledge involves advancing theoretical understanding in the field. Researchers should strive to develop or refine existing theoretical frameworks, models, or concepts. The study can propose new theoretical perspectives, provide empirical evidence to support or challenge existing theories, or integrate various theories to develop a more comprehensive understanding of the research topic. Empirical Evidence: A valuable contribution to knowledge is based on strong empirical evidence. Researchers should use robust research methods and gather reliable data to support their findings. By conducting rigorous data analysis and presenting compelling evidence, the study enhances the credibility and reliability of its contribution. Challenging Assumptions: A good research study challenges existing assumptions, paradigms, or beliefs. It critically examines established theories or practices and offers alternative explanations or perspectives. By questioning prevailing assumptions, the study encourages critical thinking and promotes intellectual debate within the field. 
Practical Implications: Contribution to knowledge includes providing practical implications or recommendations for real-world applications. The study should offer insights that can be translated into practical strategies, policies, or interventions. By highlighting the practical significance of the research findings, the study becomes more relevant and impactful. Replicability and Generalizability: A valuable contribution to knowledge is one that can be replicated and generalized beyond the specific study. Researchers should provide detailed descriptions of their research methods, data collection procedures, and analysis techniques to facilitate replication by other researchers. The study should also consider the external validity of the findings and discuss the potential generalizability to other populations or contexts. By making a meaningful contribution to knowledge, a research study expands the understanding of the research topic, guides future research directions, informs decision-making, and has the potential to drive positive change in the field. It strengthens the body of knowledge and contributes to the advancement of the respective discipline or area of study. Clarity of Communication Clarity of communication in research refers to the ability of a study to effectively convey its research question, objectives, methods, findings, and conclusions in a clear and understandable manner. A good research study ensures that the information is presented in a way that is organized, accessible, and comprehensible to both experts in the field and a wider audience. Research Question and Objectives: A study demonstrates clarity of communication by clearly stating its research question and objectives. The research question should be concise and specific, reflecting the focus of the study. The objectives should outline the specific goals and intentions of the research, guiding the entire study. Methodology: Clarity of communication is evident in the description of the research methods. The study should provide a clear and detailed explanation of the research design, sampling techniques, data collection procedures, and data analysis methods. It should enable the reader to understand how the study was conducted and how the data were collected and analyzed. Findings and Results: A good research study communicates its findings and results in a clear and transparent manner. The study should present the data analysis outcomes and statistical results, if applicable, in an organized and comprehensible format. It should explain the key findings, highlight significant patterns or relationships, and provide relevant statistical or qualitative evidence to support the conclusions. Organization and Structure: Clarity of communication is facilitated by a well-organized and structured presentation of the research study. The study should follow a logical flow, with clear sections or chapters that introduce the research, provide background information, describe the methodology, present the findings, and conclude with the study’s implications and limitations. Each section should be coherent and interconnected, allowing the reader to follow the research process smoothly. Language and Writing Style: A good research study uses clear and concise language that is easily understandable by the target audience. It avoids unnecessary jargon, technical terms, or excessive use of acronyms. The writing style should be formal and objective, conveying the information accurately and professionally. 
Visual Aids: Clarity of communication can be enhanced through the use of visual aids such as tables, graphs, charts, and diagrams. These aids can help illustrate complex data or relationships, making the information more accessible and comprehensible to the reader. Visual aids should be appropriately labeled and referred to in the text. Accessibility and Audience Considerations: A good research study takes into account the audience it aims to communicate with. It considers the background knowledge and expertise of the readers and adjusts the level of technicality and complexity accordingly. The study should strive to be accessible to a wider audience, providing sufficient explanations and context to facilitate understanding. By ensuring clarity of communication, a research study effectively conveys its research question, methods, findings, and conclusions to its intended audience. This facilitates the dissemination and understanding of the research, promotes engagement with the study, and allows for the application of the findings in both academic and practical contexts. Transparent and Replicable Transparency and replicability are important characteristics of a good research study, ensuring that the study can be understood, scrutinized, and replicated by other researchers. By promoting transparency, the study provides detailed information about the research methods, data collection procedures, and analysis techniques used, allowing others to reproduce the study’s procedures and verify the findings. Here are some details regarding transparency and replicability in research: Research Methods: A good research study clearly describes the research methods employed. This includes providing an in-depth explanation of the research design, including any experimental or control conditions, and justifying why a particular approach was chosen. The study should outline the steps taken to address potential biases, confounding variables, or limitations inherent in the chosen methods. Data Collection Procedures: Transparency is demonstrated by providing detailed information about the data collection procedures. The study should describe the data sources, such as surveys, interviews, observations, or secondary data, and provide information about the sample size, sampling techniques, and any participant selection criteria. It should also include a clear description of the data collection instruments, such as questionnaires or interview protocols, including any modifications or adaptations made. Data Preprocessing: To promote replicability, the study should provide information about the preprocessing steps conducted on the data. This includes any data cleaning, data transformation, or data reduction techniques applied. Detailed descriptions should be given regarding any decisions made during data preprocessing, such as outlier removal, missing data handling, or data aggregation. Analysis Techniques: A transparent study describes the specific analysis techniques used to analyze the data. This includes providing information about the statistical tests, software, or algorithms employed, along with the rationale for their selection. The study should explain how variables were measured, operationalized, and analyzed, ensuring that the analytical methods are appropriate and reliable. Replication Materials: To facilitate replicability, a good research study should make replication materials available. 
This includes providing access to research instruments, such as questionnaires or interview protocols, as well as any coding schemes, algorithms, or computer code used for data analysis. By sharing these materials, other researchers can reproduce the study’s procedures and verify the reported findings. Open Data and Open Access: To enhance transparency and replicability, the study can consider making the data openly available and accessible to other researchers. Open data allows for independent verification of the findings and enables further analysis or reanalysis. Additionally, open access publication ensures that the study’s findings are freely available to the wider research community, promoting transparency and facilitating future research. Methodological Limitations: Transparency involves acknowledging the limitations and potential biases of the research study. The study should discuss any limitations in the research design, sample selection, data collection, or analysis techniques, as well as any potential sources of bias or confounding factors. By openly acknowledging these limitations, the study encourages critical evaluation and provides insights into the potential impact on the findings. By promoting transparency and replicability, a research study allows for the evaluation and verification of its procedures and findings by other researchers. This fosters scientific progress, builds upon existing knowledge, and contributes to the robustness and credibility of the research field. Critical evaluation in research involves a thoughtful and analytical examination of the study’s strengths, weaknesses, limitations, and implications. A good research study demonstrates critical thinking by acknowledging the complexity of the research topic, considering alternative explanations, and engaging in a rigorous evaluation of its own design, methods, and findings. Here are some details regarding critical evaluation in research: Research Design: A critical evaluation involves assessing the appropriateness and strengths of the research design. The study should consider whether the chosen design effectively addresses the research question and objectives, and whether alternative designs could have yielded different results. It should critically analyze the advantages and limitations of the chosen design, such as potential biases, confounding variables, or limitations in causal inference. Data Collection Methods: Critical evaluation encompasses an examination of the data collection methods employed. The study should assess the strengths and weaknesses of the chosen methods and consider whether alternative methods could have provided additional insights. It should critically analyze the reliability and validity of the data collection instruments, potential sources of measurement error, and any limitations in capturing the full complexity of the research topic. Data Analysis Techniques: A good research study critically evaluates the data analysis techniques used. It should consider the appropriateness of the chosen techniques in addressing the research question and objectives, as well as any limitations or assumptions associated with the selected methods. The study should engage in a thoughtful evaluation of the robustness of the statistical or qualitative analysis, considering alternative approaches or sensitivity analyses. Limitations and Bias: Critical evaluation involves acknowledging the limitations and potential biases of the study. 
The study should openly discuss any methodological limitations, sample biases, or confounding factors that may have influenced the findings. It should critically analyze potential sources of bias, such as selection bias, measurement bias, or researcher bias, and consider their impact on the validity and generalizability of the results. Alternative Explanations: A critical evaluation considers alternative explanations for the findings. The study should explore potential alternative hypotheses or interpretations that could account for the observed results. It should critically analyze competing theories or explanations and provide a thoughtful discussion of why the chosen interpretation is the most plausible based on the available evidence. Implications and Significance: Critical evaluation involves a thoughtful examination of the implications and significance of the study’s findings. The study should critically analyze the practical, theoretical, or policy implications of the research results. It should consider the broader context and relevance of the findings, identifying potential areas for further research or future applications. Reflective Analysis: A good research study engages in reflective analysis, critically evaluating the strengths and weaknesses of the study as a whole. It should discuss lessons learned, challenges encountered, and areas for improvement in future research. This reflective analysis demonstrates a commitment to continuous learning and improvement in the research process. By demonstrating critical evaluation, a research study engages in a rigorous examination of its own design, methods, and findings. It acknowledges the complexity and limitations of the research process, promotes intellectual humility, and contributes to the advancement of knowledge by identifying areas for improvement and guiding future research directions. These criteria collectively contribute to the quality, integrity, and impact of a good research study, ensuring that it generates reliable, valid, and meaningful knowledge in the respective field of study. In conclusion, social research serves as a powerful tool for understanding and investigating various social phenomena. Its characteristics, including validity, reliability, objectivity, generalizability, methodological rigor, contribution to knowledge, clarity of communication, transparency, and critical evaluation, collectively contribute to the quality and credibility of a research study. By upholding these characteristics, researchers can ensure that their studies are well-designed, accurate, and impactful. Through valid and reliable research methods, researchers can accurately measure and capture the concepts and constructs under investigation. By striving for objectivity, they minimize personal bias and subjective interpretations. Generalizability enables the application of research findings to broader populations or relevant contexts, increasing the study’s relevance and practical implications. Methodological rigor ensures the use of appropriate research methods and techniques, enhancing the study’s credibility and validity. Additionally, a good research study makes a meaningful contribution to existing knowledge by addressing research gaps, challenging assumptions, expanding theoretical understanding, and providing practical implications. Clarity of communication ensures that the research question, objectives, methods, findings, and conclusions are effectively conveyed to both experts and a wider audience. 
Transparency and replicability enable other researchers to verify and build upon the study’s findings. Finally, critical evaluation demonstrates intellectual rigor by acknowledging limitations, considering alternative explanations, and evaluating the strengths and weaknesses of the research design, methods, and findings. By engaging in critical evaluation, researchers contribute to the growth of knowledge and guide future research directions. In summary, understanding and embodying these characteristics are essential for conducting high-quality social research that advances our understanding of human behavior, society, and the world we live in. By upholding these principles, researchers contribute to the integrity and progress of the social sciences and their potential to make positive impacts on individuals and communities.
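To make one of the recurring techniques above concrete: stratified random sampling, mentioned under both generalizability and methodological rigor, can be sketched in a few lines of Python. This is a minimal illustration under assumed data (the population list and strata are invented for the example), not a prescription for any particular study:

# Minimal sketch of stratified random sampling (illustrative data only).
# Each stratum is sampled in proportion to its share of the population,
# which helps the sample mirror the population's composition.
import random

population = (
    [("urban", i) for i in range(700)] +   # hypothetical strata
    [("rural", i) for i in range(300)]
)

def stratified_sample(units, key, n, seed=42):
    """Draw n units, allocated proportionally across strata."""
    rng = random.Random(seed)              # fixed seed for replicability
    strata = {}
    for u in units:
        strata.setdefault(key(u), []).append(u)
    sample = []
    for members in strata.values():
        share = round(n * len(members) / len(units))
        sample.extend(rng.sample(members, share))
    return sample

s = stratified_sample(population, key=lambda u: u[0], n=100)
print(sum(1 for u in s if u[0] == "urban"), "urban;",
      sum(1 for u in s if u[0] == "rural"), "rural")   # ~70 urban, 30 rural

Fixing the random seed is a small nod to the replicability principle discussed above: anyone re-running the script draws the same sample.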
As you learn about chemistry, you might come across molecules that appear different but have the same formula. This is called isomerism. Isomers are molecules that share a molecular formula but have different structures, which gives them distinct physical or chemical properties. There are two main types of isomerism: structural isomerism and stereoisomerism. Butane and 2-methylpropane have the same molecular formula, C4H10, but different arrangements of atoms. This type of difference between molecules is known as structural isomerism. Structural isomers have the same molecular formula but different arrangements of atoms. For example, butane has a straight or continuous chain of four carbon atoms, while 2-methylpropane has a branched chain with a central carbon atom bonded to three other carbon atoms. This difference in structure leads to different physical and chemical properties for the two compounds. There are three main types of structural isomers: chain isomers, positional isomers, and functional group isomers. Chain isomers have the same molecular formula but different arrangements of the carbon chain; butane and 2-methylpropane are chain isomers. Positional isomers have the same molecular formula and the same carbon chain, but the functional group sits in a different position. For example, propan-1-ol and propan-2-ol are positional isomers: both have the molecular formula C3H8O, but the hydroxyl group (-OH) is attached to a different carbon atom in each compound. Functional group isomers have the same molecular formula but different functional groups. For example, ethanol and dimethyl ether (CH3OCH3) are functional group isomers: both have the molecular formula C2H6O, but ethanol has an -OH group while dimethyl ether has an ether (-O-) linkage. These three types of structural isomers demonstrate how small changes in the arrangement of atoms can lead to different chemical and physical properties. Beware of 'false' isomers: a structure that is simply n-butane rotated about its central C-C bond may look different on paper, but it is not a chain isomer of butane or 2-methylpropane, because the carbon chain is unchanged - no carbon atom is bonded to more than two other carbon atoms. Positional isomers, by contrast, are structural isomers that differ only in the location of a functional group, substituent, or some other feature on a "parent" structure. Positional isomerism refers to molecules that have the same functional group in a different position on the same carbon chain. For example, propan-1-ol and propan-2-ol are positional isomers: their carbon chains are the same, but the -OH group is attached to a different carbon in each case. [Figure: propan-1-ol and propan-2-ol share a molecular formula, but their -OH groups are in different positions (chemguide).] You may also come across molecules that show both chain and positional isomerism: isopentanol and pentan-3-ol, for example, differ in both chain branching and the position of the -OH group. Positional isomers also occur on benzene rings. For instance, the molecular formula C7H7Cl has four isomers, depending on the position of the chlorine atom. Last but not least, let us consider functional group isomers. Functional group isomers have the same molecular formula but have different functional groups.
In other words, they belong to different homologous series. For example, molecules with the molecular formula C3H6O could be propanal, propanone, or the alcohol 2-propen-1-ol. Identifying isomers can be tricky, but with practice you can get the hang of it. The following examples can help. When you draw structural isomers, you might come across structures that look different on paper but are actually false isomers. A useful trick for checking whether you have drawn a true isomer is to name the structure using IUPAC rules: a true isomer will have a unique name. Try to find the three isomers of pentane, C5H12. You might draw the straight-chain molecule first: CH3-CH2-CH2-CH2-CH3. Next, draw the two branched-chain isomers - and watch out for 'false' isomers! It might help to make some models. The branched isomers are 2-methylbutane, CH3-CH(CH3)-CH2-CH3, and 2,2-dimethylpropane, C(CH3)4. Well done - you've drawn the structural isomers of pentane. Let us try another example: draw structural isomers for the molecular formula C4H8Cl2. There are four carbons. Some isomers will have all four carbons in a straight chain; others will be branched-chain isomers with three carbons in a straight chain and a branch from the middle carbon. Also consider the positions of the chlorine atoms: the two Cl atoms could be attached to the same carbon, to adjacent carbons such as C1 and C2, or to more distant carbons such as C1 and C3. [Figure: all the structural isomers with the molecular formula C4H8Cl2 (Chegg). How many of them did you identify?] You have seen how structural isomers have a different arrangement of atoms. We will now consider the second type of isomerism - stereoisomerism. Stereoisomers are molecules with the same order (connectivity) of atoms but a different spatial arrangement of those atoms. How is that possible? Consider that the atoms bonded to a C=C double bond all lie in the same plane. C=C double bonds are rigid: the atoms bonded to them cannot rotate about the bond, although atoms can still pivot about any single bonds in the molecule. This restricted rotation about carbon-carbon double bonds causes what we know as E-Z isomerism. To determine the priority of groups for the E-Z naming system, we use the Cahn-Ingold-Prelog (CIP) priority rules. These rules rank the substituents on each carbon of the double bond by atomic number: the higher the atomic number, the higher the priority. Once priorities are assigned, we determine whether the high-priority groups are on the same side of the double bond (Z) or on opposite sides (E). For example, consider the molecule 2-bromo-1-chloro-1-fluoroethene (structure not shown): since the bromine and chlorine atoms - the higher-priority substituents on their respective carbons - are on the same side of the double bond, this molecule is a Z-isomer. The CIP priority rules can be used to determine the E-Z isomerism of more complex molecules as well, such as those with multiple double bonds or functional groups. While the explanation above refers to the E-Z isomerism of double bonds, the Cahn-Ingold-Prelog priority rules are also used to determine the absolute configuration of chiral molecules. Chiral molecules are molecules that are non-superimposable on their mirror image, and they have at least one chiral center, also known as an asymmetric carbon.
The Cahn-Ingold-Prelog priority rules assign a priority number (1-4) to each substituent on a chiral center based on its atomic number: the higher the atomic number, the higher the priority. To apply the rules, we first identify the chiral center in the molecule. Then we assign priority numbers to the four substituents on the chiral center: the substituent with the highest atomic number is assigned priority 1, the next highest priority 2, and so on. Next, we orient the molecule so that the lowest-priority substituent points away from us (into the page or screen). Then we look at the remaining three substituents and trace them in order of decreasing priority, from 1 to 2 to 3. If this traces a clockwise direction, the molecule has an R configuration (from the Latin rectus, meaning right); if it traces a counterclockwise direction, the molecule has an S configuration (from the Latin sinister, meaning left). For example, consider the molecule 2-chlorobutane (structure not shown): tracing the priority 1, 2 and 3 substituents counterclockwise shows that this enantiomer has an S configuration. The Cahn-Ingold-Prelog priority rules can be used to determine the absolute configuration of more complex chiral molecules as well. What about when a group of atoms is attached to the C=C bond? Easy - focus on the atom directly connected to the C=C bond. Let us use a more complicated molecule as an example. [Figure: using CIP rules to identify an E-isomer (kpu pressbooks).] First, focus on the right-hand side: clearly, Cl has a higher priority than the C of the CH2CH3 group. Next, consider the left-hand side: both groups have a carbon atom directly attached to the C=C bond. Which one has priority? In cases like this, we compare the sets of atoms attached to those two carbons. In the upper group we have (C, H, H), but in the lower group we have (H, H, H); we compare the highest-ranked atom in each set, and carbon has a higher atomic number than hydrogen. So the ethyl group, CH3CH2-, has priority on the left-hand side. The two priority groups, CH3CH2- and Cl, are on opposite sides of the C=C bond, so this is an E-isomer. The complete name for the compound is (E)-3-chloro-4-methyl-3-hexene. If we used the cis-trans naming system on this isomer, it would be a cis-isomer - so you see, E-isomers don't always correspond to trans-isomers! You have learned about two types of isomers - structural isomers and stereoisomers. You have also seen how to draw structural isomers and how to name geometric isomers with the Cahn-Ingold-Prelog rules. But wait - there is still one type of isomer we have not yet covered: optical isomers! Before we conclude, let us take a brief look at what they are. Optical isomerism arises from the presence of a chiral center in a molecule. A chiral center is an atom with four different substituents attached to it in a tetrahedral arrangement; it gives the molecule its handedness, or chirality. When a molecule has a chiral center, it can exist in two mirror-image forms, called enantiomers. Enantiomers have identical physical and chemical properties except for their interaction with plane-polarized light: one enantiomer rotates plane-polarized light clockwise, while the other rotates it counterclockwise. This property is called optical activity, and it can be measured using a polarimeter.
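The chiral centers and R/S labels discussed above can also be checked computationally. Below is a minimal sketch using the open-source RDKit cheminformatics library (an assumption - RDKit is not mentioned in this article) to locate the chiral center of 2-chlorobutane and report the CIP label assigned to each enantiomer:

# A minimal sketch using RDKit (not part of this article) to find chiral
# centers and their CIP (R/S) labels. Install with: pip install rdkit
from rdkit import Chem

# The two enantiomers of 2-chlorobutane, written with opposite chirality tags.
for smiles in ("C[C@H](Cl)CC", "C[C@@H](Cl)CC"):
    mol = Chem.MolFromSmiles(smiles)
    # Returns a list of (atom_index, 'R' or 'S') for each chiral center.
    centers = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
    print(smiles, "->", centers)

# Achiral molecules report no centers: butane has none.
print(Chem.FindMolChiralCenters(Chem.MolFromSmiles("CCCC"), includeUnassigned=True))

One enantiomer prints R and the other S; which SMILES chirality tag corresponds to which label is exactly the kind of bookkeeping the CIP rules settle.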
[Figure: enantiomers have the same physical and chemical properties except for their interaction with plane-polarized light (chemguide).] Many biological molecules, such as amino acids, sugars, and enzymes, exist as enantiomers. In fact, living organisms typically use only one enantiomer of a chiral molecule, while the other enantiomer may be inactive or even toxic. This is why the synthesis of chiral drugs and other molecules is an important area of research in organic chemistry. In summary, optical isomerism arises from the presence of a chiral center in a molecule, creating two non-superimposable mirror-image forms called enantiomers. These enantiomers have identical physical and chemical properties except for their interaction with plane-polarized light, and they play an important role in biological processes and drug development. More broadly, there are two types of stereoisomers: geometric isomers and optical isomers. Geometric isomers, also known as cis-trans isomers, arise from restricted rotation around a carbon-carbon double bond: in a cis isomer, the two highest-priority groups are on the same side of the double bond, while in a trans isomer they are on opposite sides. Optical isomers, also known as enantiomers, arise from the presence of a chiral center in a molecule. Enantiomers are non-superimposable mirror images of each other and have identical physical and chemical properties except for their interaction with plane-polarized light: one enantiomer rotates plane-polarized light clockwise, while the other rotates it counterclockwise. This property is called optical activity, and it can be measured using a polarimeter. It is important to note that while structural isomers have different physical and chemical properties due to their different structures, stereoisomers have identical physical and chemical properties and differ only in their spatial arrangement of atoms. This can make them difficult to separate and distinguish from each other, but it also makes them useful in fields such as drug development, where the properties of one enantiomer may differ significantly from the other. What is geometrical isomerism? Geometric isomerism occurs in molecules that have restricted rotation about C=C bonds. Geometric isomers can be either E-isomers or Z-isomers: E-isomers have the highest-priority groups on opposite sides of the double bond, while Z-isomers have them on the same side. What is optical isomerism? Optical isomerism is a type of stereoisomerism. It occurs when molecules have the same order of atoms but are non-superimposable mirror images of each other - that is, they show chirality. What is isomerism in chemistry? Sometimes in chemistry molecules look different from each other but have the same molecular formula. We call this phenomenon isomerism. Isomers are molecules with the same molecular formula but different structures, which gives them different physical or chemical properties. What is linkage isomerism? Linkage isomerism is a type of structural isomerism found in coordination complexes. Linkage isomers contain an ambidentate ligand - one that can bond to the central metal ion through two different donor atoms, but only through one of them at a time. For example, the NO2- ion is an ambidentate ligand: it can attach to the central ion through either the nitrogen atom or an oxygen atom. Which complexes show optical isomerism? Optical isomers are non-superimposable mirror images of each other, so complexes whose mirror image is not superimposable are optical isomers.
We can also identify complexes that show optical isomerism by looking for a plane of symmetry: optical isomers do not have one.
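The 'unique IUPAC name' trick for spotting false isomers has a convenient computational analogue: a canonical SMILES string. A minimal sketch, again assuming the RDKit library (not mentioned in this article), shows that two drawings of the same molecule canonicalize to the same string, while true isomers do not:

# A minimal sketch using RDKit (an assumed dependency, not part of this
# article): true isomers get distinct canonical SMILES; a 'false' isomer
# (the same molecule drawn differently) canonicalizes to the same string.
from rdkit import Chem

candidates = {
    "butane":                "CCCC",
    "butane, drawn rotated": "C(CC)C",   # same molecule, different drawing
    "2-methylpropane":       "CC(C)C",
}

for name, smiles in candidates.items():
    canonical = Chem.MolToSmiles(Chem.MolFromSmiles(smiles))
    print(f"{name:24s} -> {canonical}")

# The two butane drawings print identical canonical strings, while
# 2-methylpropane prints a different one - only it is a true isomer.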
Lift is the aerodynamic force generated by airfoils - such as propellers, rotor blades and wings - that acts at a 90-degree angle to the oncoming air. With respect to rotor blades, such as those found on a helicopter, when the leading edge of the blade strikes the oncoming wind, the shape of the airfoil generates an area of high pressure directly below the blade and an area of low pressure above it, resulting in lift. To determine the amount of lift generated by a rotor blade, we use the lift equation L = ½ρv²ACL. Understand each element of the lift equation. L signifies lift force, measured in newtons; ρ signifies air density, measured in kilograms per cubic meter; v² signifies true airspeed squared - the square of the speed of the blade relative to the oncoming air, expressed in meters per second. A signifies the blade's planform area, expressed in square meters. CL signifies the dimensionless lift coefficient at a specific angle of attack - the angle between the chord line of the rotor blade (an imaginary line drawn through the middle of an airfoil from the leading edge to the trailing edge) and the oncoming air. CL is dimensionless: no units are attached to it, and it is simply displayed as a number. Identify the values for each element of the lift equation. In the example of a small helicopter with two blades, the blade travels at 70 meters per second (v). The coefficient of lift for the blades is 0.4 (CL). The planform area is 50 square meters (A). Assume the international standard atmosphere, in which the density of air at sea level and 15 degrees Celsius is 1.225 kilograms per cubic meter (ρ). Plug the values you have determined into the lift equation and solve for L. In the helicopter example, L = ½ × 1.225 × 70² × 50 × 0.4 = 60,025 newtons. The value for CL is typically determined experimentally and cannot be calculated unless you first know the value of L; rearranging the lift equation gives CL = 2L / (ρv²A).
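As a quick check on the arithmetic, here is a minimal Python sketch of the lift equation (the function name and variable names are my own illustration, not from the original article):

def lift_force(rho, v, area, cl):
    """Lift equation: L = 0.5 * rho * v**2 * A * CL (newtons)."""
    return 0.5 * rho * v**2 * area * cl

rho = 1.225   # ISA sea-level air density, kg/m^3
v = 70.0      # true airspeed, m/s
area = 50.0   # blade planform area, m^2
cl = 0.4      # lift coefficient (dimensionless)

L = lift_force(rho, v, area, cl)
print(f"Lift: {L:.0f} N")                               # 60025 N
print(f"CL check: {2 * L / (rho * v**2 * area):.2f}")   # recovers 0.4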
Platelets are involved in the coagulation process. They are small, nucleus-free blood cells, ellipsoid in shape and 1-4 microns in size, formed in the tissues of the bone marrow. A platelet is enclosed by a membrane that acts as a protective layer; under this layer is a lipid layer, and inside are mitochondria. These cells are constantly renewed, with a life cycle of approximately 5-12 days; old platelets are destroyed in the spleen. The main function of platelets is normal blood clotting. However, these nucleus-free cells perform other equally important functions: - Providing the cells of the blood vessels with nutrients. - Protecting vessel walls from damage. - Preventing blood loss. - Blocking the penetration of pathogenic microorganisms. When a vessel wall is damaged, platelets form a plug that helps block the flow of blood out of the vessel. Because platelets have cytoplasmic processes, they can use them to engulf pathogens and prevent them from doing harm. These blood elements not only take part in the protection of the body, but can also cause the formation of blood clots when they are present in excess. Diagnosis and normal values [Figure: blood for analysis is taken from a finger.] The main way to determine the level of platelets and other blood elements is a general clinical blood test. If a relative has a history of thrombocytopenia, the doctor may suspect a hereditary form of the disease in a child. The normal number of platelets in children varies with age. In newborns, the platelet count is normally 100-420 ×10⁹/L. For babies up to one year, the normal range is 150-350 ×10⁹/L. As children grow older, the normal range shifts: - From 1 to 5 years: 180-380 ×10⁹/L. - From 5 to 10 years: 180-450 ×10⁹/L. - From 10 to 16 years: 150-450 ×10⁹/L. A deviation of up to 10% from the norm is acceptable. If the difference is greater than this in either direction, it may indicate pathology or abnormalities in the functioning of the internal organs. When such a deviation is found, additional methods of investigation are assigned: genetic tests, testing for the presence of antibodies, ultrasound diagnostics, X-rays, endoscopy.
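The age-banded reference ranges above lend themselves to a simple programmatic check. Here is a minimal Python sketch (the function, the age cut-off for "newborn", and the table layout are my own illustration, using the ranges and the 10% tolerance quoted above):

# Hypothetical helper illustrating the age-banded platelet norms quoted
# above (units: x10^9/L) and the article's 10% tolerance on the limits.
AGE_BANDS = [  # (max age in years, lower limit, upper limit)
    (0.1, 100, 420),   # newborns (cut-off assumed for this sketch)
    (1,   150, 350),   # up to one year
    (5,   180, 380),
    (10,  180, 450),
    (16,  150, 450),
]

def check_platelets(age_years: float, count: float) -> str:
    """Classify a platelet count against the age-appropriate range."""
    for max_age, low, high in AGE_BANDS:
        if age_years <= max_age:
            break
    else:
        raise ValueError("table covers ages up to 16 years")
    if low * 0.9 <= count <= high * 1.1:   # allow the quoted 10% deviation
        return "within normal limits"
    return "low (possible thrombocytopenia)" if count < low else \
           "high (possible thrombocytosis)"

print(check_platelets(3, 200))   # within normal limits
print(check_platelets(8, 120))   # low (possible thrombocytopenia)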
Everything you need to know about thrombocytopenia [Figure: the lower the level of platelets in the blood, the more pronounced the symptoms.] Thrombocytopenia is characterized by a decreased platelet level. In childhood, a decrease in the number of platelets may be due to various diseases, such as acute infections. Destruction of platelets can also occur against the background of an immunopathological process, vasculopathies, thrombosis after injuries, intoxication, an allergic reaction, a lack of certain trace elements, and so on. Thrombocytopenia may occur while taking certain medications: Glipizide, Phenobarbital, Levomycetin, Meprobamate and others. A decrease in the concentration of platelets can also be observed with severe stress, poisoning, and large blood loss. Signs of thrombocytopenia in children differ from those in adults. The condition manifests itself with the following symptoms: - A pinpoint rash on the skin. - Bleeding gums. - Bruises and marks appearing after a slight knock or light pressure. - Prolonged bleeding after minor damage to the skin. - Frequent nosebleeds lasting 15-20 minutes. A dangerous symptom of thrombocytopenia is the appearance of bruises on the face, which indicates a very low platelet count. Such a state can lead to hemorrhage in the internal organs or the brain. A hemorrhage in the retina can cause loss of sight; if capillaries in the eyes burst frequently, an ophthalmologist should be consulted immediately. With a low level of these cellular elements, gastrointestinal hemorrhages may also appear, caused by solid food damaging the gastric mucosa. In such cases, blood may be found in a child's stool or vomit. The danger of this disease is that the listed symptoms appear without pain or other more serious signs, and so may not worry parents - yet the condition can lead to internal bleeding, which is life-threatening. Why does thrombocytosis arise and how is it dangerous? Thrombocytosis may be primary, secondary or clonal; it causes blood clotting disorders and a tendency toward thrombosis. Primary thrombocytosis is a rare disease of the hematopoietic system characterized by an increase in the number of platelets in the blood; in children it can occur against the background of tumor processes in the hematopoietic system. Secondary thrombocytosis may occur against the background of an existing chronic pathology. The main causes and factors that raise the level of platelets in the blood: - Infectious diseases. - Congenital or acquired blood disorders. - Prolonged bleeding. - Taking certain drugs. - The state after surgery (for example, after removal of the spleen). By external signs alone, a high platelet count is difficult to detect. If a child complains of dizziness, weakness or itching, and there is swelling and pain at the affected sites, blood should be donated and the level of blood cells checked. The risk of thrombocytosis is that an excessive number of platelets accelerates the process of blood clotting: the plates stick to each other, which leads to the formation of a thrombus, and in time there is a high probability of occlusion of the vessels of the heart or brain. How to normalize the platelet count? Therapy depends on the cause of the deviation; an appropriate treatment regimen is selected according to the results of the analysis. To raise the level of platelets, immunoglobulins or corticosteroids may be prescribed; in advanced cases, platelet transfusions are performed. It is important to identify the cause of an increase or decrease in the platelet level: once the root cause is eliminated, the indicator can be brought back to normal. Mild thrombocytopenia does not require treatment. Immune or autoimmune thrombocytopenia is treated with glucocorticosteroids and immunosuppressive drugs; thrombocytopenia accompanying a viral infection is treated with antibiotics and immunostimulating drugs. Therapeutic treatment of thrombocytosis in children involves the use of nonsteroidal antiplatelet agents; if thrombosis is pronounced, anticoagulants are used. It should be noted that treatment of thrombocytopenia and thrombocytosis lasts at least six months. In addition to the necessary drugs, the doctor will prescribe a course of multivitamins and drugs to strengthen the body. Preventive measures prescribed by the doctor to keep thrombocytopenia in remission: - The child should not receive routine vaccinations.
If vaccination is necessary, it is carried out in a hospital. - The child's diet should be normalized; following the rules of good nutrition contributes to the proper development of blood cells and their sufficient numbers in the body. - It is important to monitor the child's nutrition strictly and to ensure an adequate drinking regimen. - During epidemics of infectious diseases, the child's contact with other people should be limited to avoid infection. - A blood test should be taken at least once a year. - It is also recommended to increase the consumption of fruits and vegetables, to prevent dehydration, and to keep an eye on the body's iodine intake. The child should drink plenty of fruit drinks and compotes made from blueberries, cranberries and lingonberries. - In case of any indisposition or deterioration in the child's well-being, contact your pediatrician. By following these rules, you can help prevent changes in the level of platelets in the blood. Why do we need platelets? Another name for platelets is blood plates (or Bizzozero's plaques). These cells received this name because of their structure: a platelet is a small plate, no more than 1.5-4 micrometers across, that separates from a megakaryocyte (a large cell of the bone marrow). The plate's shape is maintained by microscopic tubules running along the entire perimeter of the cell. On average, 60 to 75% of platelets circulate in the blood, and the remaining part (about 25-40%) accumulates in the spleen. The number of platelets in the blood plasma can fluctuate both upward and downward throughout the day, but these fluctuations should not exceed 10%; if the difference during the day is greater, a pathology or an abnormality in the work of the internal organs can be suspected. The main function that platelets perform in the human body is the formation of platelet plugs, which are necessary to prevent blood loss, and participation in blood coagulation. The role of platelets in the functioning of the child's body Several functions can be distinguished for which a small body needs a certain number of platelets: - repairing damaged blood vessel cells, - participating in cell division, - transporting immune complexes and attaching them to the cell membrane, - maintaining the contractility of the vascular walls by means of serotonin, - releasing the proteins necessary for blood coagulation, - forming the clots (by adhesion) needed to stop bleeding and reduce blood loss. The average number of platelets considered normal for children depends on many factors - health status, dietary habits, age and so on - and age is the main criterion in judging whether a count is normal. Platelet norms by age for healthy children (×10⁹/L): newborns, 100-420; up to 1 year, 150-350; 1-5 years, 180-380; 5-10 years, 180-450; 10-16 years, 150-450. How to determine the level of platelets in the blood? To determine the volume of blood cells, a complete blood count is used; it is done even for newborns 1-3 days after birth. Healthy children are advised to donate blood at least once a year for the early detection of pathologies. With a hereditary predisposition to thrombocytopenia or thrombocytosis, the pediatrician may prescribe a complete blood count more often - for example, 2-4 times a year. Pathology of the bone marrow, as well as certain other diseases, requires still more frequent monitoring.
Do I need special preparation before taking the test? To make the analysis as accurate as possible, the child should be properly prepared for blood donation. To do this, follow these rules: - the fasting period must be at least 8 hours (for children under 7 years old) or 12 hours (for children over 7 years old), - the material should be collected in the morning (60-120 minutes after waking), - drinking and eating before the analysis are not allowed (the child may be given some clean drinking water), - 1-2 days before donating blood, the child's physical activity should be limited (if the child goes to school, the physical education class should be skipped), - in no case should the fingers be rubbed before entering the nurse's office, as this may affect the results of the analysis (for example, the level of leukocytes). How is it determined? If a laboratory blood test reveals a significant decrease in the number of platelets compared with the age norms, the child is diagnosed with thrombocytopenia. Such a condition is dangerous for a small organism: with damage to the epithelium or internal bleeding, the blood cannot coagulate properly, which leads to significant blood loss and a sharp deterioration in health (in some cases, even death). The only way to stop the bleeding may be a transfusion of donor blood with a high content of platelets. To prevent serious consequences, it is important to know the symptoms and signs of thrombocytopenia in children. These include: - frequent nosebleeds, - bleeding of the gums unrelated to tooth brushing, - bruises and small hematomas on the body, - a small rash, occurring mainly on the legs, ankles, hips and buttocks (seen in 100% of children with this diagnosis), - streaks of blood in the feces and urine. A decrease in platelets is not always caused by internal pathology; there are also external causes that can produce signs of thrombocytopenia in children. Reasons for a reduced platelet count in childhood: - medication (allergy to active ingredients), - a poor, unbalanced diet low in iron, - general intoxication of the body (for example, poisoning with toxins or heavy metals), - the presence of specific anti-platelet antibodies in the blood, - bacterial and viral infections, - Fanconi syndrome (a hereditary form of thrombocytopenia), - disorders of the thyroid gland, - insufficient intake of folic acid with food. Decreased platelets in newborns and infants In children in the first year of life, thrombocytopenia occurs mainly in an acute form: the total number of platelets may remain at the same level while the platelet mass in the plasma is sharply reduced. There are several causes that can lower the number of these blood cells. - Thrombocytopenia in the mother during pregnancy. The risk of the pathology appearing in the neonatal period increases several times over if the mother had problems with hemoglobin during pregnancy. This condition is especially dangerous for children in the first 7-10 days after birth, as it can cause sudden bleeding and death of the infant. - Prematurity or critically low birth weight. Nearly 60% of premature babies suffer from thrombocytopenia. Treatment of children in the first year of life (and especially newborns) is carried out strictly in the hospital; often the mother is discharged from the maternity hospital alone, while the baby stays for treatment or is transferred to a specialized children's hospital.
- Development of maternal antibodies against the child's platelets. This is quite rare (about 5-7% of cases). As a result of the pathology, the blood cells break down at an increased rate and their numbers fall. - Viral infections (measles, rubella). How to increase the number of platelets in a child? The main emphasis in the treatment and prevention of thrombocytopenia is placed on normalizing the child's diet. To raise the platelet level, the following foods (rich in iron and folic acid) should be included in the child's diet: - nuts (cashews, walnuts, Brazil nuts), - ripe banana flesh (without signs of spoilage), - cabbage and carrots, - pomegranates and pomegranate juice, - dried rowan berries and compotes made from them, - beef and veal, - buckwheat groats (preferably green buckwheat, which has almost three times more vitamins and minerals than standard grains), - rose hips, - beet juice (beets can be added raw or boiled), - leaves of young nettle (gathered from clean soil), - all varieties of fish, - beef liver, - parsley, cilantro, dill. Thrombocytosis in children Thrombocytosis is an increase in the number of platelets in the blood to values several times higher than normal. Most often it is a consequence of a viral or bacterial infection; in severe cases, an increase in these blood cells may indicate the development of cancer. How to recognize it: symptoms The symptoms of thrombocytosis are similar to the clinical manifestations of thrombocytopenia, although there are some differences. If a child has at least one of the listed symptoms, you should immediately consult a doctor and take a blood test. Symptoms of thrombocytosis in childhood: - frequent and severe nosebleeds, - bluish skin, - the appearance of subcutaneous hemorrhages, - itching and burning in certain areas of the skin, - a tingling sensation on the skin, - cold hands and feet at a comfortable ambient temperature, - an increased heart rate. The child may also complain of frequent headaches; in some cases, pressure surges and the formation of blood clots are possible. In order to determine the cause of the platelet elevation correctly, it is necessary to establish exactly which type of thrombocytosis is developing in the child; the treatment regimen and the management of the small patient depend on this. The role of platelets in the children's body At birth, a child receives its own unique blood composition, which in many ways resembles the mother's. The platelet level can be determined by an ordinary blood test, taken from umbilical cord blood or from a finger. It should be understood that as the body's systems and functions gradually develop, the quantitative and qualitative composition of the blood may change. This is connected with the development of the adrenal glands, whose hormones help regulate the synthesis of platelets by megakaryocytes. Therefore, throughout the period from birth to maturity, quite different values can be considered normal. Consider how platelet levels in children depend on the stages of a child's development: - At birth, only the most primitive reflexes are present, and the body's functions are only beginning to be established. Against the background of active growth and development, the spread of normal values is at its widest.
- In the period from one to three years, when a child is actively learning about the world, learning to walk and talk and becoming aware of himself as a person, the body is governed by hormones that call for intensive platelet production. This natural protective function helps the child avoid heavy bleeding during falls (it is impossible to learn to walk without them) and protects against fragile blood vessels, which otherwise show up as hematomas after minor injuries. - As the child gets older, the normal values approach those of adults; however, during the period of hormonal adjustment (12-15 years), deviations of 15-20% in either direction are allowed. It is especially important to control the level of platelets in the first year of life, since it is during this period that the mechanisms of health and immunity are laid down. A blood test, recommended at least once a quarter, helps in this process. Values of the norm in children The normal values for each age are those shown in the table above; indicators are given in thousands per microliter of blood. The wider tolerances and broader normal ranges in childhood are a deliberate allowance, based on the results of a number of studies. This is especially true of the period of hormonal adjustment, when even in a healthy organism a hormonal imbalance can change the number of platelets produced. Platelet counts in infants show the widest spread, reflecting the immaturity of the bone marrow and the hormonal system. Deviations are results of the analysis that go beyond the generally established limits. If they are minor, a repeat analysis is usually offered; but if the platelet imbalance is accompanied by deviations in other blood fractions (for example, elevated leukocytes), health problems are likely. In most cases, an increase in the number of platelet cells is due to the presence of an inflammatory process in the body, because their synthesis is linked to the production of white blood cells. Platelets are, in a sense, assistants to the leukocytes: they help strengthen cell membranes and prevent pathogenic bacteria from entering. This condition can be observed in acute respiratory and acute respiratory viral infections, but the causes may be more serious. Many diseases that develop against a background of elevated platelets give no sign of themselves for quite a long time and remain asymptomatic; the problem is often discovered by chance during a routine blood test. Causes and symptoms An elevated platelet count is called thrombocytosis, and its causes may include: - oncological diseases of the blood-forming organs, - chronic diseases that provoke inflammation of an organ and the surrounding tissues: hepatitis, tuberculosis, cirrhosis of the liver, meningitis, - infection of the blood by meningococci, streptococci or staphylococci: osteomyelitis, gout, arthrosis, rheumatism, - iron deficiency, - long-term use of antibacterial, antiviral and antifungal medications that affect the immune system.
The early stages of thrombocytosis have no external manifestations, but as the disease progresses the following symptoms may appear: - blood in the feces and urine, - itching without a visible rash, - cold limbs even in the summer months, - frequent headaches, dizziness and unexplained loss of consciousness, - symptoms of vegetative-vascular dystonia, - tachycardia with progressive arrhythmia, - blood clots, showing externally as a bluish area of skin and a clear outline of the vessels visible through the skin. Features of treatment and prevention As auxiliary agents, blood thinners are used (Aspirin, Asparkam, Aspirin-Cardio, Cardiomagnyl). It must be remembered that thinning the blood artificially with synthetic substances can cause many adverse reactions, especially in children. In cases where the platelet level exceeds the maximum permissible values by 2-3 times, emergency therapy is carried out using a procedure that filters the child's own blood: the blood is passed through a special apparatus that holds back platelets while leaving the rest of the blood composition unchanged. The procedure is performed under sterile conditions and can save lives in critical situations. As a preventive measure, and as an auxiliary method of treatment, diet is used. A diet high in carbohydrates and in fats of animal origin (with the exception of dairy products) is not recommended for these children; the diet should consist of cereals, vegetables (fresh and steamed), and kissels made from fresh fruits and berries. Drinking plenty of water also helps make the blood less concentrated: the child is offered boiled water at a rate of 30 ml per kilogram of body weight, drunk in small sips but often (every 15-20 minutes). Treatment and prevention of low platelets Treatment in this case is aimed at stimulating the production of platelets. If no therapeutic effect is seen from drug treatment, more radical measures are required. The most productive treatments are: - transfusion of donor platelets, whose presence can stimulate the hormonal system and the megakaryocytes to synthesize the body's own cells, - bone marrow transplantation, an expensive but highly effective procedure when the problem of low platelets is directly related to the bone marrow cells. For prevention, children who are predisposed to the development of thrombocytopenia are advised to adhere to a special diet that makes the blood richer: - red meat, - fish, - offal: liver, kidneys, heart. Thus, in childhood an imbalance of platelet cells may be observed against the background of hormonal changes, and it can be life-threatening; it is often impossible to detect the problem visually. Routine examination by a pediatrician, along with a blood test at least once a quarter, reduces the risk of serious pathologies and health problems; if alarming symptoms appear, the frequency of testing can be increased. What are platelets needed for? These blood cells play a large role in the blood coagulation system: thanks to them, when blood vessels are damaged, the bleeding stops and the site of damage is closed by a blood clot. In addition, on their surface platelets carry various biologically active compounds, immune complexes, and clotting factors.
Blood plates are formed in the red bone marrow; after entering the bloodstream they live from two to ten days, after which they are carried to the spleen and destroyed. New cells continuously enter the bloodstream at the same time, so the platelet population is constantly renewed while the total number remains at about the same level.
How is platelet count determined?
The number of platelets is evaluated by a clinical blood test, which doctors consider the most important study in childhood. Blood can be taken from a child's finger or from a vein; in the smallest children it can be taken from the heel. Platelets are counted per litre of blood and recorded on the analysis form as x10^9/L.
A blood test may be prescribed as a routine check, even if the child has no complaints. An unscheduled referral is given to children who have bleeding gums, episodes of nosebleeds, bleeding that will not stop after a cut, frequent bruising, or complaints of fatigue, pain in the limbs and other ailments. Platelets are also checked in cases of anemia, spleen enlargement, leukemia, viral infections, systemic diseases and other conditions that can alter the number of these blood cells.
What affects their number?
The number of platelets depends on:
- The age of the child. Newborns have more of them than babies over a month old and older children.
- The presence of diseases, as well as medication.
- Physical activity. For some time afterwards, the number of blood platelets is higher.
- The child's diet. Some products thin the blood, while others stimulate the formation of blood cells.
- The time of day and the season. Fluctuations of about 10% in the platelet count are observed over the course of a day.
For the result to be reliable, so that the number on the analysis form corresponds to the actual number of cells in the blood, it is important to follow these recommendations:
- The child should not eat before the blood is taken. If the sample is taken from a baby, the interval between feeding and the procedure should be 2 hours.
- Before the analysis the child should be free of emotional and physical stress. It is also important to dress the child appropriately to avoid hypothermia, and blood should not be donated immediately after the child has come into the clinic from the street; let him rest for 10-15 minutes in the corridor and calm down.
- If the child is already undergoing any treatment, be sure to tell the doctor which medications are being taken, as they may affect the overall picture.
What are platelets?
The life of a platelet is short (up to 10 days) but very responsible. It is thanks to these cells that a person does not bleed out from wounds, that wounds do not remain open forever but heal, and that the body can mount a firm defence against various viruses and bacteria. Platelets are formed in the bone marrow and destroyed in the liver and spleen. The process is continuous: some cells are just being born while others are dying, so the body contains platelets young and old, mature and already worn out. Their quantity in the blood is therefore not constant, varying within about 10% of the norm. One might ask whether, for such important cells, it even makes sense to speak of a norm.
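The x10^9/L notation and the "thousands per microlitre" figures used elsewhere in this article are numerically identical, as a one-line check shows (the helper name is illustrative):

def per_microliter(count_e9_per_l):
    # 1 litre = 10^6 microlitres, so N x 10^9/L equals N x 10^3 per microlitre.
    return count_e9_per_l * 1000

print(per_microliter(150))  # 150 x10^9/L -> 150000 platelets per microlitre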
It does: an excess of platelets is just as dangerous to health as a deficiency. The body must maintain a balance between newly formed cells and those that have served their time, and pediatricians routinely check this balance in their young patients.
What is the normal amount of platelets in children?
The number of platelets depends on many factors, above all on the child's age and physical condition at the time of the analysis. Since the indicator "floats", its normative boundaries are defined quite widely.
Platelet norm in children of different ages:
To determine the number of platelets in a child's blood, a complete blood count is needed. It is recommended at least once a year, so that abnormalities that may point to health problems are detected in time.
If the platelet count is below normal
Thrombocytopenia is the condition in which the level of platelets in the blood is reduced. In children it can be provoked by:
- anemia caused by a lack of vitamins,
- thyroid abnormalities,
- infectious diseases (for example, measles and rubella),
- taking medication, in particular antibiotics,
- prolonged bleeding.
Thrombocytopenia is recorded in every second premature baby. It can also affect babies whose birth was accompanied by asphyxia, children with insufficient weight and those with weak immunity.
The following signs should alert doctors and parents:
- wounds take a very long time to heal, and it is difficult to stop the blood from cuts and abrasions,
- there are many bruises on the child's body for no apparent reason,
- blood often flows from the nose,
- the gums bleed.
These signs suggest very low blood clotting caused by a lack of platelets. If the cause of the thrombocytopenia is a serious disease in which the platelet level is reduced significantly, the child needs additional diagnostics and subsequent treatment in hospital. If the deficiency of blood cells is not critical, it can be corrected at home.
If the platelet count is above normal
If the number of platelets deviates upward from the norm, this is thrombocytosis, a state in which the blood becomes very thick and viscous. Its causes fall into two groups, primary and secondary. Thrombocytosis is called primary when the excess of platelets is caused by pathological processes in the bone marrow, where platelets are produced: tumor cell degeneration leads to their excessive formation. Secondary thrombocytosis arises as a consequence of various diseases. Most often it is caused by:
- diseases of the liver and kidneys,
- oncological diseases,
- viral infections,
- tuberculosis or pneumonia,
- iron deficiency in the body,
- damage to internal organs or surgery,
- taking some painkillers, antifungal and anti-inflammatory drugs,
- excessive exercise.
A child's body is constantly developing, which means it constantly experiences physiological stress. In such circumstances the threat of thrombocytosis is high, and it is a real threat, because the condition is dangerous for its potential to block blood vessels with clots.
How to normalize the level of platelets in the blood
If an excess or deficiency of platelets is not caused by critical conditions requiring urgent medical intervention, the balance of these blood cells can be restored at home. The most effective method is the right diet.
With a lack of platelets, foods rich in vitamins B, K and A, iron, folic acid and taurine are helpful. The daily children's menu should include:
- red meat and fish cooked in any way,
- beef liver,
- hard cheese,
- eggs, especially the yolks,
- vegetables and greens, especially white cabbage, spinach, dill and asparagus.
Black chokeberry and pomegranate juice and decoctions of rosehip and nettle are extremely useful in thrombocytopenia. Products that thin the blood are contraindicated; this list includes olive oil and some berries, in particular blueberries and raspberries. With thrombocytopenia it is also recommended to give up aspirin.
With thrombocytosis, that is, an excess of platelets, attention should be paid to foods rich in antioxidants, iodine and vitamin C, which on the contrary promote thinning of the blood and prevent clots. The dinner table of a child with a high platelet count should feature:
- olive or flaxseed oil,
- onion and garlic.
Useful drinks are sea buckthorn and orange juices, green tea and cocoa. In general, when there is an excess of platelets it is recommended to drink as much liquid as possible; this helps reduce the density of the blood. Foods to exclude are:
- among fruits and berries: chokeberry, mango, pomegranate, bananas,
- among everyday foods: carbohydrate-rich and fatty dishes, as well as animal fats.
Carbonated beverages should not be drunk.
Remember that proper nutrition is not a panacea if the cause of the platelet imbalance in the child's blood is some serious illness. Do not neglect the advice and recommendations of the doctor. As for preventive measures, fresh air and physical exercise are what a child should befriend for the best chance of healthy blood.
General blood count indicators
The norm of platelets in children's blood is given by age, in thousands of units:
- newborns: 100-420,
- from 10 days of life up to a year: 150-390,
- after a year: 180-380,
- in adolescent girls during the first days of menstrual bleeding, the normal plt is 75-220.
After 16 years of age, normal platelet counts become lower and settle at the adult level of 180-360 thousand, and the interpretation of the result is based on this value.
A blood test is taken on an empty stomach from a finger, or from the heel in newborns. The results table includes red blood cells, ESR, hemoglobin and other indicators; they are reviewed as a whole when deciding why deviations occur. Emotional and physical exertion, including hypothermia, can change the number of platelets; for verification, the analysis is repeated after five days. Usually the interpretation of the analysis is ready the next day, or within a few hours in a hospital.
Most often, an increase in platelets in a child is insignificant, and doctors choose a wait-and-see approach. Coagulation abnormalities are often detected in childhood blood tests, signalled by frequent bleeding from the nose and gums and by spontaneous bruises on the body. With complaints of weakness, dizziness, or numbness of the hands and feet, the doctor also examines the platelet content in the child's blood.
Indications for blood tests are serious diseases:
- lupus erythematosus and other autoimmune processes,
- iron-deficiency anemia,
- blood cancer,
- viral infections,
- an enlarged spleen, in which the processed blood cells accumulate.
Since the blood cells are constantly renewed, elevated blood platelets in a child are observed in cases of:
- overproduction of blood plates in the bone marrow,
- problems with cell utilization in the spleen,
- disturbance of circulation during physical and emotional stress, which is typical for young children.
The platelet norm in children can be violated regardless of sex and age, but the diagnosis of thrombocytosis is established when the count reaches 800 thousand per microlitre or more. This condition requires mandatory testing.
Causes of thrombocytosis
Several types of thrombocytosis are distinguished by the mechanism of disturbed blood formation:
- clonal thrombocytosis is associated with the production of defective cells due to bone marrow tumors, which can greatly increase platelet counts,
- primary thrombocytosis is caused by the growth of areas of red bone marrow and excessive production of cells, provoked by genetic diseases or occurring in myeloid leukemia and erythremia; the size and shape of the cells change at the same time.
The reasons for an increase in platelets can be grouped as follows:
- Reactive thrombocytosis develops against the background of a previous disease (pneumonia, infections of the upper respiratory or urinary tract, iron deficiency anemia, surgery, bleeding or burns). Thrombotic and hemorrhagic symptoms are absent. Secondary increases in platelets also occur in children against the background of infection, iron deficiency, taking certain medications, chronic inflammation or tissue damage, cancer, or removal of the spleen. In roughly 80% of cases this condition passes gently for the child, moderate signs develop in 7% of cases, and treatment is required in 3%.
- Essential, or primary, thrombocytosis manifests itself in multiple hematomas on the body and a tendency to headaches. The disorder is usually familial and associated with a gene mutation. The disease is rare, affecting about one in a million. The platelet count rises above 600,000 per microlitre and the spleen is enlarged.
A prolonged and unexplained increase in the number of cells, or the appearance of deformed or abnormal elements, requires investigation, since these can affect blood clotting. The most common symptoms are headache, dizziness, changes in vision, and numbness or burning pain in the arms and legs.
The level of platelets rises against the background of several conditions:
- After removal of the spleen, the destruction of old cells slows while the formation of new ones leads to their accumulation. The body produces antiplatelet antibodies in an attempt to reduce platelet production.
- Inflammation in the body enhances production of the hormone thrombopoietin, which stimulates the creation of blood cells to suppress the inflammatory process. Interleukins are produced continuously, and platelets increase in response; the interpretation of the blood test then reports inflammation.
- Malignant tumors produce substances that stimulate the megakaryocytes in the bone marrow to produce cells, which is typical of lung sarcoma, hypernephroma of the kidney, and lymphogranulomatosis.
If the body suffers frequent blood loss, as with intestinal ulcers, an increased level of platelets is also recorded. Sometimes a change in blood composition is a sign of a lack of folic acid.
An increased number of platelets in the blood can also be caused by tuberculosis, anemia, rheumatism, osteomyelitis, bone fractures and amyloidosis. Some drugs have side effects associated with thrombocytosis: epinephrine (adrenaline), vincristine, and corticosteroids.
The factors behind elevated platelets can be divided into several groups:
- Benign causes, accounting for 13% of cases of thrombocytosis in childhood. The increase is associated with medication, blood loss or surgery; infections also affect blood composition.
- Severe illnesses. Diseases of the connective tissue, kidney disease, liver tumors, some types of anemia, polycythemia, inflammatory bowel disease and leukemia all affect the indicators. The platelet norm can also be exceeded against the background of a mental disorder that stimulates the bone marrow to produce these cells.
- Colds and infections. Inflammation or the acute phase of an infectious disease always raises the platelet level; the pathogen can be a bacterium, a virus, a fungus, or even a parasite.
- Disturbed blood formation. Iron deficiency is accompanied by thrombocytosis, with a general condition of weakness, lack of appetite and irritability. Idiopathic thrombocytopenic purpura, by contrast, usually resolves spontaneously in 80% of cases within half a year.
- Lifestyle. Dehydration can occur when carbonated drinks are consumed instead of water; this state artificially increases the concentration of blood cells.
Platelets in newborns
A low platelet count is one of the most common hematological problems of the neonatal period. Its distinctive features are lesions of the skin and mucous membranes, and in newborns petechiae, purpura and the risk of intracranial hemorrhage.
The condition in which the analysis shows a low platelet count in the baby in the first days of life is called neonatal thrombocytopenia. The pathology is defined by a decrease in the number of platelets below 150x10^9 per litre of blood. Reduced platelets are detected particularly often in intensive care units, in approximately 22% of cases, and in 1 to 5% of full-term births.
The complex process of cell production consists of four main stages:
- development of thrombopoietic factors,
- formation of the megakaryocyte from progenitor cells,
- differentiation and maturation of the cells through endomitosis and cytoplasmic changes,
- release of platelets into the circulating blood.
These stages are the same in adults and infants. However, studies have recognized significant biological differences between the thrombopoiesis of children and adults: the concentration of thrombopoietic factors in healthy full-term and premature infants is higher than in healthy adults, and although immediately after birth children have a reduced average platelet volume, their progenitor cells proliferate faster.
Congenital thrombocytopenia is a pathology associated with gene mutations that reduce the number of platelets. A below-normal average platelet volume is recorded in premature babies born to women who experienced severe toxicosis and preeclampsia during pregnancy, and the decrease also occurs with intrauterine heart disease. Most often, though, a child's platelets are lowered by active destruction resulting from an autoimmune disease or severe intoxication.
Platelets stop bleeding by sticking to the affected tissue, and they begin the healing process.
Pediatricians rarely prescribe further examination when platelets are elevated in infants after infectious diseases, colds and rotavirus. A count exceeding 1000x10^9/L is considered critical; any smaller increase is regarded as the body's response to infection and inflammation, and the absence of complaints about the general condition indicates the absence of a serious pathology.
The platelets of a child who has been taking antihistamines and antibiotics for a long time are slightly reduced. Coagulability is also influenced by viral infections, tuberculosis, anemia, cancer and drug allergies. Weakness and pallor point to a decrease in platelets, which in the long run threatens serious bleeding disorders; a child with such a diagnosis should be protected from falls, cuts and bruises.
The platelet level of children under one year is monitored after illness and during the vaccination period, since the effects of rubella and measles are associated with blood clotting disorders. With prematurity or a negative maternal Rhesus factor, low platelet levels in the baby's blood can be caused by maternal antibodies acting immediately after birth. This condition is corrected with drugs.
The role of platelets in the child's body
Platelets are blood plates that have no nuclei. They are formed in the bone marrow; a portion resides in the spleen, and the rest enter the circulatory system. On average these elements live 10 days. They play an important role in the child's body by performing the following functions:
- when blood vessels are damaged, platelets attach to the vessel walls and glue together with each other, preventing blood loss,
- after various injuries, they contribute to the restoration of damaged tissue and its collagen,
- platelets serve as a barrier to the entry of pathogenic bacteria into the blood.
The quantitative and qualitative composition of blood changes as the child's body develops, so the normal platelet count varies by age category. Children should be tested regularly from birth so that the number of platelets is always under control; for this purpose blood is taken from newborns from the umbilical cord or a finger. If a child has spontaneous bleeding, hematomas appearing on different parts of the skin, or blood present in the urine and feces, blood must be donated for analysis immediately. Such phenomena may indicate fragility of the blood vessels, which is the first sign of insufficient platelet production.
Table with norms for children of different ages
The number of platelets is determined from a general clinical blood test. It is one of that test's most important parameters, because its value is used to assess the work of the coagulation system and to judge the ability of the child's body to cope with heavy bleeding. Normal platelet levels in infants are shown in the table by month, from birth until the age of one year, since it is during this period that tracking these values is most important.
In the table the number of platelets is given in thousands per cubic millimetre (microlitre) of blood:
- for babies from 1 to 2 years old the indicator should lie in the interval 170-390,
- from 2 to 3 years: 170-360,
- in children 3-5 years old the count is normally 165-375x10^9/L,
- in children 5-7 years old: 160-380,
- for primary school children of 7-10 years old: 160-375,
- in children from 10 to 13 years old: 165-390,
- in a teenager of 13-15 years old the count should be in the range 160-400x10^9/L.
For girls at puberty special values are set: in the first days of menstruation the number of platelets can vary within 75-220. In adolescents 15-16 years of age the range is 160-360x10^9/L, and at 16-18 years, 160-400.
Increased platelet count
In some cases the count is significantly higher than normal values. This phenomenon is called thrombocytosis and can provoke serious consequences. The form of this pathology is called relative when the indicators exceed the upper threshold of the norm by no more than 200 thousand units, and critical when the limit is exceeded twofold.
Causes of elevation and symptoms
Babies around 6 months of age often show an elevated count, and this does not always indicate a pathological process. As a rule, the increase in infants is associated with a sharp rise in physical activity, since by six months babies are already beginning to roll over actively and trying to sit up. In many babies of this age the platelet count is above normal because the milk teeth are erupting.
The causes of thrombocytosis in children are either primary, associated with pathological processes in the bone marrow, or secondary. The latter include:
- insufficient drinking water,
- taking certain drugs, such as corticosteroids,
- non-compliance with the doctor's recommendations before blood donation,
- infectious diseases,
- iron deficiency,
- damage to internal organs or surgery,
- dysfunction of the spleen,
- inflammatory processes in the body,
- liver disease,
- broken bones,
- previous anemia.
Thrombocytosis is characterized by a series of manifestations, including:
- hematomas with leakage of blood into the tissue (purpura),
- single point hemorrhages on the skin (petechiae),
- bleeding in the digestive system, from the nose and mouth,
- bleeding gums,
- blood in feces, urine and vomit,
- an enlarged spleen,
- intracranial hemorrhage,
- hypersensitivity of the fingertips,
- itchy skin rash,
- pain in the kidneys,
- disturbed urination.
Ways to reduce the platelet count
A number of examinations are indicated to identify the causes of an elevated count. Even before the results are obtained, however, the child may be prescribed drugs of the cytostatic group, which lower the level of blood plates. If the examination reveals a significant increase in the number of platelets, platelet apheresis can be prescribed to remove the excess cells from the blood. Further treatment depends on the factors that provoked the increase and on the form of the disease. Alongside drug therapy, a mandatory correction of the child's nutrition is indicated: products such as bananas, pomegranates, mangoes, walnuts and rosehip should be excluded from the diet.
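The age brackets above can be expressed as a simple lookup; a minimal sketch using the values from the list (boundary ages are resolved by the first matching bracket):

# Age ranges in years and counts in thousands (x10^9/L), taken from the list above.
NORMS = [
    (1, 2, 170, 390),
    (2, 3, 170, 360),
    (3, 5, 165, 375),
    (5, 7, 160, 380),
    (7, 10, 160, 375),
    (10, 13, 165, 390),
    (13, 15, 160, 400),
    (15, 16, 160, 360),
    (16, 18, 160, 400),
]

def platelet_norm(age_years):
    # Return the (low, high) reference range for an age, or None if untabulated.
    for start, end, low, high in NORMS:
        if start <= age_years < end:
            return (low, high)
    return None

print(platelet_norm(4))  # (165, 375)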
Flaxseed and olive oil, beets, tomatoes, chocolate, fruits rich in vitamin C, green tea, fresh berries, and freshly squeezed orange or sea buckthorn juice significantly lower a child's platelet count. Any correction of the child's nutrition must be agreed with the doctor. If the count is elevated in an infant, pediatricians recommend feeding with mother's milk for as long as possible.
The diet of one-year-olds and older children should include dairy products, red meat, seafood, vegetables and fruits. You also need to ensure that the child consumes a sufficient amount of drinking water, especially during the summer season.
Low platelet count
A decrease in the number of platelets in the bloodstream is called thrombocytopenia. It may indicate the development of dangerous pathologies in the child's body, so it is extremely important to monitor the baby's condition, carry out timely blood tests, and contact a pediatrician immediately at the first signs of a decrease.
Causes of decline and symptoms
Among the causes of thrombocytopenia are a number of factors, including:
- an unhealthy diet,
- vitamin B deficiency,
- cancer of the bone marrow or blood,
- lupus erythematosus,
- heavy metal poisoning,
- Werlhof's disease (idiopathic thrombocytopenic purpura).
A lowered count is signalled by bleeding gums and poor blood clotting. Hematomas that appear on the child's body for no reason also indicate a decrease in the number of platelets in the blood.
Ways to raise the platelet count
In cases where a decrease is not associated with severe pathologies, the platelet count can be normalized with a special diet. The following products effectively increase this indicator:
- red meat and fish,
- chicken eggs, especially the yolks,
- liver, mostly beef,
- bananas and apples,
- white cabbage,
- dill and asparagus,
- pomegranate juice,
- decoctions of rosehip and black chokeberry.
With a low platelet count, chocolate, ginger, raspberries and blueberries must be excluded from the child's diet. Parents should remember that platelet counts can be adjusted through nutrition only when their increase or decrease is not associated with any pathological process in the child's body.
Interpretation together with other blood indicators
A complete blood count is a universal procedure for assessing the state of a child's health. Bear in mind, however, that its results should be interpreted by an experienced doctor. A qualified specialist will not only correctly assess how many platelets were found in the child's blood, but also interpret this figure against the results of the other blood components. Only on the basis of the overall clinical picture does the doctor settle on treatment methods and on recommendations for normalizing and maintaining the required number of platelets in the child's blood.
Sometimes you may want to express a fraction as a percentage, or vice versa; this page covers the former case. Fractions are commonly used in everyday life: if you are splitting a bill or scoring a test, you will often describe the problem using fractions. Luckily for us, converting a fraction to a percentage only requires a bit of multiplication and division. We recommend that you use a calculator, but solving these problems by hand or in your head is possible too!
Here's how we discovered that 1 / 280 = 0.36%:
A percentage is a number out of 100, so we need to make our denominator 100. If the original denominator is 280, we work out how to rescale it to 100: we divide 100 by 280, which gives us about 0.36. Now we multiply 0.36 by 1, our original numerator, which is again 0.36.
0.36 / 100 = 0.36%
Remember, a percentage is any number out of 100. If we can rebalance 1 / 280 onto a new denominator of 100, we can read the percentage straight off that fraction!
| Percentage | Fraction | Decimal |
| 0.36% | 1 / 280 | 0.0036 |
Note that you can also reverse the two steps and come to the same solution: if you multiply 1 by 100 and then divide the result by 280, you will still get 0.36, so 1 / 280 = 0.36%.
We encourage you to check out our introduction to percentage page for a little recap of what percentage is. You can also learn about fractions in our fractions section of the website.
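The whole procedure collapses into one line of arithmetic; a minimal sketch:

def fraction_to_percentage(numerator, denominator):
    # A percentage is the fraction scaled to a denominator of 100.
    return numerator / denominator * 100

print(round(fraction_to_percentage(1, 280), 2))  # 0.36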
Rail transport is a means of transferring passengers and goods on wheeled vehicles running on rails, also known as tracks. It is also commonly referred to as train transport. In contrast to road transport, where vehicles run on a prepared flat surface, rail vehicles (rolling stock) are directionally guided by the tracks on which they run. Tracks usually consist of steel rails installed on ties (sleepers) and ballast, on which the rolling stock, usually fitted with metal wheels, moves. Other variations are also possible, such as slab track, where the rails are fastened to a concrete foundation resting on a prepared subsurface.
Rolling stock in a rail transport system generally encounters lower frictional resistance than road vehicles, so passenger and freight cars (carriages and wagons) can be coupled into longer trains. The operation is carried out by a railway company, providing transport between train stations or freight customer facilities. Power is provided by locomotives which either draw electric power from a railway electrification system or produce their own power, usually by diesel engines. Most tracks are accompanied by a signalling system. Railways are a safe land transport system when compared to other forms of transport.[Nb 1] Railway transport is capable of high levels of passenger and cargo utilization and energy efficiency, but is often less flexible and more capital-intensive than road transport, when lower traffic levels are considered.
The oldest known man- or animal-hauled railways date back to the 6th century BC in Corinth, Greece. Rail transport then recommenced in the mid 16th century in Germany in the form of horse-powered funiculars and wagonways. Modern rail transport began with the British development of the steam locomotive in the early 19th century; the railway system in Great Britain is thus the oldest in the world. Built by George Stephenson and his son Robert's company, Robert Stephenson and Company, Locomotion No. 1 was the first steam locomotive to carry passengers on a public rail line, the Stockton and Darlington Railway, in 1825. George also built the first public inter-city railway line in the world to be worked exclusively by steam locomotives, the Liverpool and Manchester Railway, which opened in 1830.
With steam engines, one could construct mainline railways, which were a key component of the Industrial Revolution. Railways also reduced the costs of shipping and allowed for fewer lost goods compared with water transport, which faced the occasional sinking of ships. The change from canals to railways allowed for "national markets" in which prices varied very little from city to city. The invention and development of the railway in the United Kingdom was one of the most important technological inventions of the 19th century. The world's first underground railway, the Metropolitan Railway (part of the London Underground), opened in 1863.
In the 1880s, electrified trains were introduced, leading to the electrification of tramways and rapid transit systems. Starting during the 1940s, the non-electrified railways in most countries had their steam locomotives replaced by diesel-electric locomotives, with the process being almost complete by the 2000s. During the 1960s, electrified high-speed railway systems were introduced in Japan and later in some other countries.
Many countries are in the process of replacing diesel locomotives with electric locomotives, mainly due to environmental concerns, a notable example being Switzerland, which has completely electrified its network. Other forms of guided ground transport outside the traditional railway definitions, such as monorail or maglev, have been tried but have seen limited use. Following a decline after World War II due to competition from cars, rail transport has had a revival in recent decades due to road congestion and rising fuel prices, as well as governments investing in rail as a means of reducing CO2 emissions in the context of concerns about global warming.
The history of rail transport began in the 6th century BC in Ancient Greece. It can be divided into several discrete periods defined by the principal track material and motive power used. Evidence indicates that there was a 6 to 8.5 km long paved trackway, the Diolkos, which transported boats across the Isthmus of Corinth in Greece from around 600 BC. Wheeled vehicles pulled by men and animals ran in grooves in limestone, which provided the track element, preventing the wagons from leaving the intended route. The Diolkos was in use for over 650 years, until at least the 1st century AD. Paved trackways were also later built in Roman Egypt.
Wooden rails introduced
Railways reappeared only in the 14th century. In 1515, Cardinal Matthäus Lang wrote a description of the Reisszug, a funicular railway at the Hohensalzburg Castle in Austria. The line originally used wooden rails and a hemp haulage rope and was operated by human or animal power, through a treadwheel. The line still exists and is operational, although in updated form, and is possibly the oldest operational railway.
Wagonways (or tramways) using wooden rails, hauled by horses, started appearing in the 1550s to facilitate the transport of ore tubs to and from mines, and soon became very popular in Europe. Such an operation was illustrated in Germany in 1556 by Georgius Agricola in his work De re metallica. This line used "Hund" carts with unflanged wheels running on wooden planks and a vertical pin on the truck fitting into the gap between the planks to keep it going the right way. The miners called the wagons Hunde ("dogs") from the noise they made on the tracks. There are many references to their use in central Europe in the 16th century. Such a transport system was later used by German miners at Caldbeck, Cumbria, England, perhaps from the 1560s.
A wagonway was built at Prescot, near Liverpool, sometime around 1600, possibly as early as 1594. Owned by Philip Layton, the line carried coal from a pit near Prescot Hall to a terminus about half a mile away. A funicular railway was also made at Broseley in Shropshire some time before 1604. This carried coal for James Clifford from his mines down to the river Severn to be loaded onto barges and carried to riverside towns. The Wollaton Wagonway, completed in 1604 by Huntingdon Beaumont, has sometimes erroneously been cited as the earliest British railway. It ran from Strelley to Wollaton near Nottingham.
The Middleton Railway in Leeds, which was built in 1758, later became the world's oldest operational railway (other than funiculars), albeit now in an upgraded form. In 1764, the first railway in the Americas was built in Lewiston, New York.
Metal rails introduced
In the late 1760s, the Coalbrookdale Company began to fix plates of cast iron to the upper surface of the wooden rails. This allowed a variation of gauge to be used. At first only balloon loops could be used for turning, but later movable points came into use that allowed for switching. A system was introduced in which unflanged wheels ran on L-shaped metal plates; these became known as plateways. John Curr, a Sheffield colliery manager, invented this flanged rail in 1787, though the exact date is disputed. The plate rail was taken up by Benjamin Outram for wagonways serving his canals, manufacturing them at his Butterley ironworks. In 1803, William Jessop opened the Surrey Iron Railway, a double track plateway sometimes erroneously cited as the world's first public railway, in south London.
William Jessop had earlier used a form of all-iron edge rail and flanged wheels successfully for an extension to the Charnwood Forest Canal at Nanpantan, Loughborough, Leicestershire in 1789. In 1790, Jessop and his partner Outram began to manufacture edge-rails, and Jessop became a partner in the Butterley Company in the same year. The first public edgeway (and thus also the first public railway) built was the Lake Lock Rail Road in 1796. Although the primary purpose of the line was to carry coal, it also carried passengers.
These two systems of constructing iron railways, the "L" plate-rail and the smooth edge-rail, continued to exist side by side well into the early 19th century. The flanged wheel and edge-rail eventually proved their superiority and became the standard for railways.
Cast iron used in rails proved unsatisfactory because it was brittle and broke under heavy loads. Wrought iron rails, introduced by John Birkinshaw in 1820, replaced cast iron. Wrought iron (usually simply referred to as "iron") was a ductile material that could undergo considerable deformation before breaking, making it more suitable for rails. But iron was expensive to produce until Henry Cort patented the puddling process in 1784; in 1783 Cort had also patented the rolling process, which was 15 times faster at consolidating and shaping iron than hammering. These processes greatly lowered the cost of producing iron and rails. The next important development in iron production was hot blast, developed by James Beaumont Neilson (patented 1828), which considerably reduced the amount of coke (fuel) or charcoal needed to produce pig iron.
Wrought iron was a soft material that contained slag or dross. The softness and dross tended to make iron rails distort and delaminate, and they lasted less than 10 years, sometimes as little as one year under high traffic. All these developments in the production of iron eventually led to the replacement of composite wood/iron rails with superior all-iron rails.
The introduction of the Bessemer process, enabling steel to be made inexpensively, led to the era of great expansion of railways that began in the late 1860s. Steel rails lasted several times longer than iron. They also made heavier locomotives possible, allowing for longer trains and improving the productivity of railroads. However, the Bessemer process introduced nitrogen into the steel, which caused it to become brittle with age. The open hearth furnace began to replace the Bessemer process near the end of the 19th century, improving the quality of steel and further reducing costs. Steel thus completely replaced iron in rails and became standard for all railways.
The first passenger horsecar or tram, the Swansea and Mumbles Railway, opened between Swansea and Mumbles in Wales in 1807. Horses remained the preferred motive power for trams even after the arrival of steam engines, well until the end of the 19th century, chiefly because horse-cars were cleaner than steam-driven trams, which filled city streets with smoke.
Steam power introduced
James Watt, a Scottish inventor and mechanical engineer, greatly improved the steam engine of Thomas Newcomen, hitherto used to pump water out of mines. Watt developed a reciprocating engine in 1769, capable of powering a wheel. Although the Watt engine powered cotton mills and a variety of machinery, it was a large stationary engine. It could not be otherwise: the state of boiler technology necessitated the use of low pressure steam acting upon a vacuum in the cylinder, which required a separate condenser and an air pump. Nevertheless, as the construction of boilers improved, Watt investigated the use of high-pressure steam acting directly upon a piston. This raised the possibility of a smaller engine that might be used to power a vehicle, and he patented a design for a steam locomotive in 1784. His employee William Murdoch produced a working model of a self-propelled steam carriage in that year.
The first full-scale working railway steam locomotive was built in the United Kingdom in 1804 by Richard Trevithick, a British engineer born in Cornwall. It used high-pressure steam to drive the engine by one power stroke, with a large flywheel in the transmission system to even out the action of the piston rod. On 21 February 1804, the world's first steam-powered railway journey took place when Trevithick's unnamed steam locomotive hauled a train along the tramway of the Penydarren ironworks, near Merthyr Tydfil in South Wales. Trevithick later demonstrated a locomotive operating upon a piece of circular rail track in Bloomsbury, London, the Catch Me Who Can, but never got beyond the experimental stage with railway locomotives, not least because his engines were too heavy for the cast-iron plateway track then in use.
The first commercially successful steam locomotive was Matthew Murray's rack locomotive Salamanca, built for the Middleton Railway in Leeds in 1812. This twin-cylinder locomotive was light enough not to break the edge-rails and solved the problem of adhesion by a cog-wheel using teeth cast on the side of one of the rails; it was thus also the first rack railway. This was followed in 1813 by the locomotive Puffing Billy, built by Christopher Blackett and William Hedley for the Wylam Colliery Railway, the first successful locomotive running by adhesion only. This was accomplished by distributing the weight between a number of wheels. Puffing Billy is now on display in the Science Museum in London, making it the oldest locomotive in existence.
In 1814 George Stephenson, inspired by the early locomotives of Trevithick, Murray and Hedley, persuaded the manager of the Killingworth colliery where he worked to allow him to build a steam-powered machine. Stephenson played a pivotal role in the development and widespread adoption of the steam locomotive. His designs considerably improved on the work of the earlier pioneers. He built the locomotive Blücher, also a successful flanged-wheel adhesion locomotive.
In 1825 he built the locomotive Locomotion for the Stockton and Darlington Railway in the north east of England, which became the first public steam railway in the world, although it used both horse power and steam power on different runs. In 1829 he built the locomotive Rocket, which entered and won the Rainhill Trials. This success led to Stephenson establishing his company as the pre-eminent builder of steam locomotives for railways in Great Britain and Ireland, the United States, and much of Europe. The first public railway worked exclusively by steam locomotives was the Liverpool and Manchester Railway, opened in 1830. Steam power continued to be the dominant power system in railways around the world for more than a century.
Electric power introduced
The first known electric locomotive was built in 1837 by chemist Robert Davidson of Aberdeen in Scotland, and it was powered by galvanic cells (batteries); it was thus also the earliest battery electric locomotive. Davidson later built a larger locomotive named Galvani, exhibited at the Royal Scottish Society of Arts Exhibition in 1841. The seven-ton vehicle had two direct-drive reluctance motors, with fixed electromagnets acting on iron bars attached to a wooden cylinder on each axle, and simple commutators. It hauled a load of six tons at four miles per hour (6 kilometers per hour) for a distance of one and a half miles (2 kilometers). It was tested on the Edinburgh and Glasgow Railway in September of the following year, but the limited power from batteries prevented its general use. It was destroyed by railway workers, who saw it as a threat to their job security.
Werner von Siemens demonstrated an electric railway in 1879 in Berlin. The world's first electric tram line, the Gross-Lichterfelde Tramway, opened in Lichterfelde near Berlin, Germany, in 1881. It was built by Siemens. The tram ran on 180 volts DC, supplied through the running rails. In 1891 the track was equipped with an overhead wire and the line was extended to Berlin-Lichterfelde West station. The Volk's Electric Railway opened in 1883 in Brighton, England; it is still operational, making it the oldest operational electric railway in the world. Also in 1883, the Mödling and Hinterbrühl Tram opened near Vienna in Austria; it was the first tram line in the world in regular service powered from an overhead line. Five years later, in 1888, electric trolleys were pioneered in the U.S. on the Richmond Union Passenger Railway, using equipment designed by Frank J. Sprague.
The first use of electrification on a main line was on a four-mile stretch of the Baltimore Belt Line of the Baltimore and Ohio Railroad (B&O) in 1895, connecting the main portion of the B&O to the new line to New York through a series of tunnels around the edges of Baltimore's downtown. Electricity quickly became the power supply of choice for subways, abetted by Sprague's invention of multiple-unit train control in 1897. By the early 1900s most street railways were electrified.
The London Underground, the world's oldest underground railway, opened in 1863, and it began operating electric services using a fourth rail system in 1890 on the City and South London Railway, now part of the London Underground Northern line. This was the first major railway to use electric traction. The world's first deep-level electric railway, it runs from the City of London, under the River Thames, to Stockwell in south London.
The first practical AC electric locomotive was designed by Charles Brown, then working for Oerlikon, Zürich. In 1891, Brown had demonstrated long-distance power transmission, using three-phase AC, between a hydro-electric plant at Lauffen am Neckar and Frankfurt am Main West, a distance of 280 km. Using experience he had gained while working for Jean Heilmann on steam-electric locomotive designs, Brown observed that three-phase motors had a higher power-to-weight ratio than DC motors and, because of the absence of a commutator, were simpler to manufacture and maintain. However, they were much larger than the DC motors of the time and could not be mounted in underfloor bogies: they could only be carried within locomotive bodies.
In 1894, the Hungarian engineer Kálmán Kandó developed new types of three-phase asynchronous electric drive motors and generators for electric locomotives. Kandó's early 1894 designs were first applied in a short three-phase AC tramway in Evian-les-Bains (France), which was constructed between 1896 and 1898. In 1896, Oerlikon installed the first commercial example of the system on the Lugano Tramway. Each 30-tonne locomotive had two 110 kW (150 hp) motors run by three-phase 750 V 40 Hz power fed from double overhead lines. Three-phase motors run at constant speed and provide regenerative braking, and are well suited to steeply graded routes; the first main-line three-phase locomotives were supplied by Brown (by then in partnership with Walter Boveri) in 1899 on the 40 km Burgdorf-Thun line in Switzerland.
Italian railways were the first in the world to introduce electric traction for the entire length of a main line rather than just a short stretch. The 106 km Valtellina line was opened on 4 September 1902, designed by Kandó and a team from the Ganz works. The electrical system was three-phase at 3 kV 15 Hz. In 1918, Kandó invented and developed the rotary phase converter, enabling electric locomotives to use three-phase motors while supplied via a single overhead wire carrying the simple industrial-frequency (50 Hz) single-phase AC of the high-voltage national networks.
An important contribution to the wider adoption of AC traction came from SNCF of France after World War II. The company conducted trials at 50 Hz AC and established it as a standard. Following SNCF's successful trials, 50 Hz, now also called industrial frequency, was adopted as the standard for main lines across the world.
Diesel power introduced
The earliest recorded examples of an internal combustion engine for railway use included a prototype designed by William Dent Priestman, which was examined by Sir William Thomson in 1888, who described it as a "[Priestman oil engine] mounted upon a truck which is worked on a temporary line of rails to show the adaptation of a petroleum engine for locomotive purposes". In 1894, a 20 hp (15 kW) two-axle machine built by Priestman Brothers was used on the Hull Docks. In 1906, Rudolf Diesel, Adolf Klose and the steam and diesel engine manufacturer Gebrüder Sulzer founded Diesel-Sulzer-Klose GmbH to manufacture diesel-powered locomotives. Sulzer had been manufacturing diesel engines since 1898. The Prussian State Railways ordered a diesel locomotive from the company in 1909. The world's first diesel-powered locomotive was operated in the summer of 1912 on the Winterthur-Romanshorn railway in Switzerland, but was not a commercial success. The locomotive weighed 95 tonnes, with a power of 883 kW and a maximum speed of 100 km/h.
Small numbers of prototype diesel locomotives were produced in a number of countries through the mid-1920s. A significant breakthrough had occurred in 1914, when Hermann Lemp, a General Electric electrical engineer, developed and patented a reliable direct-current electrical control system (subsequent improvements were also patented by Lemp). Lemp's design used a single lever to control both engine and generator in a coordinated fashion, and was the prototype for all diesel-electric locomotive control systems. Also in 1914, the world's first functional diesel-electric railcars were produced for the Königlich-Sächsische Staatseisenbahnen (Royal Saxon State Railways) by Waggonfabrik Rastatt, with electric equipment from Brown, Boveri & Cie and diesel engines from the Swiss Sulzer AG. They were classified as DET 1 and DET 2. The first regular use of diesel-electric locomotives was in switching (shunter) applications. General Electric produced several small switching locomotives in the 1930s (the famous "44-tonner" switcher was introduced in 1940), while Westinghouse Electric and Baldwin had collaborated to build switching locomotives starting in 1929.
Although high-speed steam and diesel services were started before the 1960s in Europe, they were not very successful. The first electrified high-speed rail line, the Tōkaidō Shinkansen, was introduced in 1964 between Tokyo and Osaka in Japan. Since then high-speed rail transport, operating at speeds up to and above 300 km/h, has been built in Japan, Spain, France, Germany, Italy, the People's Republic of China, Taiwan (Republic of China), the United Kingdom, South Korea, Scandinavia, Belgium and the Netherlands. The construction of many of these lines has resulted in a dramatic decline of short-haul flights and automotive traffic between connected cities, such as the London-Paris-Brussels corridor, Madrid-Barcelona, and Milan-Rome-Naples, as well as on many other major routes. High-speed trains normally operate on standard gauge tracks of continuously welded rail on grade-separated right-of-way that incorporates a large turning radius in its design. While high-speed rail is most often designed for passenger travel, some high-speed systems also offer freight service.
Trains
A train is a connected series of rail vehicles that move along the track. Propulsion for the train is provided by a separate locomotive or by individual motors in self-propelled multiple units. Most trains carry a revenue load, although non-revenue cars exist for the railway's own use, such as for maintenance-of-way purposes. The engine driver (engineer in North America) controls the locomotive or other power cars, although people movers and some rapid transit systems are under automatic control. Traditionally, trains are pulled using a locomotive: one or more powered vehicles at the front of the train provide sufficient tractive force to haul the weight of the full train. This arrangement remains dominant for freight trains and is often used for passenger trains. A push-pull train has the end passenger car equipped with a driver's cab so that the engine driver can remotely control the locomotive. This removes one of the drawbacks of locomotive haulage, since the locomotive need not be moved to the front of the train each time the train changes direction. A railroad car is a vehicle used for the haulage of either passengers or freight. A multiple unit has powered wheels throughout the whole train.
These are used for rapid transit and tram systems, as well as for many short- and long-haul passenger trains. A railcar is a single self-powered car, and may be electrically propelled or powered by a diesel engine. Multiple units have a driver's cab at each end of the unit, and were developed once electric motors and engines could be built small enough to fit under the coach. There are only a few freight multiple units, most of which are high-speed post trains.
Steam locomotives are locomotives whose motive power is provided by a steam engine. Coal, petroleum, or wood is burned in a firebox, boiling water in the boiler to create pressurized steam. The steam travels through the smokebox before leaving via the chimney or smoke stack. In the process, it powers a piston that transmits power directly through a connecting rod (US: main rod) and a crankpin (US: wristpin) on the driving wheel (US: main driver) or to a crank on a driving axle. Steam locomotives have been phased out in most parts of the world for economic and safety reasons, although many are preserved in working order by heritage railways.
Electric locomotives draw power from a stationary source via an overhead wire or third rail; some also or instead use a battery. In locomotives powered by high-voltage alternating current, a transformer in the locomotive converts the high-voltage, low-current supply to the low-voltage, high-current power used in the traction motors that drive the wheels. Modern locomotives may use three-phase AC induction motors or direct current motors. Under certain conditions, electric locomotives are the most powerful form of traction. They are also the cheapest to run and produce less noise and no local air pollution. However, they require high capital investment both for the overhead lines and the supporting infrastructure, and for the generating stations needed to produce the electricity. Accordingly, electric traction is used on urban systems, lines with high traffic, and for high-speed rail.
Diesel locomotives use a diesel engine as the prime mover. The energy transmission may be diesel-electric, diesel-mechanical or diesel-hydraulic, but diesel-electric is dominant. Electro-diesel locomotives are built to run as diesel-electric locomotives on unelectrified sections and as electric locomotives on electrified sections.
A passenger train travels between stations where passengers may embark and disembark. The oversight of the train is the duty of a guard/train manager/conductor. Passenger trains are part of public transport and often make up the stem of the service, with buses feeding to stations. Passenger trains provide long-distance intercity travel, daily commuter trips, or local urban transit services, and they encompass a diversity of vehicles, operating speeds, right-of-way requirements, and service frequencies. Passenger trains can usually be divided into two operations: intercity railway and intracity transit. Whereas intercity railways involve higher speeds, longer routes, and lower frequency (usually scheduled), intracity transit involves lower speeds, shorter routes, and higher frequency (especially during peak hours).
Intercity trains are long-haul trains that operate with few stops between cities. They typically have amenities such as a dining car, and some lines also provide overnight services with sleeping cars. Some long-haul trains have been given a specific name.
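The transformer behaviour described above follows from power being (nearly) conserved: P = V x I, so stepping the voltage down raises the current in proportion. A sketch with assumed, purely illustrative figures:

power_w = 4_000_000                  # assume a 4 MW locomotive, losses ignored
catenary_v, motor_v = 25_000, 1_500  # assumed supply and motor-side voltages
print(power_w / catenary_v)          # 160.0 A drawn from the overhead line
print(power_w / motor_v)             # ~2666.7 A delivered on the motor side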
Regional trains are medium-distance trains that connect cities with outlying, surrounding areas, or provide a regional service, making more stops and running at lower speeds. Commuter trains serve the suburbs of urban areas, providing a daily commuting service. Airport rail links provide quick access from city centres to airports.
High-speed rail services are special inter-city trains that operate at much higher speeds than conventional railways, the threshold usually being put at 200 to 320 kilometres per hour (120 to 200 mph). High-speed trains are used mostly for long-haul service, and most systems are in Western Europe and East Asia. The speed record is 574.8 km/h (357.2 mph), set by a modified French TGV. Magnetic levitation trains such as the Shanghai airport train use under-riding magnets which attract themselves upward towards the underside of a guideway, and this line has achieved somewhat higher peak speeds in day-to-day operation than conventional high-speed railways, although only over short distances. Due to their heightened speeds, route alignments for high-speed rail tend to have shallower grades and broader curves than conventional railways.
Their high kinetic energy translates to higher horsepower-to-ton ratios (e.g. 20 horsepower per short ton or 16 kilowatts per tonne); this allows trains to accelerate and maintain higher speeds and to negotiate steep grades as momentum builds up and is recovered on downgrades (reducing cut, fill, and tunnelling requirements). Since lateral forces act on curves, curvatures are designed with the highest possible radius. All these features are dramatically different from freight operations, justifying exclusive high-speed rail lines where it is economically feasible.
Higher-speed rail services are intercity rail services that have top speeds higher than conventional intercity trains, but not as high as those of high-speed rail services. These services are provided after improvements to the conventional rail infrastructure that allow trains to operate safely at higher speeds.
Rapid transit is an intracity system built in large cities and has the highest capacity of any passenger transport system. It is usually grade-separated and commonly built underground or elevated. At street level, smaller trams can be used. Light rail consists of upgraded trams that have step-free access, their own right-of-way and sometimes underground sections. Monorail systems are elevated, medium-capacity systems. A people mover is a driverless, grade-separated train that serves only a few stations, as a shuttle. Because rapid transit systems are far from uniform, route alignment varies, with diverse rights-of-way (private land, side of road, street median) and geometric characteristics (sharp or broad curves, steep or gentle grades). For instance, the Chicago 'L' trains are designed with extremely short cars to negotiate the sharp curves in the Loop. New Jersey's PATH has similar-sized cars to accommodate curves in the trans-Hudson tunnels. San Francisco's BART operates large cars on its well-engineered routes.
A freight train hauls cargo using freight cars specialized for the type of goods. Freight trains are very efficient, with economy of scale and high energy efficiency. However, their use can be limited by a lack of flexibility when transshipment is needed at both ends of the trip because tracks do not reach the points of pick-up and delivery. Authorities often encourage the use of cargo rail transport due to its energy efficiency and low environmental impact.
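The two power-to-weight figures quoted above for high-speed trains are the same number in different units, which a quick unit check confirms (conversion constants only; no new data):

HP_W = 745.7           # watts per mechanical horsepower
SHORT_TON_KG = 907.18  # kilograms per short ton
w_per_kg = 20 * HP_W / SHORT_TON_KG  # W/kg is numerically equal to kW/tonne
print(round(w_per_kg, 1))  # ~16.4, matching the quoted 16 kW per tonne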
Container trains have become the dominant type in the US for non-bulk haulage. Containers can easily be transshipped between modes, such as ships and trucks, using cranes. Containerization has superseded the boxcar (wagon-load) practice, in which cargo had to be loaded and unloaded manually. The intermodal containerization of cargo has revolutionized the supply chain logistics industry, reducing shipping costs significantly. In Europe, the sliding-wall wagon has largely superseded the ordinary covered wagon. Other types of cars include refrigerator cars, stock cars for livestock, and autoracks for road vehicles. When rail is combined with road transport, a roadrailer allows trailers to be driven onto the train, allowing for an easy transition between road and rail. Bulk handling represents a key advantage for rail transport. Low or even zero transshipment costs, combined with energy efficiency and low inventory costs, allow trains to handle bulk much more cheaply than road transport can. Typical bulk cargo includes coal, ore, grains, and liquids, transported in open-topped cars, hopper cars, and tank cars.
Right of way
Railway tracks are laid upon land owned or leased by the railway company. Owing to the desirability of maintaining modest grades, rails are often laid along circuitous routes in hilly or mountainous terrain. Route length and grade requirements can be reduced by the use of alternating cuttings, bridges, and tunnels, all of which can greatly increase the capital expenditure required to develop a right of way while significantly reducing operating costs and allowing higher speeds on longer-radius curves. In densely urbanized areas, railways are sometimes laid in tunnels to minimize the effects on existing properties. Track consists of two parallel steel rails anchored perpendicular to cross-members called ties (sleepers) of timber, concrete, steel, or plastic, which maintain a consistent distance between the rails, known as the rail gauge. Gauges are usually categorized as standard gauge (used on approximately 54.8% of the world's existing railway lines), broad gauge, and narrow gauge. In addition to the rail gauge, the tracks are laid to conform with a loading gauge, which defines the maximum height and width of railway vehicles and their loads so as to ensure safe passage through bridges, tunnels, and other structures. The track guides the conical, flanged wheels, keeping the cars on the track without active steering and therefore allowing trains to be much longer than road vehicles. The rails and ties are usually placed on a foundation of compacted earth topped by a bed of ballast, which distributes the load from the ties and prevents the track from buckling as the ground settles over time under the weight of passing vehicles. The ballast also serves as a means of drainage. Some more modern track in special areas is attached by direct fixation without ballast. Track may be prefabricated or assembled in place. By welding rails together into lengths of continuous welded rail, the additional wear and tear on rolling stock caused by the small gaps at rail joints can be eliminated; this also makes for a quieter ride on passenger trains. On curves the outer rail may be laid at a higher level than the inner rail. This is called superelevation or cant. It reduces the forces tending to displace the track and makes for a more comfortable ride for standing livestock and standing or seated passengers. A given amount of superelevation is most effective over a limited range of speeds, as the sketch below illustrates.
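To make the cant discussion concrete, the standard equilibrium-superelevation relation is E = G·v²/(g·R), where G is the track gauge, v the speed, and R the curve radius. Below is a minimal Python sketch of that relation; the function name and the example speed and radius are illustrative, not taken from the text above.

def equilibrium_cant_mm(speed_kmh: float, radius_m: float,
                        gauge_mm: float = 1435.0) -> float:
    """Equilibrium superelevation E = G * v**2 / (g * R).

    At this cant, the resultant of weight and centrifugal force is
    perpendicular to the plane of the track for a train at speed_kmh.
    """
    g = 9.81                 # gravitational acceleration, m/s^2
    v = speed_kmh / 3.6      # convert km/h to m/s
    return gauge_mm * v * v / (g * radius_m)

# A train at 160 km/h on a 2,000 m radius standard-gauge curve:
print(round(equilibrium_cant_mm(160, 2000)))  # ~145 (mm)

At the computed cant the track is balanced only for that one speed; slower trains lean inward and faster trains push outward, which is why a given superelevation suits a limited speed range.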
Turnouts, also known as points or switches, are the means of directing a train onto a diverging section of track. Laid similarly to normal track, a turnout typically consists of a frog (common crossing), check rails, and two switch rails. The switch rails may be moved left or right, under the control of the signalling system, to determine which path the train will follow. Spikes in wooden ties can loosen over time, and split or rotten ties may be individually replaced with new wooden ties or concrete substitutes. Concrete ties can also develop cracks or splits and can likewise be replaced individually. Should the rails settle due to soil subsidence, they can be lifted by specialized machinery and additional ballast tamped under the ties to level the rails. Periodically, ballast must be removed and replaced with clean ballast to ensure adequate drainage. Culverts and other passages for water must be kept clear lest water be impounded by the trackbed, causing landslips. Where trackbeds run along rivers, additional protection is usually installed to prevent streambank erosion during times of high water. Bridges require inspection and maintenance, since they are subjected to large surges of stress in a short period of time when a heavy train crosses.
Train inspection systems
The inspection of railway equipment is essential for the safe movement of trains. Many types of defect detectors are in use on the world's railroads, utilizing technologies that range from a simple paddle and switch to infrared and laser scanning and even ultrasonic audio analysis. Their use has averted many rail accidents over the roughly 70 years they have been deployed. Railway signalling is a system used to control railway traffic safely and to prevent trains from colliding. Because they are guided by fixed rails that generate little friction, trains are uniquely susceptible to collision: they frequently operate at speeds that do not allow them to stop quickly, or within the driver's sighting distance, whereas road vehicles, which encounter much higher friction between their rubber tyres and the road surface, have far shorter braking distances. Most forms of train control involve movement authority being passed from those responsible for each section of the rail network to the train crew. Not all methods require the use of signals, and some systems are specific to single-track railways. The signalling process has traditionally been carried out in a signal box, a small building housing the lever frame that the signalman uses to operate switches and signal equipment. Signal boxes are placed at intervals along the route of a railway, each controlling a specified section of track. More recent technological developments have made such operational doctrine superfluous, with signalling operations centralized in regional control rooms. This has been facilitated by the increased use of computers, which allow vast sections of track to be monitored from a single location. The common method of block signalling divides the track into zones guarded by combinations of block signals, operating rules, and automatic-control devices, so that only one train may be in a block at any time.
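As a toy illustration of that one-train-per-block principle, the sketch below models a single block section that admits at most one train at a time; the class and method names are hypothetical and do not correspond to any real signalling product.

class BlockSection:
    """Toy model of absolute block working: a train may enter the
    block only when no other train occupies it."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.occupant = None  # identifier of the train in the block, if any

    def request_entry(self, train: str) -> bool:
        """Admit the train and return True only if the block is clear."""
        if self.occupant is None:
            self.occupant = train  # signal cleared, train enters
            return True
        return False               # signal at danger, train must wait

    def release(self) -> None:
        """Called when the train has completely cleared the block."""
        self.occupant = None

block = BlockSection("A-to-B")
assert block.request_entry("IC 101")           # block was clear
assert not block.request_entry("Freight 77")   # occupied: must wait
block.release()
assert block.request_entry("Freight 77")       # now admitted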
The electrification system provides electrical energy to the trains so that they can operate without a prime mover on board. This lowers operating costs but requires large capital investment along the lines. Mainline and tram systems normally use overhead wires hung from poles along the line, while grade-separated rapid transit sometimes uses a ground-level third rail. Power may be fed as direct or alternating current. The most common DC voltages are 600 and 750 V for tram and rapid transit systems, and 1,500 and 3,000 V for mainlines. The two dominant AC systems are 15 kV and 25 kV. A railway station serves as an area where passengers can board and alight from trains, while a goods station is a yard used exclusively for loading and unloading cargo. Large passenger stations have at least one building providing conveniences for passengers, such as ticket sales and food; smaller stations typically consist only of a platform. Early stations were sometimes built with both passenger and goods facilities. Platforms allow easy access to the trains and are connected to each other via underpasses, footbridges, and level crossings. Some large stations are built as culs-de-sac, with trains operating out in only one direction. Smaller stations normally serve local residential areas and may have connections to feeder bus services. Large stations, in particular central stations, serve as the main public transport hub for the city, offering transfers between rail services and to rapid transit, tram, or bus services. Since the 1980s, there has been an increasing trend to split up railway companies, separating those that own the rolling stock from those that own the infrastructure. This is particularly true in Europe, where the arrangement is required by the European Union; it has allowed open access by any train operator to any portion of the European railway network. In the UK, the railway track is state-owned, with a publicly controlled body (Network Rail) running, maintaining, and developing the track, while Train Operating Companies have run the trains since privatization in the 1990s. In the US, virtually all rail networks and infrastructure outside the Northeast Corridor are privately owned by freight lines, and passenger operators, primarily Amtrak, run as tenants on those freight lines. Consequently, operations must be closely synchronized and coordinated between freight and passenger railroads, with passenger trains often dispatched by the host freight railroad. Because of this shared system, both are regulated by the Federal Railroad Administration (FRA) and may follow AREMA recommended practices for track work and AAR standards for vehicles. The main sources of income for railway companies are ticket revenue (for passenger transport) and shipment fees (for cargo). Discounts and monthly passes are sometimes available for frequent travellers (e.g. season tickets and rail passes). Freight revenue may be sold per container slot or for a whole train; sometimes the shipper owns the cars and pays only for haulage. For passenger transport, advertising income can also be significant. Governments may choose to subsidize rail operation, since rail transport has fewer externalities than other dominant modes of transport. If the railway company is state-owned, the state may simply provide direct subsidies in exchange for increased output. If operations have been privatized, several options are available. Some countries have a system in which the infrastructure is owned by a government agency or company, with open access to the tracks for any company that meets safety requirements. In such cases, the state may choose to provide the tracks free of charge, or for a fee that does not cover all costs; this is seen as analogous to the government providing free access to roads.
For passenger operations, a direct subsidy may be paid to a publicly owned operator, or a public service obligation tender may be held and a time-limited contract awarded to the lowest bidder. Total EU rail subsidies amounted to €73 billion in 2005. Amtrak, the US passenger rail service, and Canada's Via Rail are railroad companies chartered by their respective national governments. As private passenger services declined because of competition from automobiles and airlines, the private railroads that had operated them became shareholders of Amtrak, either paying a cash entrance fee or relinquishing their locomotives and rolling stock. The government subsidizes Amtrak by supplying start-up capital and making up for losses at the end of the fiscal year. Trains can travel at very high speed, but they are heavy, cannot deviate from the track, and require a great distance to stop. Possible accidents include derailment (jumping the track), collision with another train, and collision with automobiles, other vehicles, or pedestrians at level crossings; level crossing collisions account for the majority of rail accidents and casualties. The most important safety measures for preventing accidents are strict operating rules, such as railway signalling and gates or grade separation at crossings. Train whistles, bells, or horns warn of the presence of a train, while trackside signals maintain the distances between trains. An important element in the safety of many high-speed intercity networks, such as Japan's Shinkansen, is that trains run only on dedicated lines without level crossings. This effectively eliminates the potential for collision with automobiles, other vehicles, or pedestrians, vastly reduces the likelihood of collision with other trains, and helps ensure services remain timely. As with any infrastructure asset, railways must keep up with periodic inspection and maintenance to minimize the infrastructure failures that can disrupt freight operations and passenger services. Because passenger trains carry the most critical cargo and usually operate at higher speeds, steeper grades, and higher capacity and frequency, passenger lines are especially important. Inspection practices include track geometry cars and walking inspections. Curve maintenance, especially for transit services, includes gauging, fastener tightening, and rail replacement. Rail corrugation is a common issue on transit systems because of the high number of light-axle wheel passages, which grind the wheel/rail interface. Since maintenance may overlap with operations, maintenance windows (nighttime hours, off-peak hours, altered train schedules or routes) must be closely observed. In addition, passenger safety during maintenance work (inter-track fencing, proper storage of materials, track work notices, hazards of equipment near stations) must be attended to at all times. At times, maintenance access problems arise because of tunnels, elevated structures, and congested cityscapes; here, specialized equipment or smaller versions of conventional maintenance gear are used. Unlike highways or road networks, where capacity is disaggregated into unlinked trips over individual route segments, railway capacity is fundamentally a network system, so many components are both causes and effects of system disruptions.
Maintenance must take account of the many factors affecting a route's performance (type of train service, origins and destinations, seasonal impacts), a line's capacity (length, terrain, number of tracks, types of train control), train throughput (maximum speeds, acceleration and deceleration rates), and service features on shared passenger-freight tracks (sidings, terminal capacities, switching routes, and design type).
Social, economic, and energy aspects
Rail transport is an energy-efficient but capital-intensive means of mechanized land transport. The tracks provide smooth, hard surfaces on which the wheels of the train can roll with relatively little friction. Moving a vehicle on or through a medium (land, sea, or air) requires that it overcome the resistance to its motion caused by friction. A land vehicle's total resistance (in pounds or newtons) is a quadratic function of the vehicle's speed:
R = a + bv + cv²
where:
- R denotes total resistance
- a denotes the initial constant resistance
- b denotes the coefficient of the velocity term
- c denotes the coefficient of the squared-velocity term, a function of the shape, frontal area, and sides of the vehicle
- v denotes velocity
Essentially, resistance depends on the contact between the vehicle and the running surface. Metal wheels on metal rails have a significant advantage in overcoming resistance compared with rubber-tyred wheels on any road surface (railway: 0.001g at 10 miles per hour (16 km/h) and 0.024g at 60 miles per hour (97 km/h); truck: 0.009g at 10 miles per hour and 0.090g at 60 miles per hour). In terms of cargo capacity, combining the speed and size of loads moved in a day:
- a human can carry 100 pounds (45 kg) for 20 miles (32 km) per day, or 1 ton-mile/day (1.5 tkm/day)
- a horse and wheelbarrow can carry 4 ton-miles/day (5.8 tkm/day)
- a horse cart on good pavement can carry 10 ton-miles/day (14 tkm/day)
- a fully utilized truck can carry 20,000 ton-miles/day (29,000 tkm/day)
- a long-haul train can carry 500,000 ton-miles/day (730,000 tkm/day)
Most trains take 250–400 trucks off the road, making roads safer. In terms of horsepower-to-weight ratio, a slow-moving barge requires 0.2 horsepower per short ton (0.16 kW/t), a railway or pipeline requires 2.5 horsepower per short ton (2.1 kW/t), and a truck requires 10 horsepower per short ton (8.2 kW/t); at higher speeds, however, rail overtakes the barge as the most economical mode. As an example, a typical modern wagon can hold up to 113 tonnes (125 short tons) of freight on two four-wheel bogies. The track distributes the weight of the train evenly, allowing significantly greater loads per axle and wheel than in road transport and leading to less wear and tear on the permanent way. This can save energy compared with other forms of transport, such as road transport, which depends on the friction between rubber tyres and the road. Trains also have a small frontal area in relation to the load they carry, which reduces air resistance and thus energy usage. In addition, the track guiding the wheels allows very long trains to be pulled by one or a few engines and driven by a single operator, even around curves, yielding economies of scale in both manpower and energy use; by contrast, in road transport more than two articulations cause fishtailing and make the vehicle unsafe.
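Using the rolling-resistance figures quoted above, a short sketch makes the rail-versus-truck advantage explicit by converting each resistance fraction into pounds of resistive force per short ton. The dictionary layout and names are illustrative; the numbers come directly from the text.

# Resistance expressed as a fraction of vehicle weight ("g"), taken from
# the figures quoted in the text above.
RESISTANCE_G = {
    "railway": {10: 0.001, 60: 0.024},  # speed in mph -> fraction of weight
    "truck":   {10: 0.009, 60: 0.090},
}

LB_PER_SHORT_TON = 2000  # 1 short ton = 2,000 lb

for mode, points in RESISTANCE_G.items():
    for speed_mph, fraction in sorted(points.items()):
        force = fraction * LB_PER_SHORT_TON
        print(f"{mode:7s} at {speed_mph:2d} mph: "
              f"{force:5.0f} lb of resistance per short ton")

# Output: railway needs 2 lb per ton at 10 mph and 48 lb at 60 mph,
# versus 18 lb and 180 lb for the truck, roughly a fourfold to ninefold gap.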
Considering only the energy spent to move the means of transport, and using the example of the urban area of Lisbon, electric trains prove on average about 20 times more efficient than automobiles for passenger transport, when energy is measured per passenger-distance at similar occupancy ratios. Assuming an automobile consumes around 6 l/100 km (47 mpg‑imp; 39 mpg‑US) of fuel, that the average car in Europe carries around 1.2 passengers (an occupancy ratio of about 24%), and that one litre of fuel amounts to about 8.8 kWh (32 MJ), the result is an average of 441 Wh (1,590 kJ) per passenger-km. This compares with a modern train with an average occupancy of 20% and a consumption of about 8.5 kW⋅h/km (31 MJ/km; 13.7 kW⋅h/mi), equating to 21.5 Wh (77 kJ) per passenger-km, 20 times less than the automobile. Because of these benefits, rail transport is a major form of passenger and freight transport in many countries. It is ubiquitous in Europe, with an integrated network covering virtually the whole continent. In India, China, South Korea, and Japan, many millions use trains as regular transport. In North America, freight rail transport is widespread and heavily used, but intercity passenger rail is relatively scarce outside the Northeast Corridor because of a greater preference for other modes, particularly automobiles and airplanes. South Africa, northern Africa, and Argentina have extensive rail networks, but some railways elsewhere in Africa and South America are isolated lines. Australia has a generally sparse network befitting its population density, but some areas, especially the southeast, have significant networks; in addition to the pre-existing east-west transcontinental line, a north-south line has been constructed. The highest railway in the world is the line to Lhasa, in Tibet, which partly runs over permafrost. Western Europe has the highest railway density in the world, and many individual trains there operate through several countries despite technical and organizational differences in each national network.
Social and economic benefits
Railways are central to the formation of modernity and ideas of progress. They contribute to social vibrancy and economic competitiveness by transporting multitudes of customers and workers to city centres and inner suburbs. Hong Kong has recognized rail as "the backbone of the public transit system" and has developed its franchised bus system and road infrastructure in comprehensive alignment with its rail services. China's large cities, such as Beijing, Shanghai, and Guangzhou, treat rail transit lines as the framework, and bus lines as the main body, of their metropolitan transportation systems. The Japanese Shinkansen was built to meet growing traffic demand in the "heart of Japan's industry and economy" along the Tokyo-Kobe corridor. During much of the 20th century, rail was an invaluable element of military mobilization, allowing the quick and efficient transport of large numbers of reservists to their mustering points and of infantry soldiers to the front lines. By the 21st century, however, rail transport, limited to locations on the same continent and vulnerable to air attack, had largely been displaced in this role by air transport. Railways channel growth towards dense city agglomerations and along their arteries, in contrast to highway expansion, which is characteristic of US
transportation policy and incentivizes the development of suburbs at the periphery, contributing to increased vehicle miles travelled, carbon emissions, development of greenfield spaces, and depletion of natural reserves. Rail-oriented arrangements, by contrast, revalue city spaces, local taxes, and housing values, and promote mixed-use development.
Modern rail as economic development indicator
European development economists have argued that the existence of modern rail infrastructure is a significant indicator of a country's economic advancement, a perspective illustrated notably through the Basic Rail Transportation Infrastructure Index (known as the BRTI Index). In total, Russian Railways receives 90 billion roubles (around US$1.5 billion) annually from the government. Current subsidies for Amtrak (passenger rail) are around $1.4 billion; the US rail freight industry does not receive subsidies. In 2014, total rail spending by China was $130 billion and is likely to remain at a similar level during the country's next Five-Year Plan period (2016–2020). The Indian railways are subsidized by around ₹400 billion (US$6.2 billion), of which around 60% goes to commuter rail and short-haul trips. Indian Railways is the fourth largest railway network in the world, comprising 119,630 kilometres (74,330 mi) of total track and 92,081 km (57,216 mi) of running track over a route of 66,687 km (41,437 mi), with 7,216 stations at the end of 2015-16.
- List of rail transport topics
- List of railroad-related periodicals
- List of railway companies
- List of transnational trains
- List of transnational railways
- List of railway industry occupations
- Mega project
- Mine railway
- Passenger rail terminology
- Rail transport by country
- Rail transport in Walt Disney Parks and Resorts
- Rail usage statistics by country
- Railway systems engineering
- Environmental design in rail transportation
- International Union of Railways
- Transport Revolution
- British Rail
In colloquial language, an average is a single number taken as representative of a list of numbers. Different concepts of average are used in different contexts. Often "average" refers to the arithmetic mean, the sum of the numbers divided by how many numbers are being averaged. In statistics, the mean, median, and mode are all known as measures of central tendency, and in colloquial usage any of these might be called an average value. The most common type of average is the arithmetic mean. If n numbers are given, each denoted by aᵢ (where i = 1, 2, …, n), the arithmetic mean is the sum of the aᵢ divided by n:
A = (a₁ + a₂ + ⋯ + aₙ)/n
The arithmetic mean, often simply called the mean, of two numbers, such as 2 and 8, is obtained by finding a value A such that 2 + 8 = A + A. One finds that A = (2 + 8)/2 = 5. Switching the order of 2 and 8 to read 8 and 2 does not change the resulting value of A. The mean 5 is neither less than the minimum 2 nor greater than the maximum 8. If we increase the number of terms in the list to 2, 8, and 11, the arithmetic mean is found by solving for the value of A in the equation 2 + 8 + 11 = A + A + A. One finds that A = (2 + 8 + 11)/3 = 7. Along with the arithmetic mean, the geometric mean and the harmonic mean are known collectively as the Pythagorean means. The geometric mean of n positive numbers is obtained by multiplying them all together and then taking the nth root. In algebraic terms, the geometric mean of a₁, a₂, …, aₙ is defined as
G = (a₁ · a₂ ⋯ aₙ)^(1/n)
For example, the geometric mean of 2 and 8 is √(2 × 8) = 4. One example where the harmonic mean is useful is when examining the speed of a number of fixed-distance trips. For example, if the speed going from point A to B was 60 km/h and the speed returning from B to A was 40 km/h, then the harmonic mean speed is
2 / (1/60 + 1/40) = 48 km/h
Inequality concerning AM, GM, and HM
A well-known inequality concerning the arithmetic, geometric, and harmonic means of any set of positive numbers is
AM ≥ GM ≥ HM
It is easy to remember by noting that the alphabetical order of the letters A, G, and H is preserved in the inequality. See Inequality of arithmetic and geometric means. Thus for the above harmonic mean example: AM = 50, GM ≈ 49, and HM = 48 km/h. The mode, the median, and the mid-range are often used in addition to the mean as estimates of central tendency in descriptive statistics. These can all be seen as minimizing variation by some measure; see Central tendency § Solutions to variational problems.
| Type | Description | Example | Result |
| --- | --- | --- | --- |
| Arithmetic mean | Sum of values of a data set divided by number of values | (1+2+2+3+4+7+9) / 7 | 4 |
| Median | Middle value separating the greater and lesser halves of a data set | 1, 2, 2, 3, 4, 7, 9 | 3 |
| Mode | Most frequent value in a data set | 1, 2, 2, 3, 4, 7, 9 | 2 |
The most frequently occurring number in a list is called the mode. For example, the mode of the list (1, 2, 2, 3, 3, 3, 4) is 3. It may happen that two or more numbers occur equally often and more often than any other number; in this case there is no agreed definition of mode: some authors say they are all modes, and some say there is no mode. The median is the middle number of the group when the numbers are ranked in order. (If there are an even number of numbers, the mean of the middle two is taken.) A short computational sketch of these measures follows; the by-hand median procedure is then described in more detail.
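Here is a minimal Python sketch of the measures just described, using the standard library's statistics module alongside the explicit formulas; the sample data is the list from the table above.

import math
import statistics

data = [1, 2, 2, 3, 4, 7, 9]

arithmetic = sum(data) / len(data)              # (1+2+2+3+4+7+9)/7 = 4
geometric = math.prod(data) ** (1 / len(data))  # nth root of the product
harmonic = len(data) / sum(1 / x for x in data)

assert arithmetic == statistics.mean(data)
assert arithmetic >= geometric >= harmonic      # AM >= GM >= HM

print(statistics.median(data))  # 3
print(statistics.mode(data))    # 2

# The fixed-distance trip example: 60 km/h out, 40 km/h back.
print(statistics.harmonic_mean([60, 40]))  # 48.0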
To find the median by hand, order the list according to the magnitude of its elements and then repeatedly remove the pair consisting of the highest and lowest values until either one or two values are left. If exactly one value is left, it is the median; if two values remain, the median is their arithmetic mean. This method takes the list 1, 7, 3, 13 and orders it to read 1, 3, 7, 13. Then the 1 and 13 are removed to obtain the list 3, 7. Since two elements remain, the median is their arithmetic mean, (3 + 7)/2 = 5.
Summary of types
| Name | Equation or description |
| --- | --- |
| Median | The middle value that separates the higher half from the lower half of the data set |
| Geometric median | A rotation-invariant extension of the median for points in Rⁿ |
| Mode | The most frequent value in the data set |
| Truncated mean | The arithmetic mean of data values after a certain number or proportion of the highest and lowest data values have been discarded |
| Interquartile mean | A special case of the truncated mean, using the interquartile range; a special case of the inter-quantile truncated mean, which operates on quantiles (often deciles or percentiles) that are equidistant but on opposite sides of the median |
| Winsorized mean | Similar to the truncated mean, but rather than deleting the extreme values, they are set equal to the largest and smallest values that remain |
One can create one's own average metric using the generalized f-mean:
y = f⁻¹( (f(x₁) + f(x₂) + ⋯ + f(xₙ)) / n )
where f is any invertible function. The harmonic mean is an example of this, using f(x) = 1/x, and the geometric mean is another, using f(x) = log x. However, this method for generating means is not general enough to capture all averages. A more general method takes any function g(x₁, x₂, …, xₙ) of a list of arguments that is continuous, strictly increasing in each argument, and symmetric (invariant under permutation of the arguments). The average y is then the value that, when substituted for each member of the list, yields the same function value: g(y, y, …, y) = g(x₁, x₂, …, xₙ). This most general definition still captures the important property of all averages, namely that the average of a list of identical elements is that element itself. The function g(x₁, x₂, …, xₙ) = x₁ + x₂ + ⋯ + xₙ provides the arithmetic mean. The function g(x₁, x₂, …, xₙ) = x₁x₂⋯xₙ (where the list elements are positive numbers) provides the geometric mean. The function g(x₁, x₂, …, xₙ) = −(x₁⁻¹ + x₂⁻¹ + ⋯ + xₙ⁻¹) (where the list elements are positive numbers) provides the harmonic mean.
Average percentage return and CAGR
A type of average used in finance is the average percentage return. It is an example of a geometric mean. When the returns are annual, it is called the compound annual growth rate (CAGR). For example, if we consider a period of two years, and the investment return in the first year is −10% and the return in the second year is +60%, then the average percentage return or CAGR, R, can be obtained by solving the equation (1 − 10%) × (1 + 60%) = (1 − 0.1) × (1 + 0.6) = (1 + R) × (1 + R). The value of R that makes this equation true is 0.2, or 20%. This means that the total return over the two-year period is the same as if there had been 20% growth each year. The order of the years makes no difference: average percentage returns of +60% and −10% give the same result as −10% and +60%. This method can also be generalized to examples in which the periods are not of equal length.
For example, consider a period of half a year for which the return is −23% and a period of two and a half years for which the return is +13%. The average percentage return for the combined period is the single-year return, R, that solves
(1 − 0.23)^0.5 × (1 + 0.13)^2.5 = (1 + R)^(0.5 + 2.5),
giving an average return R of 0.0600, or 6.00%. Given a time series, such as daily stock market prices or yearly temperatures, people often want to create a smoother series to reveal underlying trends or periodic behavior. An easy way to do this is the moving average: choose a number n and create a new series by taking the arithmetic mean of the first n values, then moving forward one place by dropping the oldest value and introducing a new value at the other end of the list, and so on. This is the simplest form of moving average. More complicated forms involve using a weighted average. The weighting can be used to enhance or suppress various periodic behaviors, and the literature on filtering contains very extensive analysis of what weightings to use. In digital signal processing the term "moving average" is used even when the sum of the weights is not 1.0 (so the output series is a scaled version of the averages), because the analyst is usually interested only in the trend or the periodic behavior.
The first recorded use of the arithmetic mean extended from 2 to n cases for the purpose of estimation was in the sixteenth century. From the late sixteenth century onwards, it gradually became a common method for reducing errors of measurement in various areas. At the time, astronomers wanted to extract a true value from noisy measurements, such as the position of a planet or the diameter of the moon. Using the mean of several measured values, scientists assumed that the errors would sum to a relatively small number compared with the total of all measured values. The method of taking the mean to reduce observation errors was indeed developed mainly in astronomy. A possible precursor to the arithmetic mean is the mid-range (the mean of the two extreme values), used for example in Arabian astronomy of the ninth to eleventh centuries, but also in metallurgy and navigation. However, there are various older, vaguer references to the use of the arithmetic mean, which are not as clear but might reasonably relate to our modern definition of the mean. In a text from the 4th century, it was written that (text in square brackets is possible missing text that might clarify the meaning):
- In the first place, we must set out in a row the sequence of numbers from the monad up to nine: 1, 2, 3, 4, 5, 6, 7, 8, 9. Then we must add up the amount of all of them together, and since the row contains nine terms, we must look for the ninth part of the total to see if it is already naturally present among the numbers in the row; and we will find that the property of being [one] ninth [of the sum] only belongs to the [arithmetic] mean itself...
Even older potential references exist. There are records that from about 700 BC, merchants and shippers agreed that damage to the cargo and ship (their "contribution" in case of damage by the sea) should be shared equally among themselves. This might have been calculated using the average, although there seems to be no direct record of the calculation.
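Returning to the simple moving average described above, here is a minimal Python sketch; the function name and the sample series are illustrative.

def moving_average(series, n):
    """Simple moving average: the arithmetic mean of each window of n
    consecutive values, moving forward one place at a time."""
    if n <= 0 or n > len(series):
        raise ValueError("window size must be between 1 and len(series)")
    return [sum(series[i:i + n]) / n
            for i in range(len(series) - n + 1)]

prices = [10, 12, 11, 13, 15, 14, 16]
print(moving_average(prices, 3))
# [11.0, 12.0, 13.0, 14.0, 15.0]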
According to the Oxford English Dictionary, "few words have received more etymological investigation." In the 16th century, average meant a customs duty, or the like, and was used in the Mediterranean area. It came to mean the cost of damage sustained at sea. From that came the "average adjuster", who decided how to apportion a loss between the owners and insurers of a ship and its cargo. Marine damage is either particular average, which is borne only by the owner of the damaged property, or general average, where the owner can claim a proportional contribution from all the parties to the marine venture. The type of calculation used in adjusting general average gave rise to the use of "average" to mean "arithmetic mean". A second English usage, documented as early as 1674 and sometimes spelled "averish", is the residue and second growth of field crops, which were considered suited to consumption by draught animals ("avers"). The root is found in Arabic as awar, in Italian as avaria, in French as avarie, and in Dutch as averij; it is unclear in which language the word first appeared. There is an earlier (from at least the 11th century) unrelated use of the word: it appears to be an old legal term for a tenant's day-labour obligation to a sheriff, probably anglicised from "avera", found in the English Domesday Book (1085).
- Average absolute deviation
- Law of averages
- Expected value
- Central limit theorem
- Non-Newtonian calculus
References:
- Merigo, Jose M. & Casanovas, Montserrat (2009). "The Generalized Hybrid Averaging Operator and its Application in Decision Making". Journal of Quantitative Methods for Economics and Business Administration 9: 69–84. ISSN 1886-516X.
- Bibby, John (1974). "Axiomatisations of the average and a further generalisation of monotonic sequences". Glasgow Mathematical Journal 15: 63–65. doi:10.1017/s0017089500002135.
- Box, George E.P. & Jenkins, Gwilym M. (1976). Time Series Analysis: Forecasting and Control (revised ed.). Holden-Day. ISBN 0816211043.
- Haykin, Simon (1986). Adaptive Filter Theory. Prentice-Hall. ISBN 0130040525.
- "Studies in the History of Probability and Statistics: VII. The Principle of the Arithmetic Mean". Biometrika 45: 130. doi:10.2307/2333051.
- Eisenhart, Churchill (1971). "The development of the concept of the best mean of a set of measurements from antiquity to the present day". Unpublished presidential address, American Statistical Association, 131st Annual Meeting, Fort Collins, Colorado.
- Bakker, Arthur (2003). "The early history of average values and implications for education". Journal of Statistics Education 11 (1): 17–26.
- Waterfield, Robin (1988). The Theology of Arithmetic: On the Mystical, Mathematical and Cosmological Symbolism of the First Ten Numbers. p. 70.
- "average". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005.
- Ray, John (1674). A Collection of English Words Not Generally Used. London: H. Bruges.
Now that students have looked at the end behavior of parent even and odd functions, I give them the opportunity to determine the end behavior of more complex polynomials. The objective is for students to make the connection that the degree of a polynomial affects the graph's end behavior. The students look for end-behavior patterns by entering five polynomial equations into their graphing calculators. Students should begin working on the two polynomial division problems on page 2 of the Flipchart on a scratch piece of paper (they will record their findings later). After about 5 minutes, I plan to go over the first problem using whichever method (long division, synthetic division, or division with generic squares) students request, maybe even all three. Sample questions: Find the equation of the quadratic function g whose graph is shown [graph not reproduced; a labeled point (-6, 3) appears on it]. Suppose that the functions p and q are defined as follows [definitions not reproduced]; find their composition. Write the equation for the parabola whose vertex is (1, -2) and that passes through (2, 1). Based on the following partial set of table values of a polynomial function, determine between which two values you believe a local maximum or local minimum may have occurred. The following graphs are of polynomial functions; determine which of them have an EVEN or ODD degree and whether the leading coefficient is positive or negative. Homework Problems: questions from your textbook. Choose from over 1,500 questions, including randomized numerical and algebraic questions with a math palette for easy entry of mathematical expressions, automatically graded graphs, fill-in-the-blank, multiple choice, multi-select, and multi-step. Notes - 6.3 - Dividing Polynomials - Day 1; Notes - 6.3 - Dividing Polynomials - Day 2; Notes - 6.4 - Polynomial Functions; Notes - 6.6 - Solving Polynomial Equations; Notes - 6.7 - The Remainder and Factor Theorems; Notes - 6.8 - Roots and Zeros. Fundamentals of algebra, functions, graphs, polynomials, rational functions, exponential functions, logarithmic functions, systems of linear equations. Chapter 1 - Fundamentals Introduction. Section 7.2 Polynomial Functions. A2.5.2 Graph and describe the basic shape of the graphs and analyze the general form of the equations for the following families of functions: linear, quadratic, exponential, piece-wise, and absolute value. Graphing Basic Polynomial Functions: The graphs of polynomials of degree 0 or 1 are lines, and the graphs of polynomials of degree 2 are parabolas. The greater the degree of a polynomial, the more complicated its graph can be. However, the graph of a polynomial function is continuous: it has no breaks or holes (see Figure 1). DOC Algebra 2: Polynomial graphs worksheet. Honors Algebra 2: Polynomial graphs worksheet. Classify each of the following functions as constant, linear, quadratic, cubic, or quartic, and give its degree and leading coefficient. Polynomial Functions Graphing - Multiplicity, End Behavior, Finding Zeros. Graphs of polynomials: Challenge problems. Our mission is to provide a free, world-class education to anyone, anywhere. Khan Academy is a 501(c)(3) nonprofit organization. Graphing Polynomial Functions refers to the various methods and techniques used to graph a polynomial function on the Cartesian plane. A general polynomial function has the form f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀, where aₙ ≠ 0 and n is a nonnegative integer. Sometimes an online graphing calculator is used to graph polynomial functions.
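Polynomial division results like the Flipchart problems can also be checked programmatically. Below is a minimal sketch using NumPy's polydiv; the dividend and divisor are arbitrary examples, not the problems from the lesson.

import numpy as np

# Divide x^3 - 2x^2 - 4 by x - 3 (coefficients listed in descending powers).
dividend = [1, -2, 0, -4]
divisor = [1, -3]

quotient, remainder = np.polydiv(dividend, divisor)
print(quotient)   # [1. 1. 3.]  i.e. x^2 + x + 3
print(remainder)  # [5.]        i.e. remainder 5

# Check: (x - 3)(x^2 + x + 3) + 5 should reproduce the dividend.
check = np.polyadd(np.polymul(divisor, quotient), remainder)
assert np.allclose(check, dividend)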
Description: Dugopolski's Precalculus: Functions and Graphs, Fourth Edition gives students the essential strategies they need to make the transition to calculus. The author's emphasis on problem solving and critical thinking is enhanced by the addition of 900 exercises, including new vocabulary and cumulative review problems. Students will find carefully placed learning aids and review tools. Chapter 3, Polynomial and Rational Functions: Review sections as needed from Chapter 0, Basic Techniques, page 8. Refer to page 187 for an example of the work required on paper for all graded homework unless directed otherwise by your instructor. 3.1 Polynomial Functions and Their Graphs, Exercises: Do the following graphs on your calculator. This polynomial is much too large for me to view in the standard screen on my graphing calculator, so either I can waste a lot of time fiddling with WINDOW options, or I can quickly use my knowledge of end behavior. This function is an odd-degree polynomial, so the ends go off in opposite directions, just like every cubic I've ever graphed.
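The end-behavior rule invoked here (degree and leading coefficient determine what the ends do) can be verified symbolically. Here is a minimal SymPy sketch; the polynomials are arbitrary examples.

import sympy as sp

x = sp.symbols('x')

f = x**3 - 5*x + 1            # odd degree, positive leading coefficient
print(sp.limit(f, x, sp.oo))  # oo  : right end rises
print(sp.limit(f, x, -sp.oo)) # -oo : left end falls (opposite directions)

g = -2*x**4 + x               # even degree, negative leading coefficient
print(sp.limit(g, x, sp.oo))  # -oo : both ends fall
print(sp.limit(g, x, -sp.oo)) # -oo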
Qualitative data analysis is the process of examining non-numeric data in order to reveal themes and patterns that meet the research objectives. Common examples of qualitative data include interviews, audio, video, and notes from observations of people's opinions, values, behaviour, and feelings. Here are the steps to analyse qualitative data effectively. The first step is transcribing your data, because the data you have collected is unstructured and hence does not make sense to readers. Transcription renders your data in presentable textual form. You can use various software packages specifically designed to support transcription and qualitative analysis, such as NVivo, EvaSys, and ATLAS.ti.
Organise your data
Once you have transcribed the data, the next step is to organise it. Reread your research questions and organise the data according to those questions. Use one-dimensional tables to describe a single feature and two-dimensional tables when you are referring to two variables. Coding means categorising data by assigning a label to sentences, phrases, and paragraphs. Specific acts, events, activities, strategies, and any other non-quantifiable elements can be coded. You can code manually or use software like NVivo and ATLAS.ti. Here are ways to identify themes and codes (a small computational sketch of the first technique appears at the end of this section):
- Look for commonly used words and for indigenous terms used by respondents; such word repetitions can indicate emotions.
- Look for the frequent use of keywords in sentences.
- Codes can be used to explain the actions, conditions, interactions, and consequences of phenomena.
- Look for metaphors people use to indicate their beliefs and ideology about a particular event.
In short, coding can be divided into three kinds: descriptive coding, which summarises the qualitative data in words; in-vivo coding, which uses the language of respondents; and pattern coding, which is used to find patterns in the collected data. The following are questions you should consider while coding the data:
- What are your respondents doing?
- How do your respondents understand a particular event?
- How do they communicate, and what exactly do they do to handle a particular situation or event?
- What did you learn while taking notes?
- What surprised, intrigued, or disturbed you?
The data validation process ensures the data is valid and correct. It involves code validation, data range validation, data type validation, and structure validation. Validation is crucial for analysing the validity of outcomes and for ensuring consistency between the produced results.
Concluding data analysis
This step requires you to state your findings and establish a link between the outcomes and the research objectives. You will also highlight pros and cons, study limitations, and areas for further research. Qualitative data analysis can be a backbreaking and frustrating job, but you can carry it out successfully if you use the right tools.
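As a small illustration of the word-repetition technique mentioned under coding, the sketch below counts candidate theme words across a handful of invented interview snippets; the stopword list and sample responses are illustrative only.

import re
from collections import Counter

# Invented interview snippets for illustration.
responses = [
    "I felt unsafe walking home, really unsafe after dark.",
    "The streets feel unsafe; lighting is poor and I worry constantly.",
    "Poor lighting makes me worry about walking home.",
]

stopwords = {"i", "the", "and", "is", "a", "me", "about", "after"}
words = []
for text in responses:
    words += [w for w in re.findall(r"[a-z']+", text.lower())
              if w not in stopwords]

# The most repeated words hint at candidate codes/themes.
print(Counter(words).most_common(5))
# e.g. [('unsafe', 3), ('walking', 2), ('home', 2), ('lighting', 2), ('worry', 2)]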
Implicit Differentiation Calculator is a free online tool that displays the derivative of a given function with respect to a variable. BYJU'S online implicit differentiation calculator makes the calculation faster, and the derivative of the implicit function is displayed in a fraction of a second. How to Use the Implicit Differentiation Calculator? The procedure for using the implicit differentiation calculator is as follows: Step 1: Enter the equation in the input field. Step 2: Click the button "Submit" to get the derivative of the function. Step 3: The derivative will be displayed in a new window. What is Implicit Differentiation? In calculus, a function may sometimes be given in implicit form, meaning that it is expressed in terms of both x and y. For example, the implicit form of a circle's equation is x² + y² = r². Differentiation is the process of finding the derivative of a function, and implicit differentiation takes three steps: Step 1: Differentiate the function with respect to x. Step 2: Collect all dy/dx terms on one side. Step 3: Solve for dy/dx. The standard form for representing an implicit function is f(x, y) = 0. Some examples of implicit functions are x² + 4y² = 0 and x² + y² + xy = 1. Frequently Asked Questions on the Implicit Differentiation Calculator: What is meant by an implicit function? An implicit function is a function in which the dependent and independent variables are kept together on one side of the equation. Why do we use implicit differentiation? Implicit differentiation is used to find the derivative of y with respect to x without solving the given equation for y. Mention the difference between implicit differentiation and partial differentiation. In implicit differentiation, all the variables are differentiated. In partial differentiation, the derivative is taken with respect to one variable while the other variables are held constant.
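The three-step process can also be reproduced programmatically. Here is a minimal sketch using SymPy, whose idiff helper performs implicit differentiation of the circle example directly; the variable names are illustrative.

import sympy as sp

x, y = sp.symbols('x y')
r = sp.symbols('r', positive=True)

# Circle x^2 + y^2 = r^2, rewritten as an expression equal to zero.
circle = x**2 + y**2 - r**2
print(sp.idiff(circle, y, x))   # -x/y

# The manual three-step route: treat y as y(x), differentiate, solve.
Y = sp.Function('y')
expr = x**2 + Y(x)**2 - r**2
dy_dx = sp.solve(sp.diff(expr, x), Y(x).diff(x))[0]
print(dy_dx)                    # -x/y(x)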
R Basics (stats): Data Frames Data frames are the tables in which R stores data. If you recall vectors from the first set of R notes, a data frame can be imagined as a collection of vectors of the same length. We have already created vectors, named them and plotted them on histograms. In this note we will create data frames, aggregate and plot. Let's start with baby steps and create a small data frame in a new script. You can open a new script by clicking on File and New Script. You can copy and paste the following lines into your new script, then select the lines and run them as shown in the figure. Only this fragment of the original listing survives (a reconstructed version appears in the sketch below): loan_type=c("mortgage","student loan", "mortgage"), Our first data frame consisted of seven vectors: Customer_Id, loan_type, First_Name, Last_name, Gender, Zip_code and amount. NOTE: R is case sensitive. That is why I have used lower and upper case, for you to practice. After we run the lines we want to see how our first data frame looks; as the hint below notes, the View() command will suffice. We have already started to get some idea about our data. We are missing the gender of customer 002, and we can learn more with a couple of commands. The classes might be misleading, and it is important to know the type (also known as the class) of each column in the data frame. By default (in versions of R before 4.0), R treats strings in data frames as factors. Therefore, specifying stringsAsFactors=FALSE is an example of best practice. If you need any of the columns as a factor, you can define it while you are working with the data. Your data frame should be created as: loan_type=c("mortgage","student loan", "mortgage"), amount=c(100000,50000, 15000), stringsAsFactors=FALSE) As we already pointed out, we don't know the gender of customer 002, and we want to remove that record from future aggregations. The best practice in R is to convert blank and unformatted fields to NA. So, let's do that (this listing is also covered by the sketch below): # Another way of dealing with a particular column would be as follows: Now we only have two records, and we don't have any missing fields. Entering data into a data frame is as common as dealing with missing fields, so let's see how we can enter a new record. HINT: View starts with a capital letter, which is not very common for R commands. R commands usually start with lowercase letters. HINT: Pause a moment and look at the row numbers! The rows have names created by default in R. Since we omitted the row without gender information, R no longer knows the original name of each row and assigns the numbers 1, 2, ... It is a good habit to remember to reset the row names. After we have done the basic formatting, we want to add a new demographics column, date of birth (month and year), and format it as a date. To simplify the formatting, let's assume they were all born on the first day of the month. my_third_df$DOB<-c("Jan 1975","Feb 1939","Jun 1990") # We want to keep the date information as a date: my_third_df$DOB<-as.Date(paste('01', my_third_df$DOB), format='%d %b %Y') By now you might feel confident creating and doing simple manipulations with data frames. Let's create a second data frame. This time I will create it in a slightly different way; you will be the judge of the best implementation. One surviving fragment shows it was built from empty typed columns: File_No = character(), ## Let's add lines of data # Let's add a loan date my_support_df$Loan_Date<-c("Feb 2010", "Jun 2015") my_support_df$Loan_Date<-as.Date(paste('01', my_support_df$Loan_Date), format='%d %b %Y') The support data frame and the first data frame we created share the same Customer_Id. Thus, we will be able to JOIN (merge, in R) by Customer_Id.
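The surviving fragments above are enough to reconstruct the first data frame. Here is a hedged, runnable sketch: the loan_type and amount values come from the notes, while the customer IDs, names, genders and zip codes are hypothetical fillers (the original values did not survive), and the names my_first_df and my_third_df are inferred from the text.

# Reconstructed sketch of the notes' first data frame.
my_first_df <- data.frame(
  Customer_Id = c("001", "002", "003"),            # inferred from "customer 002"
  loan_type   = c("mortgage", "student loan", "mortgage"),
  First_Name  = c("Ada", "Grace", "Alan"),         # hypothetical
  Last_name   = c("Lovelace", "Hopper", "Turing"), # hypothetical
  Gender      = c("F", "", "M"),                   # 002 is blank, as in the notes
  Zip_code    = c("10001", "60601", "94105"),      # hypothetical
  amount      = c(100000, 50000, 15000),
  stringsAsFactors = FALSE
)

View(my_first_df)    # inspect the data frame in the viewer
str(my_first_df)     # check the class of each column

# Convert the blank gender to NA, then drop incomplete rows
my_first_df$Gender[my_first_df$Gender == ""] <- NA
my_third_df <- na.omit(my_first_df)
rownames(my_third_df) <- NULL    # reset the default row names

After na.omit() only customers 001 and 003 remain, matching the "two records" the notes describe, and resetting the row names gives the clean 1, 2 numbering the hint discusses.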
HINT: Try help(merge) and example(merge) for further information. Merge provides inner and outer joins as well as a cross join. ## In our case by.x = by.y, so we can simply write by="Customer_Id" Outer_join <- merge(x = my_third_df, y = my_support_df, by = "Customer_Id", all = TRUE) Left_Outer_join <- merge(x = my_third_df, y = my_support_df, by = "Customer_Id", all.x = TRUE) Right_Outer_join <- merge(x = my_third_df, y = my_support_df, by = "Customer_Id", all.y = TRUE) Cross_join <- merge(x = my_third_df, y = my_support_df, by = NULL) I showed earlier how to add and change column names, and we can apply that knowledge to this data frame as well. The amount column shows the total amount of the loan, which might be confusing for future users. Let's make it right and change the name. The support data frame included the paid amount, so we can now calculate the unpaid loan amount and create a new column called Unpaid_loan. First, let's format the paid amount as an integer. HINT: You can use other numeric formats to see how they look and how the calculations work. I want to finish this part of the notes with basic visualization, and with functions for finding out which directory we are in and where we want to save our output file (a sketch of these steps follows below). ## Basic visual summary of the data ## Setting up the directory and saving a file The next notes will include reading csv files and some more basics of aggregation. The R basics are very powerful. The beauty of R and open-source programs is that there is no single way of solving a problem. I know that you will have better solutions and better plots. Thank you for reading my notes; I hope you feel more comfortable using R for your basic analysis. This article was taken from http://datasciencecentral.com
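The rename, the unpaid-amount calculation, the visual summary and the directory handling referred to above did not survive on the original page. Here is a minimal sketch of what those steps typically look like, assuming the support data frame carries a Paid_amount column (that column name, the plot, and the output directory are guesses):

# Rename the confusing 'amount' column
colnames(Outer_join)[colnames(Outer_join) == "amount"] <- "Total_loan_amount"

# Format the paid amount as an integer and compute the unpaid balance
Outer_join$Paid_amount <- as.integer(Outer_join$Paid_amount)
Outer_join$Unpaid_loan <- Outer_join$Total_loan_amount - Outer_join$Paid_amount

## Basic visual summary of the data
barplot(Outer_join$Total_loan_amount,
        names.arg = Outer_join$Customer_Id,
        main = "Total loan amount by customer")

## Setting up the directory and saving a file
getwd()                          # which directory are we in?
setwd("~/r_notes_output")        # hypothetical output directory
write.csv(Outer_join, "loans.csv", row.names = FALSE)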
Calibrating the Moon In the early 1600s, small telescopes appeared as novelty items for sale on the streets of France and Italy. They were simple, two-lens devices, mostly useful for spying on the neighbors, but they caught the attention of Galileo. Image: The Arecibo radio telescope is currently the largest single-dish telescope in the world used in radio astronomy. In 1974, Arecibo was used to broadcast a message from Earth to the globular star cluster M13. Credit: NAIC – Arecibo Observatory, David Parker / Science Photo Library It didn't take this famous Italian long to figure out how they worked, and he was soon constructing telescopes for himself. By 1610, Galileo had built a yard-long, tubular instrument fronted by a small, bubbly, one-inch lens. It boasted an unimpressive 20x magnification. Peering through this toy-like device, he was able to see the four large moons of Jupiter. This discovery changed our paradigm for the solar system, and earned Galileo endless column inches in astronomy textbooks. For Galileo, setting up for observing was pretty straightforward: (1) take telescope outdoors, (2) position eyeball near the small end, and (3) make groundbreaking finds. Needless to say, for today's high-precision research telescopes, setup is more complicated. Arecibo is no exception. Unlike our radio astronomer colleagues, we can't count on signals that will be heard all over the dial. Such broad-band emissions, the type spewed into space by natural cosmic broadcasters (e.g., quasars), are not much affected by terrestrial interference: the forest of signals that sprouts from local radar, GPS satellites, telecommunication birds, etc. Sure, these appurtenances of modern society produce nasty static at a few frequencies, but for most radio astronomy their effect is no greater than the disturbance caused by an (admittedly unusual) convention of flautists at the beach. The notes get lost in the wide-band roar of the (quasar) surf. For SETI, it's different. We're hunting for narrow-band signals, the very same type as the man-made interference that fills the airwaves. This RFI (Radio Frequency Interference) can clearly frustrate our search. To avoid this problem, we take a cue from Charles Messier, the 18th-century French astronomer who tried to help comet seekers by cataloging all the potentially confusing fuzzy objects in the sky. On our first day out, the Project Phoenix team points the telescope overhead and locks it down. We then do RFI scans by slowly stepping up the microwave dial and noting all the narrow-band signals, and even some (such as GPS broadcasts) that are a bit less narrow. These are cataloged into an online database that can be used during the search to identify (and quickly toss out) persistent terrestrial signals. During the course of observations, additional earth-bound signals are found, and the database grows. We don't bother to try to identify these signals: whether they're spy satellites or airport radars is of no concern. We log 'em and leave 'em. Another class of disturbing signals that could confuse our search system are the internal birdies caused by the endless racks of electronic equipment that bulge the walls of any radio observatory. These, too, are narrow-band signals (they're called birdies because if you listen to them on a radio they sound, well, like avians). We catalog them to make sure that these chip-based chirpers don't get mistaken for extraterrestrial transmitters.
Once we've done our electronic reconnaissance, it's time for an end-to-end test of the whole ball of wax. Image: The Earth and Moon, taken by the Galileo spacecraft. In previous runs of Project Phoenix, this was accomplished by picking up the transmitter from the Pioneer 10 spacecraft, an extraterrestrial broadcaster that provided us with a handy test signal. Unfortunately, this distant (and thirty-year-old) probe recently went radio silent. However, the SETI League, in New Jersey, regularly bounces a 200-watt signal at 1,296 MHz off the moon. They do this not for the benefit of the lunar radio audience, but for radio amateurs (hams) here on Earth. Two nights ago, we aimed the Arecibo dish moon-ward, and with a bit of fiddling and adjustment soon found a signal that had traveled a half-million miles from New Jersey to Puerto Rico. It was a great and gratifying way to show that the system is truly attuned to the sky. Telescopes were the first instruments to extend human senses, something they now do with unprecedented power. As night falls, we will turn skyward the largest telescope ever built. The street vendor's novelty has grown up.
This PowerPoint begins with a warm-up in which students use a table of values to create the graph of an equation. Next, students are asked to find the slope of the line they have just graphed. This example is used to illustrate how the values in a slope-intercept equation relate directly to the graph of the equation. Building on this, students are formally introduced to slope-intercept form, with an embedded video describing the process of graphing an equation using the slope and intercept. There are multiple examples as well as opportunities for students to practice. The last two slides can be printed as handouts for the students to work through along with the PowerPoint. This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License
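As a quick illustration of the slope-intercept relationship the slides teach (an example of my own, not taken from the resource): in y = 2x + 1 the coefficient 2 is the slope, so the line rises 2 units for every 1 unit moved to the right, and the constant 1 is the y-intercept, so the line crosses the y-axis at (0, 1). A few lines of R make the connection visible:

# Graph y = 2x + 1 and mark its y-intercept
x <- seq(-5, 5, by = 0.5)
plot(x, 2 * x + 1, type = "l",
     main = "y = 2x + 1: slope 2, y-intercept 1", ylab = "y")
points(0, 1, pch = 19)    # the line crosses the y-axis at (0, 1)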
Computer models of mantle convection Mantle convection is the slow, creeping motion of Earth's solid silicate mantle, caused by convection currents that carry heat from the planet's interior to its surface. The surface lithosphere rides atop the asthenosphere, and the two form the components of the upper mantle; beneath the crust, which is composed mainly of granite and basalt, the mantle contains the convection currents that drive plate tectonics. The continents are part of large lithospheric plates that have moved over geological time and continue to move at a rate of centimeters per year, and one long-standing theory holds that convection currents within the mantle drive this plate motion. Mantle convection has accordingly been regarded as one of the most significant driving mechanisms behind plate tectonics, mountain building and earthquakes, and thermal convection in the rocky mantles of Venus and Mars, the layer between crust and core in which hot material rises and cold material sinks, is thought to be important as well. A convection current is the flow that transfers heat within a fluid; convection is one of the three major modes of heat transfer, alongside conduction and radiation, and heat transfer itself is the movement of energy from a warmer object to a cooler one. Natural (or free) convection is caused by density differences in solids, liquids or gases that arise from temperature differences under the influence of gravity. In the mantle, the heat sources are the Earth's core and the radioactive decay of elements deep within the mantle itself: as we move outward from the core the temperature decreases, but it remains high enough to soften most minerals and rocks. Hot material near the base of the mantle becomes less dense and rises toward the crust; there it cools, grows denser, and sinks back down. This process occurs repeatedly, causing the currents to flow constantly; without heat, convection currents would stop once all the material reached the same temperature. Although the mantle is solid rock rather than molten material, over geological time it flows by slow creep, which is how a solid can convect. Many geologists think that plumes of mantle rock rise slowly from the bottom of the mantle toward the top, cool, and eventually sink back through the mantle; current ideas on mantle plumes posit that they are jets, narrow upwelling currents. The same principle of convection also helps drive atmospheric circulation and ocean currents. Mantle convection is the main way heat from Earth's interior is transported to its surface, and this heat escapes principally through mid-ocean ridges, which sit elevated above the rest of the ocean floor; the connected mid-ocean ridge system is in essence an 80,000 km long volcano. Due to the combined action of convection currents and gravity, Earth's plates are in constant motion, moved through several proposed mechanisms that include ridge push and slab pull. The churning of the mantle also affects the chemical composition of the ocean and has a long-term influence on climate. Diagrams of convection cells in the mantle are often highly simplified, but researchers are finding that the real world is much more complex. In geophysics, convection currents are often drawn as closed circular cells, yet they need not have such a shape; indeed, among the clearest evidence for convection in the mantle is the occurrence of earthquakes beneath island arcs, where the sinking limb is planar, not cylindrical. Mantle convection is also quite different from the usual pot-on-a-stove metaphor: a large bowl of several superposed fluids and ice cubes in a microwave oven, programmed to decrease the power with time, would be a better, but still incomplete, analogy. The dominance of mantle convection by large-scale structure is a fundamental constraint in geodynamics, because simple fluid-dynamical models yield a convection planform (the geometry of the convection cells) characterized by much shorter length scales, of the order of the mantle depth, or about 3,000 km, as evidenced by isoviscous reference simulations. Because the mantle cannot be observed directly, numerical simulation is central to the field, going back to pioneering work at the University of Cambridge in the early 1970s. A classic model system is Rayleigh-Bénard thermal convection: a fluid heated from below and cooled from above, with periodic sidewalls and impermeable horizontal plates. In snapshots from such simulations the visible quantity is the temperature of the fluid, where red and blue indicate hot and cold fluid, respectively; three-dimensional Rayleigh-Bénard simulations with lattice Boltzmann methods are also used in process engineering. Numerical models of mantle convection are starting to reproduce many of the essential features of the real Earth: one model of convection over the last 250 million years was calculated in spherical axisymmetric geometry and includes prescribed surface velocities and a dense basal layer, and simulations of deep-Earth convection show blue colours in the interior outlining sinking subducted slabs and red colours outlining rising plumes. The simulation of convection in the Earth's mantle is complicated by a host of problems related to the mathematical structure of the equations as well as the disparity of the length scales implied by the sizes of the physical coefficients in the Earth. In the governing equations, the parameters k, γ and g are the thermal conductivity, the heat sources and the gravity vector, respectively. Most current global convection codes are based on finite elements, and these advanced models require efficient software frameworks that allow for high spatial resolutions and combine sophisticated numerical algorithms with excellent parallel efficiency. Several such frameworks are in use: - Energy2D (Xie, 2012) solves the dynamic Fourier heat-transfer equations interactively, for example for a "convective concrete" case; it runs quickly on most computers and eliminates the switches among preprocessors, solvers and postprocessors typically needed to perform computational fluid dynamics simulations. It is a relatively new program and not yet widely used as a building-performance simulation tool, but its author's stated goal is to make it a powerful simulation tool for a wide audience, and its first Earth-science simulation models convection currents. - Rhea, a new-generation parallel adaptive-mesh mantle convection code, builds its solvers on a collection of new libraries for parallel dynamic adaptive mesh refinement (Burstedde et al.). Rhea targets large-scale mantle convection simulations on parallel computers and has been developed with a strong focus on computational efficiency and parallel scalability of both mesh handling and numerical solvers; the University of Texas at Austin provides a snapshot and brief description of a global adaptive-mesh-refinement convection simulation. - ASPECT relies on numerical software packages such as deal.II. The paper "High accuracy mantle convection simulation through modern numerical methods II: realistic models and problems" (Timo Heister, Juliane Dannberg, Rene Gassmöller and Wolfgang Bangerth; Geophysical Journal International 210(2), 2017) summarizes the current state of the art in numerical methods and computational science for problems of this kind and gives an overview of the methods implemented in ASPECT. The final state of one global mantle convection simulation after 250 Ma of model time was produced with the ASPECT software using adaptive mesh refinement, with between 20 and 100 million unknowns. - Terra Neo pursues large-scale simulation of mantle convection based on a new matrix-free approach (Uli Rüde and colleagues), motivated by the size of the problem: the mantle comprises roughly 10^12 km³, and inversion and uncertainty quantification push the required resolution even higher. Geophysicists at LMU likewise use computationally intensive simulations of this kind. The current availability of thousands of processors at many high-performance computing centers has made it feasible to carry out, in near real time, interactive visualization of 3D mantle convection temperature fields using grid configurations with 10-100 million unknowns. Mantle convection also works well in the classroom. A simple lab, "Modeling Convection Currents", poses the question of how convection in Earth's mantle might affect tectonic plates; the materials are a large plastic container or beaker, food coloring, a small beaker, a small piece of aluminum foil, a rubber band and several pieces of paper. Explain to the students that the activity requires excellent observation skills, and note that in the accompanying software simulation students must select a time period for the run before they can press the run button. Khan Academy provides videos and an article on mantle convection and plate tectonics; the University of Leeds "Dynamic Earth" page introduces the drivers of plate tectonics, with links to illustrations depicting ridge push, slab pull, ridge bathymetry, mantle tomographic data, plumes and other topics; and another page on thermal convection in the mantle includes three QuickTime movies for three different cases of convection.
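None of the research codes above can be reproduced in a few lines, but the conduction part of the physics is easy to sketch. The following toy R script is entirely my own illustration (it is not taken from Energy2D, Rhea, ASPECT or Terra Neo): it diffuses heat on a 2D grid heated from below and cooled from above, the same boundary setup as the Rayleigh-Bénard system described earlier, though it omits the fluid flow that makes true convection.

# Toy 2D heat-conduction sketch: explicit finite differences on a grid
# heated from below and cooled from above. Diffusion only; a real
# convection model would couple this to a velocity field. Side walls
# here are held cold, unlike the periodic sidewalls described above.
nx <- 50; ny <- 50
alpha <- 0.1                  # diffusivity in grid units (illustrative)
temp <- matrix(0, nx, ny)
temp[, 1]  <- 100             # hot bottom boundary ("heated from below")
temp[, ny] <- 0               # cold top boundary ("cooled from above")

for (step in 1:2000) {
  # 5-point Laplacian on the interior, explicit Euler time step
  lap <- temp[1:(nx-2), 2:(ny-1)] + temp[3:nx, 2:(ny-1)] +
         temp[2:(nx-1), 1:(ny-2)] + temp[2:(nx-1), 3:ny] -
         4 * temp[2:(nx-1), 2:(ny-1)]
  temp[2:(nx-1), 2:(ny-1)] <- temp[2:(nx-1), 2:(ny-1)] + alpha * lap
}

# Red = hot, pale yellow = cold, loosely echoing the snapshots above
image(temp, col = rev(heat.colors(64)),
      main = "Conduction-only temperature field")

With alpha at 0.1 the explicit scheme stays stable (it needs alpha <= 0.25 on a unit grid), and after 2,000 steps the interior relaxes toward a smooth hot-to-cold gradient; what the research codes add is the buoyancy-driven flow that bends such a gradient into rising plumes and sinking slabs.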