Graphic developed by: Steven E. Hall
Webster's dictionary defines a satellite as a man-made object put into orbit around a celestial body, such as the earth or the moon. Satellites serve a wide variety of purposes, from transmitting television signals via communication satellites to providing guidance and tracking for defense satellites. For meteorologists, satellites provide a comprehensive view of the world's weather, observing weather and the environment on a scale not possible by other means.
On April 1, 1960, the nation's first weather satellite, TIROS I, was launched into orbit. Soon after, meteorologists saw the first pictures of a midlatitude cyclone over the northeastern United States. A new era had begun. Since then, many more weather satellites have been launched into orbit and their capabilities have improved significantly. Today, satellites not only observe clouds but also measure non-visible radiation from the earth and atmosphere. This helps us estimate such things as crop and soil conditions, and monitor concentrations of atmospheric ozone and many other global characteristics.
The purpose of this module is to examine Earth observing satellites and their capabilities in greater detail, focusing on two satellite orbital groups in particular: Geostationary Operational Environmental Satellites (GOES) and Polar Orbiting Environmental Satellites (POES). Finally, this module will demonstrate how to interpret visible, infrared, and water vapor channel satellite images.
The navigation menu (left) for this module is called "Satellites" and the menu items are arranged in a recommended sequence, beginning with this introduction. Click on the menu item of interest to go to that particular section. More details about the navigation system or the WW2010 web server in general are available from About This Server.
Andrew Jackson Houston made history on June 2, 1941, when he became the oldest man ever to become a freshman United States senator. When Andrew was born, eighty-six years earlier, his father—Sam Houston—held the Texas U.S. Senate seat that his son would later occupy.
As the son of Sam Houston, Andrew had enjoyed celebrity status throughout Texas in the late nineteenth and early twentieth centuries. He attended West Point, joined a law firm, rose up through the ranks in the Texas National Guard, organized a company of "Rough Riders" for Teddy Roosevelt during the Spanish-American War, and served as a U.S. marshal. From 1924 until 1941, he superintended the San Jacinto Battlefield where his father in 1836 had won his greatest military victory.
In the spring of 1941, on the 105th anniversary of the Battle of San Jacinto, the governor of Texas visited Houston at his ranch. After a few moments of casual conversation, Governor W. Lee O'Daniel told Houston that he would like to appoint him to a U.S. Senate seat that had recently fallen vacant on the death of an incumbent.
O'Daniel made the offer as part of his strategy to win the seat for himself in an upcoming special election. Rather than choosing an active political figure, who might benefit from the visibility of an interim appointment in a primary election that would eventually attract twenty-six candidates, O'Daniel chose a non-threatening political icon to keep the seat warm until the election.
Defying the wishes of his family and physician, the frail Houston agreed and made the arduous journey by rail to Washington. On June 2, 1941, in a Senate chamber crowded with those wishing to glimpse a bit of history in the making, Houston took his oath from Vice President Henry Wallace. Deteriorating health, however, limited his actual participation in Senate proceedings to four days. He attended one committee meeting and introduced two bills, both related to preserving the memory of his father's contributions. Then he entered Johns Hopkins University Hospital, where, on June 26, he died from complications of surgery.
Thus ended the longest father-son life span in Senate history. The father was born in the first administration of President George Washington; the son died in the third term of Franklin Roosevelt.
Two days later, Governor O'Daniel easily won election to the balance of the vacant term.
Right: A false-color collage of Mars showing four hemispheric views at 90 degree intervals. Colors correspond to elevations measured by the Mars Orbiter Laser Altimeter on the Mars Global Surveyor spacecraft. Red and white colors denote high elevations; blue denotes low.
Generated by the Mars Orbiter Laser Altimeter (MOLA), an instrument aboard NASA's Mars Global Surveyor, the high-resolution map represents 27 million elevation measurements gathered in 1998 and 1999. The data were assembled into a global grid with each point spaced 37 miles (60 kilometers) apart at the equator, and less elsewhere. Each elevation point is known with an accuracy of 42 feet (13 meters) in general, with large areas of the flat northern hemisphere known to better than six feet (two meters).
"The full range of topography on Mars is about 19 miles (30 kilometers), one and a half times the range of elevations found on Earth," noted Dr. David Smith of NASA's Goddard Space Flight Center, Greenbelt, MD, the principal investigator for MOLA and lead author of a study to be published in the May 28, 1999, issue of Science.
"The most curious aspect of the topographic map is the striking difference between the planet's low, smooth Northern Hemisphere and the heavily cratered Southern Hemisphere," which sits, on average, about three miles (five kilometers) higher than the north, Smith added. The MOLA data show that the Northern Hemisphere depression is distinctly not circular, and suggest that it was shaped by internal geologic processes during the earliest stages of martian evolution.
The massive Hellas impact basin in the Southern Hemisphere is another striking feature of the map. Nearly six miles (nine kilometers) deep and 1,300 miles (2,100 kilometers) across, the basin is surrounded by a ring of material that rises 1.25 miles (about two kilometers) above the surroundings and stretches out to 2,500 miles (4,000 kilometers) from the basin center.
This ring of material, likely thrown out of the basin during the impact of an asteroid, has a volume equivalent to a two-mile (3.5-kilometer) thick layer spread over the continental United States, and it contributes significantly to the high topography in the Southern Hemisphere.
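The volume comparison above can be roughly sanity-checked with back-of-the-envelope arithmetic. The sketch below uses figures from the text (ring extending 2,500 miles/4,000 km from the basin center, basin 2,100 km across) plus two values supplied here as assumptions: the contiguous-US area of about 8.1 million km², and an illustrative mean ring thickness of 0.6 km (the text gives only the roughly 2-km peak height of the rim).

```python
import math

# Figures from the text; avg_thickness_km is an illustrative assumption.
ring_outer_km = 4000          # ring extends this far from the basin center
basin_radius_km = 2100 / 2    # basin is 2,100 km across
avg_thickness_km = 0.6        # assumed mean thickness (rim peaks at ~2 km)
conus_area_km2 = 8.1e6        # approximate area of the contiguous US

# Treat the ejecta ring as a flat annulus of uniform thickness.
annulus_area_km2 = math.pi * (ring_outer_km**2 - basin_radius_km**2)
ring_volume_km3 = annulus_area_km2 * avg_thickness_km

# Thickness of a layer with that volume spread over the contiguous US.
layer_km = ring_volume_km3 / conus_area_km2
print(round(layer_km, 1))  # ~3.5 km, consistent with the text's figure
```

Under these assumptions the annulus holds roughly 2.8 × 10⁷ km³ of material, matching the article's "3.5-kilometer thick layer spread over the continental United States" to within a few percent.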
The difference in elevation between the hemispheres results in a slope from the South Pole to North Pole that was the major influence on the global-scale flow of water early in martian history. Scientific models of watersheds using the new elevation map show that the Northern Hemisphere lowlands would have drained three-quarters of the martian surface.
On a more regional scale, the new data show that the eastern part of the vast Valles Marineris canyon slopes away from nearby outflow channels, with part of it lying a half-mile (about one kilometer) below the level of the outflow channels.
"While water flowed south to north in general, the data clearly reveal the localized areas where water may have once formed ponds," explained Dr. Maria Zuber of the Massachusetts Institute of Technology, Cambridge, MA, and Goddard.
Left: This dramatic three-dimensional visualization of Mars' north pole is based on elevation measurements made by an orbiting laser. During the spring and summer of 1998, the Mars Orbiter Laser Altimeter (MOLA) flashed laser pulses toward the martian surface from the Global Surveyor spacecraft and recorded the time it took to detect the reflection. These timing data have now been translated into a detailed topographic map of Mars' north polar terrain.
The upper limit on the present amount of water on the martian surface is 800,000 to 1.2 million cubic miles (3.2 to 4.7 million cubic kilometers), or about 1.5 times the amount of ice covering Greenland. If both caps are composed completely of water, the combined volumes are equivalent to a global layer 66 to 100 feet (22 to 33 meters) deep, about one-third the minimum volume of a proposed ancient ocean on Mars.
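The quoted layer depths follow directly from the cap volumes and the planet's surface area. A quick check (Mars' mean radius of roughly 3,389.5 km is not stated in the text and is supplied here as an assumption):

```python
import math

# Mars' mean radius is an assumed value; the volumes come from the text.
mars_radius_km = 3389.5
surface_area_km2 = 4 * math.pi * mars_radius_km**2   # ~1.44e8 km^2

low_volume_km3, high_volume_km3 = 3.2e6, 4.7e6       # polar water estimate

# Spread each volume evenly over the whole planet; convert km to m.
low_depth_m = low_volume_km3 / surface_area_km2 * 1000
high_depth_m = high_volume_km3 / surface_area_km2 * 1000

print(round(low_depth_m), round(high_depth_m))  # 22 33
```

The result reproduces the article's "66 to 100 feet (22 to 33 meters)" global layer.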
During the ongoing Mars Global Surveyor mission, the MOLA instrument is collecting about 900,000 measurements of elevation every day. These data will further improve the global model, help engineers assess the area where NASA's Mars Polar Lander mission will set down on Dec. 3, and aid the selection of future landing sites. MOLA was designed and built by the Laser Remote Sensing Branch of the Laboratory for Terrestrial Physics at Goddard. The Mars Global Surveyor mission is managed for NASA's Office of Space Science, Washington, DC, by the Jet Propulsion Laboratory, Pasadena, CA, a division of the California Institute of Technology.
Above: A flat map generated by the Mars Orbiter Laser Altimeter (MOLA), an instrument aboard NASA's Mars Global Surveyor, representing 27 million elevation measurements gathered in 1998 and 1999. MOLA topographic images may be viewed at the following web address: http://pao.gsfc.nasa.gov/gsfc/spacesci/pictures/mola/mars3d.htm
More details about the MOLA instrument and science investigation can be found at: http://ltpwww.gsfc.nasa.gov/tharsis/mola.html.
Do you enjoy going camping? Reading a good book? That ice-cream you recently had? Whether you appreciate it or not, you cannot deny the important role our earth’s natural habitats have in our daily lives.
We depend on terrestrial ecosystems and forests as an essential source of materials, food, product ingredients and livelihoods. Sustainable Development Goal 15, 'Life on Land', is all about protecting, restoring and promoting the sustainable use of all terrestrial ecosystems so that we can have a better future.
Why SDG 15 is important
The world lost over 12 million hectares of forest every year between 2000 and 2010. Expansion of commercial agriculture, especially large-scale farming, has been the main driver of this loss of forest cover. And damaging the environment, particularly forests, comes at great cost.
Here is an illustration of the chain effect:
- When forests are lost to deforestation, biodiversity is lost with them, leading to lower crop productivity and value.
- When forest cover is affected, climate change effects increase – deforestation accounts for nearly 15% of worldwide emissions – which enhances desertification, a problem that already negatively affects over 1 billion people around the world.
SDG 15 Key targets
Reflecting the goal's importance, SDG 15 has some ambitious targets, including:
- Ensure the sustainable use, restoration and conservation of terrestrial and inland freshwater ecosystems, particularly forests, mountains, drylands and wetlands, in line with obligations set out in international agreements.
- Promote sustainable management of forests, halt deforestation, substantially increase reforestation and afforestation globally, and restore degraded forests.
- Ensure the effective conservation of mountain ecosystems, including their rich biodiversity, to improve their capacity to provide the benefits essential for sustainable development.
- Fight desertification, restore damaged soil and land, including land ravaged by floods, drought and desertification.
- Take urgent action to reduce the degradation of natural habitats, halt biodiversity loss, and protect threatened species from extinction.
- Promote the fair and equitable sharing of benefits arising from the use of genetic resources, and promote appropriate access to those resources, as internationally agreed.
- Take swift action to put an end to poaching and subsequent trafficking of all protected species and also address the supply and demand of prohibited wildlife products.
- Introduce measures to prevent the introduction of invasive alien species in land and water ecosystems, significantly reduce their impact, and control or eradicate the priority species.
- Integrate biodiversity and ecosystem values into local and national planning, poverty reduction strategies, and development processes.
- Significantly increase and mobilise financial resources to ensure the sustainable use and conservation of ecosystems and biodiversity.
- Mobilise significant resources from all sources and at all levels to finance sustainable forest management, and offer sufficient incentives to developing nations to adopt such management, particularly for reforestation and conservation.
- Improve global support for efforts to fight trafficking and poaching of protected species, including by enhancing the capacity of affected local communities to pursue sustainable livelihood opportunities.
So far, we have only managed to safeguard about 14% of coastal marine and terrestrial areas, but that is not enough. The main challenge of this particular SDG is to enhance our efforts and to take conservation more seriously.
Impact of SDG 15 on businesses
One place where businesses can begin to work on the targets mentioned above is the supply chain. Simply ask the following questions with regard to your main source of materials:
- Where do you get your materials, whether you are a general consumer or a retailer?
- How do your suppliers source the materials?
- What can you say about your suppliers and their way of conducting business?
By answering these questions, you will gain a better view of the supply chain, revealing where any negative environmental effects lie and what steps are needed to address them.
An example of a company that exemplifies this is Unilever, which strives to source all of its palm oil sustainably. This common ingredient is present in a wide range of products, from shampoo and soap to margarine and ice-cream, and is infamous for its association with deforestation.
Unilever, one of the biggest consumer goods companies globally, achieved its target of sourcing only sustainable palm oil and is now seeking full traceability to certified sustainable sources. This means any prospective Unilever supplier must ensure that its environmental practices are up to standard. Most importantly, suppliers should realise that Unilever's efforts are just one part of a broader push by organisations to contribute to the Sustainable Development Goals.
Companies across the world can also take a united approach to changing industry practices and standards. In 2010, the Consumer Goods Forum (CGF) agreed a resolution against deforestation, aiming to achieve zero net deforestation by 2020. In line with this goal, the CGF released the first ever sourcing guidelines for palm oil to help retail and consumer goods companies design green policies and meet the zero deforestation target.
Businesses can even do the one thing they are good at and place a monetary value on our environment. Leading the industry in this regard is Puma, whose Environmental Profit and Loss Account seeks to measure the environmental effects of its supply chain and operations in financial terms.
Puma’s parent organisation, Kering, has since established a broad EP&L covering the whole group of companies. Their commitment to green business is visible in their open-source methodology, which allows more businesses to trace their carbon footprint easily.
Conserving the environment is an important task for all
It should not go unnoticed that meeting the targets of this particular SDG will help address others. For instance, deforestation reduces biodiversity and access to clean water (SDG 6), and in developing nations it may mean fewer opportunities for indigenous people, women, and rural communities (SDGs 5 and 8).
Thus, protecting forests, preventing desertification and conserving biodiversity are crucial goals. It is no longer enough for businesses just to think about the environmental effects of their own operations. Businesses must act, in a unified manner that recognises the pressure points and connects across supply chains and industries, to sustain natural habitats and effect widespread positive change.
Algebraic Properties and Literal Equations
Lesson 5 of 8
Objective: SWBAT solve literal equations by applying Algebraic Properties to geometric formulas.
I intend for today's Warm Up to take about 10 minutes for the students to complete and for me to review with the class. I allow students about three minutes to work on their own. Their task is to write down all of the algebraic properties they can name, then to define a literal equation. After giving them time to write, I take responses from the students and write the list of algebraic properties on the board. We then compare the responses to a sheet from the Virginia Department of Education that lists all of the algebraic properties. It is found at the following website:
As I review each property, I provide examples to demonstrate the properties. When we get to the definition of a literal equation, we discuss how this name applies to a formula in which all of the variables represent real numbers. Therefore, I say, "real number properties, such as these algebraic properties, apply to operations on literal equations as well as one-variable equations."
Teacher's Note: While reviewing the properties, I stress how all of the operations can be completed using addition or multiplication. I focus particular attention on the multiplicative inverse, taking an opportunity to re-teach this when reviewing problems that give my students difficulty. I demonstrate my approach in the video below.
Whole Class Activity
After reviewing the Warm Up, I hand each table a laminated copy of a PARCC High School Reference Sheet that I have created for this activity. My students are scheduled to take the PARCC Exam in Spring 2015. Based on the current schedule, the assessment will be given three-fourths of the way through the curriculum and again at the end of the year. I found the Reference Sheet at the following website:
The main objective of this activity is for my students to be able to solve one of the geometric formulas on the PARCC reference sheet for an indicated variable. I also want my students to be able to recognize different forms when solving equations. I have created a set of geometric formulas as individual cut-outs from the reference sheet. I place them in a cup for a random draw.
To start the activity, I draw a formula out of the cup. I ask the students to solve the formula for a specific variable (that I choose) on their individual white boards. As a student displays his/her work, I ask him/her to identify when an algebraic property is being used to solve the literal equation for the variable. I will probably probe for an explanation of why it works in this particular situation. As the class listens to explanations, I ask my students to write down different approaches, to help them recognize that flexibility can make things easier (or harder) based on the choice of method.
For example, if I draw the formula for the Volume of a Sphere first, I will ask my students to solve for r. Here are two examples of student work:
When comparing the work of these two students, I will point out that in Example 2 the student simplifies the formula further. I'll say, "Of course, that does not mean that Example 1's work is incorrect." Then we will discuss why both formulas are correct and equivalent.
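The whiteboard work on this example can also be mirrored symbolically. Here is a sketch using Python's sympy library (the lesson itself is done by hand; sympy is used here only to verify that two differently written student answers are equivalent):

```python
import sympy as sp

V, r = sp.symbols("V r", positive=True)

# The formula for the volume of a sphere, as a literal equation.
sphere = sp.Eq(V, sp.Rational(4, 3) * sp.pi * r**3)

# Solving for r applies the multiplicative inverse (divide by 4*pi/3)
# and then the cube root.
r_solved = sp.solve(sphere, r)[0]

# Two hand-written forms of the answer; cubing both shows they are
# equivalent even though they look different on the board.
form_1 = (V / (sp.Rational(4, 3) * sp.pi)) ** sp.Rational(1, 3)
form_2 = (3 * V / (4 * sp.pi)) ** sp.Rational(1, 3)

print(sp.simplify(form_1**3 - form_2**3) == 0)   # True: the forms agree
print(sp.simplify(r_solved**3 - form_2**3) == 0) # True: sympy agrees too
```

This mirrors the classroom point: simplifying further (Example 2) changes the appearance of the formula, not its value.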
I use today's Exit Slip as a formative assessment to check for student understanding of literal equations and how to apply the algebraic properties. I have used names for variables, rather than letters, to reinforce the theme that the properties apply to real numbers and variables that represent them. I plan to distribute the Exit Slip with about 10 minutes remaining in class.
The work on the Exit Slip should show the students' ability to solve the density formula for Mass, and to use that formula to solve an application problem. When reviewing the Exit Slip with students, I will discuss how the requested calculations can be found by solving for Mass first and then substituting. However, I will also discuss how the substitutions could be made first, and the resulting equation then solved for Mass. If time allows, we may discuss when it makes sense to use one approach or the other.
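The two solution orders described above can be sketched with sympy, using word-named variables to echo the Exit Slip's theme. The density of 2.5 and volume of 4 are made-up values for illustration; the actual Exit Slip numbers are not given here.

```python
import sympy as sp

Density, Mass, Volume = sp.symbols("Density Mass Volume", positive=True)

# The density formula written with word-variables, as on the Exit Slip.
density_eq = sp.Eq(Density, Mass / Volume)

# Approach 1: solve the literal equation for Mass first, then substitute.
mass_formula = sp.solve(density_eq, Mass)[0]          # Density*Volume
mass_1 = mass_formula.subs({Density: sp.Rational(5, 2), Volume: 4})

# Approach 2: substitute the given values first, then solve for Mass.
mass_2 = sp.solve(density_eq.subs({Density: sp.Rational(5, 2), Volume: 4}),
                  Mass)[0]

print(mass_1, mass_2)  # both 10: the order of steps does not change the answer
```

Either order lands on the same Mass, which is exactly the flexibility point the lesson closes on.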
The Beginnings of the Cold War 1945-1949
THE YALTA CONFERENCE FEB 1945
-February 1945, the Allied leaders (Stalin of the USSR, Roosevelt of the USA and Churchill of Britain) met in Yalta in Ukraine to plan what was going to happen to Europe after Germany's defeat in the Second World War.
-This is what the Big Three decided on:
- Stalin (USSR) agreed to enter the war against Japan after Germany surrendered.
- The Big Three agreed that Germany would be divided into four zones – American, French, British and Soviet. The German capital, Berlin, would also be divided into four zones.
- Countries that were liberated from the German army would be allowed to hold free elections.
- The Big Three agreed to join the United Nations Organisation, aiming to keep the peace.
- The Big Three agreed that eastern Europe would be under 'a Soviet sphere of influence'.
-What was going on behind the scenes at Yalta?
- Although the Big Three appeared to get on well at Yalta, in reality they mistrusted each other and disliked each other's policies.
- Stalin viewed Roosevelt as a 'trickster', knowing that he did not support or approve of communism.
- Stalin also viewed Churchill as an even bigger 'trickster' than Roosevelt, thinking Churchill to be 'dangerous'. Churchill does not support or approve of communism either.
- Churchill thought communism to be 'dangerous' and a threat to commercialism and the West. He mistrusted the Russians.
- Churchill also thought Roosevelt to be too 'pro-Russian'
THE POTSDAM CONFERENCE JULY-AUGUST 1945
-Three months after the Yalta Conference, Hitler had committed suicide and the war in Europe was over. A second conference was arranged at Potsdam, near Berlin.
-The changes that had occurred in the five months between the two conferences were:
- Stalin's armies now occupied most of eastern Europe, including the Baltic States, Finland, Poland, Czechoslovakia, Hungary, Bulgaria and Romania. Stalin had set up a communist government in Poland, against the wishes of the majority of the Poles, Britain and the USA. Stalin claimed that his control of eastern Europe was a defensive measure against possible future attack.
- The USA also had a new president, Harry Truman. Truman was different from Roosevelt in the sense that he was more suspicious of Stalin and communism.
- The USA had successfully tested an atomic bomb, which indirectly threatened the USSR as now the USA had an advantage over them.
Halfway through the Potsdam conference, Churchill was replaced by a new prime minister, Clement Attlee. The conference became dominated by rivalry between Stalin and Truman.
-Disagreements between Stalin and Truman at Potsdam:
- Stalin wanted to cripple Germany in order to protect the USSR from future threats. Truman did not want to repeat the mistakes of the Treaty of Versailles after the First World War.
- Stalin wanted compensation from Germany in the form of reparations for the millions of Russians killed in the war and the destruction in Russia. Truman resisted this demand because he did not want to repeat the mistakes of the First World War.
- Although it was agreed that…
This website is dedicated to research and discussion of the Old Testament of the Bible.
Navigate the Old Testament books below for direct links to the content, or read the history of the Old Testament below.
The Old Testament is the Christian term for the religious writings of ancient Israel, which are sacred to both Jews and Christians. The number of these writings varies between traditions: Protestants accept only 39 books, while the Catholic, Orthodox, Coptic and Ethiopian churches recognize a much larger collection.
The books can be divided into several groups: the Pentateuch, which tells how God chose Israel to be his people; the history books, which tell the story of Israel from the invasion of Canaan to the exile in Babylon; the poetic and 'wisdom' books, which deal in various forms with questions of good and evil in the world; and the books of the prophets, which warn of the consequences of turning away from God. For the Jews who were their original authors and readers, these books tell the unique story of Israel's relationship with God; but the missionary character of Christianity led Christians, from the beginning of the faith, to see the Old Testament as a preparation for the New Testament.
Composition of the Old Testament (Hebrew Bible)
The first five books – Genesis, Exodus, Leviticus, Numbers and Deuteronomy – make up the Pentateuch, the story of Israel from its origins to the death of Moses. Few scholars today doubt that it reached its present form in the Persian period (538-332 BC), and that its authors were the elite of exile returnees who then controlled the Temple. The books of Joshua, Judges, Samuel and Kings follow, forming a history of Israel from the conquest of Canaan to the fall of Jerusalem. There is a broad consensus among scholars that these form a single work (the so-called 'Deuteronomistic History') with its origins in the Babylonian exile of the 6th century BC, and that the two books of Chronicles, which cover much the same material, date from the 4th century BC.
History makes up about half the contents of the Old Testament. Of the rest, the books of the prophets – Isaiah, Jeremiah, Ezekiel, Daniel and the twelve 'minor prophets' – were written between the 8th and 6th centuries BC, with the exceptions of Jonah and Daniel, which are much later; the 'wisdom' books – Job, Proverbs and similar works – date from between the 5th century BC and the 2nd or 1st, as do many of the Psalms.
God is consistently depicted as the one who created the world and directs its history. He is not, however, consistently presented as the only god who exists – full monotheism apparently developed only around the time of the Babylonian exile in the 6th century BC. Nevertheless, he is always depicted as the only God whom Israel is to worship, and a large part of the historical portion of the Old Testament concerns Israel's ongoing struggle, between God and the gods of Canaan, for the loyalty of the Israelites. Nonetheless, both Jews and Christians have always interpreted the Bible as affirming the unity of God.
The Old Testament also emphasizes the special relationship between God and his chosen people, Israel. This relationship is expressed in the biblical covenant (contract) transmitted through Moses. The law codes in books such as Exodus and, especially, Deuteronomy are the terms of the contract on Israel's side; on God's side, he vows to be Israel's special protector and supporter.
Other themes of the Old Testament include salvation, redemption, judgment, obedience and disobedience, and faith and loyalty. There is a strong emphasis on ethics and on ritual purity, both of which God demands, although some of the prophets and wisdom writers seem to question this, arguing that God demands social justice above purity, and perhaps does not care about purity at all. The Old Testament's moral code enjoins fairness, intervention on behalf of the vulnerable, and the duty of those in power to administer justice righteously. It forbids murder, bribery and corruption, fraudulent trading, and many sexual misdemeanors. All morality is traced back to God, who is the source of all goodness.
The question of evil plays a large part in the Old Testament. The problem its authors faced was that a good God must have had just cause for bringing disaster – meaning in particular, but not only, the Babylonian exile – upon his people. The theme is played out, with many variations, in books as different as the histories of Kings and Chronicles, the prophets Ezekiel and Jeremiah, and wisdom books like Job and Ecclesiastes.
The Greek, Latin and Protestant Old Testaments
The process by which scriptures became canons and Bibles was a long one, and its complexities account for the many different Old Testaments that exist today. By about the 5th century BC, Jews regarded the five books of the Torah (the books of Moses) as authoritative; by the 2nd century BC, the Prophets had a similar status, although without quite the same standing as the Torah; beyond that, the Jewish scriptures remained fluid, with different groups seeing authority in different books.
The Scriptures were translated into Greek between about 280 and 130 BC. This Greek collection, called the Septuagint, includes a number of books not found in the modern Hebrew Bible (1-2 Esdras, Judith, Tobit, 1-4 Maccabees, the Wisdom of Solomon, Sirach, Baruch, and numerous additions to other books), arranged loosely by chronology and literary type. It continues in use to this day as the Old Testament of the Orthodox Church.
In 331, Constantine I commissioned Eusebius to deliver fifty Bibles for the Church of Constantinople. Athanasius recorded Alexandrian scribes around 340 preparing Bibles for Constans. Little else is known, although there is much speculation. For example, it has been speculated that this may have provided motivation for canon lists, and that Codex Vaticanus and Codex Sinaiticus are examples of these Bibles. Together with the Peshitta and Codex Alexandrinus, these are the earliest extant Christian Bibles. There is no evidence among the canons of the First Council of Nicaea of any determination of the canon; however, Jerome (347-420), in his Prologue to Judith, claims that the Book of Judith was "found by the Nicene Council to have been counted among the number of the Sacred Scriptures."
In Western Christianity – Christianity in the western half of the Roman Empire – Latin had displaced Greek as the common language of the early Christians, and in 382 CE Pope Damasus I commissioned Jerome, the leading scholar of the day, to produce an updated Latin Bible to replace the Vetus Latina. Sometime after the Septuagint was produced (exactly when is disputed), the Jewish rabbis (religious scholars and teachers) defined the canon of the Hebrew Bible, a much shorter canon of 24 books, and Jerome based his translation on this Hebrew text rather than on the Greek Old Testament, relying on the "Hebraica veritas," or "Hebrew truth." His Vulgate (i.e., common-language) Old Testament became the standard Bible used in the Western church, specifically as the Sixto-Clementine Vulgate, while the churches of the East continued, and continue, to use the Septuagint.
Jerome wanted to drop all the books that do not appear in the Hebrew Bible, but St. Augustine, a bishop and another great scholar of the day, opposed him and won the argument, notably at the Council of Carthage on 28 August 397. Sixteenth-century Protestant reformers raised the question again and sided with Jerome, but only for their own churches: Protestant Bibles now contain only the books that appear in the Jewish Bible, though ordered as in the Greek Bible. The Catholic Church, largely in reaction to this attack on tradition, formally affirmed its canon at the Council of Trent, following the Augustinian councils of Carthage (and the Council of Rome); it includes most, but not all, of the Septuagint (3 Ezra and 3 and 4 Maccabees are excluded). The Anglicans, after the English Civil War, adopted a compromise position, restoring the Thirty-Nine Articles and keeping the additional books that the Westminster Confession of Faith had excluded, but only for private study and for reading in churches, while the Lutherans kept them for private study, gathered in an appendix as the biblical Apocrypha.
Although the Hebrew, Greek and Latin versions of the Hebrew Bible are the best-known Old Testaments, there were others. At the same time the Septuagint was being produced, translations were made into Aramaic, the language of Jews living in Palestine and the Near East, and likely the language of Jesus: these are called the Aramaic Targums, from a word meaning "translation," and they were used to help Jewish congregations understand their scriptures. For Aramaic-speaking Christians there was a Syriac translation of the Hebrew Bible called the Peshitta, as well as versions in Coptic (the everyday language of Egypt in the first Christian centuries, descended from ancient Egyptian), Ethiopic (for use in the Ethiopian church, one of the oldest Christian churches), Armenian (Armenia was the first kingdom to adopt Christianity as its official religion), and Arabic.
Scientists at Ohio State University used mass spectrometry and a series of experiments to discover how cells make the amino acid pyrrolysine, a process that until now had been unknown.
They confirmed that pyrrolysine is made from enzymatic reactions with two lysine molecules – a surprising finding, given that some portions of its structure suggested to researchers that it might have more complex origins.
The research is published in the March 31 issue of the journal Nature.
Pyrrolysine is rare and so far is known to exist in about a dozen organisms. But its discovery in 2002 as a genetically encoded amino acid in methane-producing microbes raised new questions about the evolution of the genetic code. Pyrrolysine is among 22 amino acids that are used to create proteins from the information stored in genes. Proteins are essential to all life and perform most of the work inside cells.
This information about how it is produced – its biosynthetic pathway – offers a more complete understanding of how amino acids are made. And because of its rarity, this molecule is emerging as a handy tool for manipulating proteins in biomedical research. With its production mechanism identified, scientists can use that information to devise ways to mass-produce similar or identical synthetic molecules for a variety of research purposes.
The Ohio State scientists had a genuine “ah-ha” moment over the course of the study. As part of their experimentation, they combined lysine with one other amino acid and some enzymes and expected this to produce what is called an intermediate – essentially, a piece of an amino acid that is generated in the biosynthesis process.
They had labeled the lysine so it would appear heavier than normal when observed using mass spectrometry. But one signal produced by the instrumentation had a much different mass than could be attributed to the intermediate.
“We weren’t seeing this weird molecule made from two different amino acids that we were expecting. We were seeing the regular pyrrolysine molecule and all of it was coming from lysine. Every bit of it,” said Joseph Krzycki, professor of microbiology at Ohio State and senior author of the study. “That was the only way we saw pyrrolysine, and all of it was labeled with lysine. That’s the basic observation here. And it’s a real surprise.”
The finding that lysine was the only precursor was a surprise because the production process ended up being so simple – even though arriving at it was not a simple task, partly because some of the chemical reactions had never been observed before.
“What amazes me about the entire chemical pathway is that you need only three enzymes and two molecules of the same thing that together make one complete molecule that looks completely different from what you started with,” said Marsha Gaston, first author of the paper and a doctoral student in microbiology. “You have one portion that looks exactly like the precursor, but then you have another portion that enzymes are able to re-arrange in a way that is completely unique and never seen before.”
Mass spectrometry, an analytic technique that enables precision in determining the mass of particles, ended up being critical to the discoveries, Krzycki noted. Liwen Zhang and Kari Green-Church of Ohio State’s Campus Chemical Instrument Center/Mass Spectrometry and Proteomics Facility are additional co-authors of the study.
Krzycki led one of the two teams of Ohio State researchers that discovered pyrrolysine in 2002. The teams have since synthesized the amino acid and shown how bacteria incorporate it into proteins.
“That left some big questions unanswered: How do you make pyrrolysine? Where does it come from? What metabolic pathways does it come off of? Because it’s got to be generated within the cell that uses it,” Krzycki said.
The chemical shape of pyrrolysine offered some clues. Its carbon skeleton resembles that of lysine. But it also has an unusual ring on one end, and a methyl group attached to it, which for researchers raised questions about its origin.
The researchers also knew from their previous work that three genes are required to generate the instructions for the assembly of proteins that contain pyrrolysine – pylB, pylC and pylD. So the enzymes produced by those three genes had to have a role in creation of the amino acid. Finally, previous attempts by other researchers to define its biosynthesis suggested that another amino acid, D-ornithine, was involved in pyrrolysine’s production.
So Krzycki and his colleagues set out to test that theory. Conducting all of their experiments in a strain of E. coli bacteria, commonly used to test biological functions, they combined lysine and D-ornithine molecules.
They found that this combination did not make pyrrolysine, but rather a pyrrolysine-like molecule missing a key part, and that molecule was never converted into pyrrolysine. It also formed without the involvement of pylB, a gene that cannot be left out of the process that actually makes pyrrolysine.
With the mass spectrometry instead identifying lysine as the only precursor to pyrrolysine, the researchers then used genetics, mass spectrometry of intermediates and deduction to determine the order of enzymatic reactions that converted two lysine molecules into the pyrrolysine amino acid.
They determined that the sequence of events matched the alphabetical order of the three involved enzymes: PylB uses lysine to make a D-ornithine-like intermediate, PylC joins the two lysine molecules together, and that feeds a reaction involving PylD that results in the formation of pyrrolysine. The reactions showed how the ring on pyrrolysine’s end, its major identifying characteristic, is formed.
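As a rough illustration only (the enzyme names come from the study, but the function and the step descriptions are our own paraphrase), the three-enzyme sequence can be sketched as an ordered pipeline:

```python
# Toy sketch of the pyrrolysine biosynthesis order described above.
# Each "enzyme" is just a labeled step; this models the sequence of
# reactions, not the actual chemistry.
PATHWAY = [
    ("PylB", "rearranges one lysine into a D-ornithine-like intermediate"),
    ("PylC", "joins the intermediate to a second lysine"),
    ("PylD", "completes the ring, yielding pyrrolysine"),
]

def biosynthesize(substrates: str) -> str:
    """Record each enzymatic step applied to the starting substrates."""
    product = substrates
    for enzyme, _action in PATHWAY:
        product = f"{enzyme}({product})"
    return product

print(biosynthesize("lysine + lysine"))  # PylD(PylC(PylB(lysine + lysine)))
```

The alphabetical ordering of the steps mirrors the order the researchers deduced.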
“If you splay out the pyrrolysine molecule, you can recognize that in fact it looks a lot like lysine, except that to get to this ring, you have to make the second molecule one carbon unit shorter,” Krzycki said. “The lysine goes through a type of enzymatic reaction called a mutase reaction, where the carbon skeleton is rearranged to make this shorter molecule, which is like D-ornithine, but with one extra carbon now hanging off the chain in a new place. That’s what one of our pyrrolysine biosynthetic enzymes, PylB, is doing.”
Krzycki noted that this finding will add fuel to discussions of how the genetic code evolved. For example, the co-evolutionary theory suggests that amino acids arising from a common precursor have similar codon assignments. Codons are three-letter “words” identifying the bases that DNA uses to specify particular amino acids as building blocks of proteins. Normally, codons signal the start or end of a protein, or a particular amino acid used to construct it.
“For the scientists who are devoted to exploring how the genetic code evolved, our data provides new insights that can feed the various theories for how the code evolved; the co-evolutionary theory is just one such example,” Krzycki said.
The finding that pyrrolysine derives entirely from lysine means that pyrrolysine is part of the aspartic acid family in bacteria and Archaea, a group of single-cell microorganisms that are similar to bacteria in size and shape, but with a different evolutionary history. The microbes known to contain pyrrolysine are in the Archaea domain, and are able to convert a common class of compounds – the methylamines – into methane gas.
This work was supported by grants from the National Institutes of Health and the U.S. Department of Energy.

Contact: Joseph Krzycki, (614) 292-1578; email@example.com
Emily Caldwell | Newswise Science News
|
There is nothing new about climate change. For hundreds of millions of years the Earth’s temperature has been influenced by continental shifts, which have triggered volcanic eruptions among other things. Sometimes these shifts released large volumes of CO2 which heated up the Earth. They also caused young rocks to rise to the surface, which chemically bound CO2. As a result, CO2 was dispelled from the atmosphere in the longer term.
Today, natural phenomena still make a deep impression on the climate. Take, for example, El Niño, which occurs at intervals of three to seven years. When the trade winds ease, the warm water from the Western Pacific (Indonesia, Philippines) moves east and causes a rise in sea temperature in an area west of Peru. This occurrence creates worldwide deviations in cloud patterns, precipitation and temperature.
So, the causes of climate change are many and varied. And the effects on our climate system are complex.
Influence of humans
Humans have been influencing the climate since the start of the Industrial Revolution. Since then, the average world temperature has risen by approximately 0.8 degrees Celsius. In North-West Europe (including the Netherlands) the average temperature has risen by 1.5 degrees. The sea level has risen by around twenty centimetres and most of the glaciers have shrunk dramatically.
Up to 1950 the influence of nature was more important than human influence. After that, the pattern in the average world temperature can only be explained by factoring in the human influence.
Even so, a slight decline in temperature did appear from the mid-1940s to the mid-1970s. It was linked to a dramatic increase in cooling aerosols from the post-war industrialisation in the western world. It was also caused by a mild decline in solar activity and some major volcanic eruptions in the second half of this period.
According to the latest IPCC report, it is more than likely (more than 90 per cent probability) that most of the global warming in recent decades is attributable to the observed increase in greenhouse gases.
CO2 and climate change
The most well-known and the most important greenhouse gas is CO2. The concentration of CO2 in the atmosphere is subject to variation even without human intervention. The carbon cycle causes an exchange of CO2 between the biosphere and the oceans on the one hand and the atmosphere on the other.
Vast amounts of CO2 are also released by the burning of fossil fuels. There is incontrovertible evidence that the CO2 concentration in the air is now higher than it has been for at least 800,000 years (probably even 60 million years). The trend suggests that CO2 emissions will continue to rise globally, although the economic crisis did prevent a rise in 2009. The Netherlands is high on the list of the world's CO2 emitters per head of population.
Besides CO2, methane (CH4), nitrous oxide (N2O), fluorinated gases, ozone (O3) and water vapour are important greenhouse gases. Water vapour plays a unique role: it strengthens the heat-trapping effect caused by other greenhouse gas emissions, because a warmer atmosphere retains more water. The amount of water vapour cannot be artificially increased or decreased.
Aerosols are less well-known than greenhouse gases. Aerosols are dust particles which, in addition to CO2, are released into the atmosphere in large quantities when wood and fossil fuels are burned. Some aerosols have a cooling effect on the climate, others have a warming effect. On balance they have a cooling rather than a warming effect, but no-one can give a clear idea of the magnitude, because we still do not understand how aerosols influence the occurrence and characteristics of clouds.
Natural phenomena, greenhouse gases and aerosols create an imbalance in the incoming and outgoing radiation in the atmosphere. This process is known as radiative forcing. When the Earth heats up, the short-wave radiation from the sun that enters the atmosphere is greater than the long-wave radiation that exits the atmosphere. The temperature changes on Earth will not stop until the radiation balance is restored. Given the immense capacity of oceans to absorb heat, it will take a long time to strike a new balance.
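The radiation-balance argument can be made concrete with the standard zero-dimensional energy-balance model (a textbook sketch, not something from this article): at equilibrium, absorbed short-wave solar radiation, S(1 − α)/4, equals emitted long-wave radiation, σT⁴.

```python
# Zero-dimensional energy balance: absorbed shortwave = emitted longwave.
# Solving S * (1 - albedo) / 4 = sigma * T**4 for the effective temperature T.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant at Earth's orbit, W m^-2
ALBEDO = 0.30      # Earth's planetary albedo (approximate)

def effective_temperature(solar_constant: float, albedo: float) -> float:
    """Equilibrium temperature of a planet with no greenhouse effect."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0  # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

T_eff = effective_temperature(S, ALBEDO)
print(f"{T_eff:.0f} K")  # about 255 K, i.e. -18 C
```

The roughly 33-degree gap between this bare-rock value and the observed surface mean (about 288 K) is the natural greenhouse effect that the extra CO2 and other gases are strengthening.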
The extent of global warming in the future is swathed in uncertainty; first, because we have no idea of how much of an increase to expect in greenhouse gases (depending on economic growth), and secondly, because we do not know exactly how our climate system will respond (climate sensitivity). |
By Sally Greenberg, NCL Executive Director

Today, as we celebrate the life of Martin Luther King, Jr., it’s helpful to look around and see where we are in 2012 in the battle against racism and the poverty that is a direct byproduct of racism. I recently heard an astounding statistic: the United States imprisons more Black men today – often for nonviolent drug offenses – than were enslaved in 1850, before the Emancipation Proclamation.

A historical look back is helpful. NCL’s founding in 1899 dates back to the Progressive Era, a time of historic reforms in America but also a time of incredible backlash against former slaves and freedmen and women. During the Progressive Era, Southern governments imposed a wide range of Jim Crow laws – laws and policies requiring Blacks to use different public facilities, live in different neighborhoods, and go to different schools – often using the rationale that segregation resulted in a more orderly, systematic electoral system and society. Many of the steps that had been taken toward racial equality during Reconstruction were undone. The result was that Blacks were denied access to decent schools, housing, and good jobs that paid a living wage. This series of events precipitated the founding of the NAACP.

The Jim Crow practices of Southern leaders were regrettably given the blessing of the American judicial system, as in the famous U.S. Supreme Court case upholding the principle of racial segregation, Plessy v. Ferguson (1896). Plessy held that as long as Blacks were provided with "separate but equal" facilities, segregated Black and White schools were acceptable. The problem is, they weren’t equal at all; the Black school facilities were inferior.

Black leaders were divided on how best to respond to cases like Plessy. Booker T. Washington urged that Blacks should not actively agitate for equality, but should acquire craft skills, work industriously, and convince Whites of their abilities. W. E. B. Du Bois insisted instead (in The Souls of Black Folk, 1903) that Black people must ceaselessly protest Jim Crow laws, demand education in the highest professions as well as in crafts, and work for complete social integration. The two men didn’t like each other much, and their enmity grew.

Du Bois, who was close to Florence Kelley, NCL’s leader for our first 33 years, was the driving force behind the formation of an organization to fight for the rights of Black Americans. In 1909 the National Association for the Advancement of Colored People (NAACP) was founded to advance these ideals, and Florence Kelley was involved in those early gatherings. The NAACP continues to be a vital and critically important organization today, as does the NCL. Indeed, as I listened this week to financial guru Suze Orman, Harvard professor Cornel West, and media personality Tavis Smiley continue their Poverty Tour of America, I was flanked on either side by friends from the NAACP. As we continue to work together to battle poverty and racism, here are some stark statistics to contemplate:
- the black unemployment rate is twice that of whites.
- the average Black family’s household income fell 3 percent from 2009 to 2010, while white and Latino income fell only 1.7 and 2.3 percent, respectively.
- While poverty rates for all ethnic groups were in the double digits in 2010, the African-American community was faring the worst, by far. More than one in four Black Americans is now living below the poverty line.
- The economic gains made by African-Americans from the end of World War II into the aughts have now been mostly wiped out. Beyond that, the longer people are unemployed and poor, the less likely they are to be able to take advantage of educational opportunities, and the more likely they are to fall into bad habits. |
This lesson plan will explore the wide-ranging debate over American slavery by presenting the lives of its leading opponents and defenders and the views they held about America's "peculiar institution."
Popular sovereignty allowed the settlers of a federal territory to decide the slavery question without interference from Congress. This lesson plan will examine how the Kansas-Nebraska Act of 1854 affected the political balance between free and slave states and explore how its author, Stephen Douglas, promoted its policy of popular sovereignty in an effort to avoid a national crisis over slavery in the federal territories.
This lesson plan will explore Abraham Lincoln's rise to political prominence during the debate over the future of American slavery. Lincoln's anti-slavery politics will be contrasted with the abolitionism of William Lloyd Garrison and Frederick Douglass and the "popular sovereignty" concept of U.S. Senator Stephen A. Douglas. |
How to Find and Observe the Garradd Comet
A comet is one of the most spectacular astronomical objects in the sky, partly because it comes so close to Earth. At its closest, Comet C/2009 P1 (Garradd) is only 1.3 a.u. (194,477,400 kilometers) from Earth. It was discovered by Gordon J. Garradd on August 13, 2009, and never comes closer to the sun than Mars's orbit. Comets usually appear to move quickly across the sky, but Garradd has slowed recently, which makes it easy to observe. It can be seen with a telescope or with binoculars of at least 10x50. Here is a picture of the comet:
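The distance quoted above can be sanity-checked with a one-line conversion (any small difference from the figure in the text comes from how much the astronomical unit was rounded):

```python
# Convert the comet's closest-approach distance from astronomical units
# to kilometers. 1 au is defined as exactly 149,597,870.7 km (IAU 2012).
AU_KM = 149_597_870.7

def au_to_km(au: float) -> float:
    """Return a distance in kilometers given a distance in astronomical units."""
    return au * AU_KM

closest_approach_km = au_to_km(1.3)
print(f"{closest_approach_km:,.0f} km")  # roughly 194.5 million km
```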
Now, to actually find the comet.
Step 1 Gather Your Tools
Here are some tools you need to observe the comet, though some of them are optional.
- Binoculars or a telescope—your binoculars should be at least 10x50 or the comet will appear too fuzzy.
- A tripod—to hold your binoculars.
- A binocular observing attachment for your tripod—this will stabilize the binoculars while observing.
- A chair or stool—useful if you're observing with binoculars; it's tough to sit down while looking through a telescope.
This is what the ideal binocular setup should look like:
Step 2 Find an Observation Spot
You will need a decent observing site, because the Garradd comet is pretty faint, at only sixth magnitude. You want a place that is very far from any big cities and about a mile from any commercial buildings. Also avoid busy streets, because car headlights will ruin your eyes' dark adaptation and keep you from seeing dim objects.
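Sixth magnitude is near the naked-eye limit under dark skies. The magnitude scale is logarithmic (a difference of 5 magnitudes is defined as a factor of exactly 100 in brightness), so a quick calculation shows how faint the comet is compared with a bright star such as Vega:

```python
# Apparent-magnitude scale: a difference of 5 magnitudes is exactly a
# factor of 100 in brightness, so one magnitude step is 100**(1/5) ~ 2.512.
def brightness_ratio(m_faint: float, m_bright: float) -> float:
    """How many times brighter the m_bright object is than the m_faint one."""
    return 100.0 ** ((m_faint - m_bright) / 5.0)

# Vega (magnitude ~0) versus the sixth-magnitude comet:
print(f"{brightness_ratio(6.0, 0.0):.0f}x")  # ~251x brighter
```

This is why a dark site and 10x50 or larger binoculars matter so much for this target.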
Step 3 Search for the Comet
You need to find a few constellations and star-hop your way to Garradd. It is much easier to observe in the evening, when the comet is higher above the horizon, and a planisphere makes the search much easier. Remember that if the moon is too bright, the comet will be very difficult to view. First, you need to locate the Summer Triangle, an asterism shaped almost like a right triangle and made up of the stars Vega, Deneb, and Altair. You can easily see it with the naked eye, because all three stars are of first magnitude or brighter. You will need to locate Vega, which is on the lower left. Here is a diagram:
Down and to the right of Vega is Hercules, which is where Garradd will be around. Here is a diagram of Vega and Hercules:
The diagram shows where Garradd began. It has moved and will continue moving until it is not visible. Here is a very detailed diagram of how the comet will move:
Now you just have to go to the current date on the chart and observe! Try to take pictures and post them to the community corkboard! |
(Image Credit: NASA (assumed), via ITECS Insider)
Aside from government politics, space radiation is one of the biggest threats to humans seeking to leave our home world.
Unless we find a way to protect ourselves, humanity will be able to settle only a few worlds within our star system.
As shown in the video below, scientists are attempting to find innovative ways to counteract radiation’s effects, as failure to do so can result in a few dead astronauts.
Scientists are currently working on ways to deal with radiation via medicine, nano particles and portable magnetic fields, as well as mapping out “safe havens” (i.e. off world caves on the Moon and Mars).
Thus far our closest neighbor, the Moon, has some temporary protection thanks to Earth's magnetic field, although hopefully we can come up with a more permanent solution aside from just settling Jupiter's Callisto and Saturn's Titan.
(Hat Tip: Spaceports) |
Bones are structures made largely of calcium phosphate that form the general framework of the body, known as the skeleton. Despite being very hard and inflexible, they too are living tissue. They can support large loads in compression, but lesser loads in tension and shear. As such, most broken bones result from striking the bone perpendicular to its length, or from stressing it in two different directions at once.
Despite their strength, bones are also very light, because their interior is a honeycomb of fine mineral structures. The inside of most bones contains bone marrow, which produces the cells in blood.
Some diseases, such as osteoporosis, are the result of the loss of bone mass, making bones very thin, brittle and susceptible to breaking.
All the bones in the body have a name, and medical students are required to memorize these as part of their education. Many doctors resort to mnemonics in order to remember the bones in certain fine structures.
Adults have fewer bones than children as many bones, particularly those in the skull, fuse together into a single structure as a person becomes older. |
The U.S. Geological Survey has released new research about water use on the Rio Grande in the United States. The research may help water managers along the Rio Grande make wise decisions about how to best use the flow of a river vital for drinking water, agriculture and aquatic habitat. The studies also show how conditions from the prolonged drought in the West have affected the Rio Grande watershed.
The Rio Grande forms the world’s longest river border between two countries as it flows between Texas and Mexico, where it is known as the Rio Bravo. The river runs through three states in the U.S., beginning in southern Colorado and flowing through New Mexico and Texas before it forms the border with Mexico.
Parts of the Rio Grande are designated as wild and scenic, but most of the river is controlled and passes through several dam and reservoir systems during its 1,896 mile journey to the Gulf of Mexico. The river is managed through a complex system of compacts, treaties, and agreements that determine when and how much water is released along the river’s length.
The amount and timing of water releases have varied in recent years due to drought. Recent USGS research on the middle Rio Grande looked at the effects of those changes on the amount of salts that build up in the Rincon and Mesilla Valleys in Texas and New Mexico. Results showed a decline in the amount of salt carried by the river due to a decrease of releases during the drought.
The two valleys responded differently to the decreased releases. Salt levels in the Rincon Valley declined, whereas salt levels in the Mesilla Valley increased. Salt buildup in the soil and water can affect agriculture, which is an important industry in those valleys.
Successfully managing water use along the river is important to the sustainability of agriculture and of the communities that depend on it. To help with that goal, the USGS has measured water gains and losses along the Rio Grande between Leasburg Dam near Leasburg, New Mexico, and American Dam near El Paso, Texas. American Dam is near where the Rio Grande becomes the border with Mexico.
For the past several years, drought conditions contributed to decreasing flows along this 64-mile stretch, and sections of the river were dry during parts of the year.
Flow in the Rio Grande is affected by how water is used throughout the basin. For instance, the Albuquerque area of New Mexico has two principal sources of water: groundwater from the underlying aquifer system and withdrawals and diversions from the Rio Grande.
From 1960 to 2002, pumping from the aquifer system caused groundwater levels to decline by about 40 feet along the Rio Grande in Albuquerque and by more than 120 feet in parts of the valley away from the river.
As a result, the USGS, in cooperation with the Bureau of Reclamation, conducted a study to understand the exchange of water between the Rio Grande and the aquifer system.
By characterizing the interaction between surface water from the Rio Grande and groundwater from the aquifer system, scientists provide valuable information to help managers make informed decisions about water use.
USGS scientists are also studying how native fish and their aquatic habitats are affected by different streamflow conditions along the river. For example, previous investigations have shown that the decline in Rio Grande silvery minnow may be attributed to modifications of the natural streamflow regime, channel drying, construction of reservoirs and dams, stream channelization, declining water quality, and interactions with nonnative fish.
source: U.S. Department of the Interior, U.S. Geological Survey |
Download this lesson plan about historical evidence that tells us about the Stone Age. The lesson contains a two-page plan and two pupil resource sheets. The plan is designed for Year 3/4 and is part of the popular Stone Age to Iron Age Resource Pack recommended by TES.
This Stone Age lesson addresses the question of how we can know about a period of British history with a lack of written primary sources of evidence. Pupils will be introduced to the concept of making deductions from evidence and they will fill in a table to show what information different artefacts give us about the Stone Age. |
CBT Case examples
Basic Principles of CBT
The basic principle of CBT is that the way we think in specific situations affects how we feel emotionally and physically, and alters our behaviour.
Everyone will have their own, individual response to a particular event. The key to CBT is to identify the most important thoughts, feelings and behaviour that make up these reactions and decide whether these responses are rational and helpful.
CBT helps people to understand their problems as well as offering techniques which enable people to learn to make changes in each of these areas, which leads to an improvement in emotional symptoms and empowers people to live fulfilling lives according to their own values and needs.
Differing reactions to the same event:
Imagine that you have cooked dinner for a friend, who is usually very reliable. An hour after she was due to arrive, there is still no sign and you have received no phone call.
How would you react to this?
Look at the following chart, which shows a variety of possible reactions to the same event:
- Thought: “How dare she do this to me! She is so inconsiderate and rude!” → Behaviour: tell her off or act chilly when she arrives
- Thought: “She probably didn't want to come because she doesn't really like me. I'm such a loser.” → Behaviour: withdraw from people and stop asking them over
- Thought: “What if she's had an accident? She could be seriously hurt.” → Behaviour: phone local hospitals
- Thought: “I expect she's stuck in traffic. At least I have extra time to prepare dinner.” → Behaviour: continue preparing dinner
No individual reaction is right or wrong. However, the way people react to events can trap them in a vicious cycle that worsens their lives. For example, if someone feels depressed and reacts by withdrawing from others, the withdrawal only worsens their mood further. By identifying whether these reactions are helpful or unhelpful in achieving specific life goals, people can make choices about how to respond to different circumstances. |
I am constantly encouraging students to "draw what you see, not what you know." This lesson was an exercise in drawing from observation. We got up close with our pinecones and looked for patterns and repeating shapes. We tried to capture interesting views of the pinecones as we made the smallest details the focal points of our drawings. Third and fourth graders worked carefully in pencil and Sharpie marker before painting their works using value scales with liquid watercolors.
Monday, December 8, 2014
Wednesday, December 3, 2014
Dutch artist Piet Mondrian was well known for his artwork made up of straight black lines and primary colors. He created grids using vertical and horizontal lines.
"I construct lines and color combinations on a flat surface, in order to express general beauty with the utmost awareness. Nature (or, that which I see) inspires me, puts me, as with any painter, in an emotional state so that an urge comes about to make something, but I want to come as close as possible to the truth and abstract everything from that, until I reach the foundation (still just an external foundation!) of things…
I believe it is possible that, through horizontal and vertical lines constructed with awareness, but not with calculation, led by high intuition, and brought to harmony and rhythm, these basic forms of beauty, supplemented if necessary by other direct lines or curves, can become a work of art, as strong as it is true." - Piet Mondrian, (Netherlands, 1872-1944)
First and second grade artists were inspired by Mondrian's artwork. We used black paper strips to create the grids, then painted the rectangles and squares with the primary colors.
This post was written by shopkeeper Sprouting New Beginnings.
As teachers we have discovered that using American Sign Language (ASL) in our classroom is not just for pre-verbal children and children who are deaf or hard of hearing. It is a wonderful strategy to support literacy in an elementary classroom. In our classes we utilize ASL to help children learn to spell, expand their developing vocabulary, and strengthen their reading skills.
ASL supports children's development in literacy and in math by using a kinesthetic approach to learning. ASL is used as a memory aid to increase learning. Our children Say it, See it, and Do it! This supports brain development and multiple intelligences.
Some ways that we use ASL in the classroom are:
- For positive guidance
- Learning our letters and numbers
- Emphasizing vocabulary
- Spelling sight words, color words, seasons, and days of the week
- Classroom management
We also use ASL in music, while reading books, writing, and play. Children love learning ASL! They find that it is fun and easy and we find that it increases their level of engagement, which decreases behavioral challenges.
If you haven’t embraced this gift and passed it on to the children in your life, make it one of your New Year’s resolutions to bring the vocabulary of ASL into your classroom.
Electrons are shared in a covalent bond when each of the participating atoms has roughly the same ability to attract electrons. The more evenly the two atoms are able to pull the participating electrons towards themselves, the more evenly the electrons share their time around each atom.
Covalent bonds are often formed between two identical elements because each element has the same ability to attract covalent electrons. This ability to attract electrons is called electronegativity, and each element is assigned a numerical electronegativity value, most commonly on the dimensionless Pauling scale (the Mulliken scale expresses it in electron volts). The closer two covalently bonded atoms are in electronegativity, the more covalent the bond is. With the exception of identical atoms, all other covalently bonding atoms have different electronegativities, and thus cannot form purely covalent bonds.
As the difference in electronegativity increases, the electrons spend more time around the more electronegative atom, imparting it with a partial negative charge, while the less electronegative element gains a partially positive charge. When the difference in electronegativity is sufficient to allow the more electronegative element to take the electrons of the less electronegative one, an ionic bond is formed. A covalent fraction can be calculated for two dissimilar elements, indicating how covalent or ionic their bond would be.
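To make the idea concrete, here is a small sketch that classifies a bond from the electronegativity difference of its atoms. The cutoff values (roughly 0.4 and 1.8 on the Pauling scale) are textbook conventions that vary somewhat from source to source, and the element table is abbreviated for illustration:

```python
# Pauling electronegativities for a handful of elements (dimensionless).
PAULING_EN = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "Na": 0.93, "Cl": 3.16}

def bond_type(elem_a, elem_b):
    """Classify a bond by electronegativity difference (textbook thresholds)."""
    diff = abs(PAULING_EN[elem_a] - PAULING_EN[elem_b])
    if diff < 0.4:
        return "nonpolar covalent"   # electrons shared nearly evenly
    elif diff < 1.8:
        return "polar covalent"      # electrons favor the more electronegative atom
    else:
        return "ionic"               # electrons effectively transferred

print(bond_type("H", "H"))    # nonpolar covalent: identical atoms, difference of zero
print(bond_type("H", "O"))    # polar covalent
print(bond_type("Na", "Cl"))  # ionic
```

Identical atoms always fall in the first branch, which is the purely covalent case described above.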
Potassium is a mineral found in many foods that aids your body in balancing fluids. It works in conjunction with sodium. Potassium is also responsible for proper muscle contraction and the muscles' resting phase after physical exertion. Levels of this mineral should be kept steady and not drop too low or climb too high; extreme highs or lows of potassium levels can be fatal, so watching your diet carefully is important. Most of our potassium intake comes from our diet and is found in fruits and vegetables, though certain additives and foods can block the body's absorption of potassium.
Potassium works with the sodium in your body to keep water retention to a minimum and to keep your body well hydrated. According to the George Mateljan Foundation, most potassium in your body is stored inside your cells while most of the sodium in your body is stored in the fluid that surrounds your cells. Too much sodium can block potassium absorption and raise fluid levels, causing swelling of the body. Salt is the main form of sodium taken in on a daily basis by many people. Canned processed vegetables and fruits also contain high amounts of sodium and should be kept to a minimum or avoided altogether. Snack foods, such as chips and cookies, contain loads of sodium and should only be consumed in moderation. Get into the habit of reading labels when grocery shopping to help you determine if the convenience of canned processed foods is worth the risk.
Caffeine can be found in teas, colas, coffee and candy. Consuming these items in large quantities can cause your potassium levels to drop. Caffeine inhibits absorption of potassium by "clogging" the kidneys and liver. Your liver is responsible for releasing potassium into your bloodstream after it has filtered it. Your liver can be seen as a large filter that strains bad bacteria from the good minerals that your body needs. Bad bacteria go to waste; good minerals get distributed equally. If you take in large amounts of caffeine, you are clogging the filter. The good minerals cannot get through, causing deficiencies.
Drinking alcohol in excess thins your blood and causes dilution of the potassium in your bloodstream. Alcohol also dehydrates the body and blocks absorption of potassium. Again, you must consider the liver because alcohol has direct effects on this organ. Excessive drinking can cause strain on the liver, increasing the risk for diseases such as cirrhosis. When strain is put on your liver, its function slows tremendously. Your body will not get steady amounts of the minerals it needs to survive. Slowing the liver can also cause immune system sluggishness, which can cause sickness, further harming potassium levels in the bloodstream.
The first successful landing on the Moon was achieved by Luna 2, a Soviet spacecraft that reached the lunar surface in 1959. Luna 2 did not return to Earth: it impacted the surface, becoming the first man-made object to reach another celestial body. (The first photographs of the Moon's far side were returned later that year by its successor, Luna 3.) Following Luna 2, many Moon space missions ensued; the United States reached the lunar surface for the first time in 1962, when its Ranger 4 spacecraft struck the Moon.
The Soviets welcomed the arrival of the Luna 2 on the Moon with relief; this spacecraft made its way to the Moon following the unsuccessful landing attempts of its earlier cousin, the Luna 1. The Luna 1, also a Soviet spacecraft, headed toward the Moon earlier in 1959. It never reached its destination, but nevertheless recorded valuable scientific data. The Luna 1, equipped with various scientific instruments, revealed a lack of magnetic field around the Moon.
The Luna 2 featured the same shape and electrical design as the Luna 1. Both had cone shapes and contained complex instrumentation, including magnetometers and meteorite detectors. The Luna 2 followed a direct trajectory towards the Moon, reaching its destination about 36 hours after liftoff and landing on September 13, 1959.
The English won the naval battle handily, aided by some fortuitous inclement English Channel weather, and emerged as the world's strongest naval power, setting the stage for later English imperial designs. Elizabeth was a master of statecraft. She inherited her father's supremacist view of the monarchy, but showed great wisdom by refusing to directly antagonize Parliament. She acquired undying devotion from her advisement council, who were constantly perplexed by her habit of waiting until the last minute to make decisions (this was not a deficiency in her makeup, but a tactic that she used to advantage). She used the various factions (instead of being used by them), playing one off another until the exhausted combatants came to her for resolution of their grievances. Few English monarchs enjoyed such political power, while still maintaining the devotion of the whole of English society (Elizabeth I (1558-1603 AD) a Queen with the Heart of a King).
From there, she remained a strong leader throughout her reign, and even in today's society people remember her successes more than her failures, which is why many consider her one of the greatest leaders of all time.
Unfortunately, Queen Elizabeth did not please everyone during her reign, and this led to setbacks. When the Catholics decided to withdraw their support, further troubles followed for both her and the country.
The first decade of Elizabeth's reign found the Catholics relatively quiet and content. They were settled mainly in the north and west of England, and accepted the 1559 religious settlement. They believed Elizabeth to be illegitimate and thus ineligible to be queen, but neither Pope Paul IV nor his successor, Pius IV, seriously challenged her title. She was not even excommunicated until 1570. The two greatest European powers, Spain (the Hapsburg Empire) and France, were cautious but friendly. England had long been a balance between their competing interests. And as mentioned earlier, Philip II of Spain had even sought to marry Elizabeth. For her part, the queen took care not to disturb calm waters.
Her decision not to marry also worked against her: it set a great many people against her, people who could affect her reign. And although her early reign appeared calm, it was perhaps too calm; trouble was hidden beneath the surface, and she would eventually be forced to choose sides.
Europe was caught in bloody religious turmoil. There was a Protestant rebellion in the Netherlands and Philip II sent the duke of Alva to crush it. There was now a massive military power directly across the Channel from England. Elizabeth's council could only wonder - once Alva's force completed its bloody business there, would he then look to England? And that same year, Mary Stuart fled her disastrous reign in Scotland to seek Elizabeth's help. She needed an army to recover her throne from Protestant rebels who had forced her abdication and imprisoned her. Elizabeth and her councilors were aghast. Mary was the true queen of England in the eyes of Catholic Europe, as well as some Catholic Englishmen. And she was now in England, on her way to becoming the greatest quandary of Elizabeth's reign. Just as Elizabeth had been the inevitable focus of conspiracies and plots against Mary I's rule, Mary queen of Scots would be the focus of discontent against Elizabeth. And if Elizabeth should die, naturally or otherwise, Mary had the strongest claim to the English throne. All of the Protestant councilors were terrified.
As this peaceful period came to an end, her flaws began to show in her attempts to keep everyone happy, which is not a sustainable leadership strategy; such a balance cannot be maintained indefinitely.
For the queen, her cherished and precarious balance, successfully maintained for a decade, was falling to pieces. She took the precaution of imprisoning Mary queen of Scots in a variety of secure castles. At first, this 'imprisonment' was little more than an inconvenience since Mary wished to return home. She sincerely believed Elizabeth would help her, as a fellow queen and cousin. She never recognized the political danger she brought to bear upon her 'sweet sister'. Elizabeth was told by the Protestant lords in Scotland that Mary was unwelcome; she faced certain death if she returned. Her infant son (whose birth caused Elizabeth to exclaim, 'Alack, the Queen of Scots is lighter of a bonny son, and I am but of barren stock!') was now king. The Scots also plied Elizabeth's council with evidence of Mary's complicity in her second husband's murder. Would the queen of England lend her support to such a woman? It was indeed a vexing problem. Elizabeth settled upon appointing a commission to investigate the charges against Mary (Elizabeth 1).
Even though she tried to keep the peace within the country, she did not keep the peace within her own family. A better leader would have kept the peace with her cousin and led by example.
Elizabeth was always of two minds regarding her cousin. She recognized the danger which Mary represented, but she was acutely conscious of Mary's status as a sovereign queen unlawfully deposed by her subjects. She could not impugn her cousin's dignity without risking damage to the ideal of royal prerogative. The trick was to deprive Mary of her standing as a sovereign. Mary's own behavior, in Scotland and England, gave Elizabeth a distinct advantage. Even staunch Catholic allies were troubled by Mary's reported crimes. Perhaps she was innocent of complicity in her second husband's murder, but she had married James Hepburn, the earl of Bothwell in a Protestant ceremony. And the evidence of the 'Casket Letters' (now believed to be false) supported the theory that Mary and Bothwell had an adulterous affair and then plotted Darnley's murder. This erosion of Mary's reputation necessarily alienated her moderate supporters. But for the extremists, such flaws could be overlooked for the greater good of overthrowing the heretic Elizabeth (Queen Elizabeth a good leader).
Despite these setbacks, she was an excellent leader, and she is remembered as such. As one contemporary description put it: 'Proud and haughty, as although she knows she was born of such a mother, she nevertheless does not consider herself of inferior degree to the Queen, whom she equals in self-esteem; nor does she believe herself less legitimate than her Majesty, alleging in her own favour that her mother would never cohabit with the King unless by way of marriage, with the authority of the Church.
She prides herself on her father and glories in him; everybody saying that she also resembles him more than the Queen does and he therefore always liked her and had her brought…
Last week, I discussed how biodiesels carry promise as a transitional fuel in a more environmentally conscious economy. However, biodiesels do not seem a viable long-term alternative to gasoline or diesel, given the difficulty of producing them in the quantities demand would require, not to mention the fact that they can only mitigate emissions, not eliminate them. There is another technology that could prove to be a fitting long-term solution, and that technology is the hydrogen fuel cell.
Fuel cells are similar to batteries, as they convert chemical energy into electrical energy. A fuel cell consists of a conducting material (the electrolyte) pressed between two catalyst-coated electrodes: an anode and a cathode. In fuel cells which run on hydrogen (which are not the only kind of fuel cell), the assembly pumps hydrogen gas over the anode and oxygen (usually from air) over the cathode. The anode strips the electrons from the gaseous hydrogen, separating it into bare, positively charged hydrogen nuclei and free electrons. The electrolyte conducts the bare nuclei through to the cathode, while the electrons are routed through an external circuit, producing an electrical current. At the cathode, the hydrogen recombines with oxygen to form water, which is flushed away as exhaust. (see How Stuff Works, c. 2013)
Hydrogen fuel cells have the potential to be more efficient than combustion engines, with 40-60% of the energy remaining after waste heat, as opposed to diesel's 22%. (Wikipedia, 2013) In practice, this gap is narrower, but the technology shows promise. Furthermore, since the reaction produces water as waste instead of CO2, carbon monoxide, nitrogen oxides or the other pollutants that result from combustion engines, the harmful emissions from a car (or factory) running on hydrogen would be zero.
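The efficiency ceiling here is set by thermodynamics. As a rough illustration (the values below are standard textbook figures for the hydrogen-oxygen reaction, not taken from the cited sources), the theoretical maximum is the fraction of the reaction's energy that can be extracted as electrical work:

```python
# H2 + 1/2 O2 -> H2O(l), standard-state values at 25 °C
DELTA_G = 237.1  # kJ/mol: Gibbs free energy, the maximum electrical work obtainable
DELTA_H = 285.8  # kJ/mol: total reaction enthalpy (higher heating value)

eta_max = DELTA_G / DELTA_H
print(f"Theoretical maximum fuel-cell efficiency: {eta_max:.1%}")  # about 83%
```

Real cells fall well short of this ceiling because of internal losses, which is why practical figures land in the 40-60% range.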
A large number of technological hurdles exist to making this technology affordable for individuals to use, from safe storage of hydrogen to cell durability to hydrogen production. The last of these is the most pressing issue, as hydrogen can be difficult to produce in a manner that is efficient, economical and ecologically sound. The standard method involves a reaction between steam and methane, which produces substantial amounts of carbon dioxide. (see Wikipedia, 2013) Needless to say, this quite nearly defeats the purpose of using hydrogen as a fuel in the first place.
However, all of these technological issues appear surmountable. The Department of Energy’s fuel cell initiative appears to have made some progress in reducing the cost of producing fuel cells, and private companies seem to have fuel-cell automobiles ready for production. The traditional process of producing hydrogen from steam reforming can be made more “green,” in theory, by sequestering CO2 byproducts. Alternatives to steam reforming abound, including the development of an artificial leaf that uses solar energy to split water into hydrogen and oxygen.
The hard sciences give us reason to hold an optimistic view of the utility of hydrogen fuel cells. It is economics—the dismal science—which gives us reason to cringe. The main reason we have yet to find hydrogen cars roaming the streets is infrastructure. The Department of Energy's progress report indicates the existence of twenty-five hydrogen fuel stations in the United States. As with biodiesel, the question isn’t “Can we make a fuel cell?” It is simply: “Where do I fuel up?”
And there, ladies and gentlemen, lies the rub.
At the center of the Milky Way Galaxy, there is a supermassive black hole. This is normal — at the center of almost every galaxy there is a black hole that is millions or billions of times heavier than our sun. As galaxies move through the universe, they occasionally collide (the Milky Way is currently colliding with the Sagittarius Dwarf Galaxy) and over a long time 1, the cores of these galaxies eventually crash into each other, resulting in two supermassive black holes orbiting one another. The two black holes then draw closer together, orbiting faster and faster as they fall into one another. This phenomenon should be common, based upon how frequently galaxies collide; however, we had never detected any closely orbiting supermassive black holes. Due to this lack of evidence, there has been much debate in the scientific community over whether it is even possible for black hole orbits to be smaller than 1 parsec (about 3.26 light years) 2, which is about the resolution of our modern telescopes.
That is, until now. Research done by scientists at Columbia University has confirmed the existence of two supermassive black holes in orbit separated by 0.007 – 0.017 parsecs (8 – 20 light days)! This phenomenon is difficult to detect by observation because it occurs at the centers of colliding galaxies: the distances between galaxies are so great that our telescopes cannot distinguish two closely orbiting supermassive black holes from a single, very massive one.
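The unit conversion behind that separation figure is easy to sanity-check. A quick sketch (the constants are standard astronomical values):

```python
LY_PER_PARSEC = 3.26156  # light years in one parsec
DAYS_PER_YEAR = 365.25

def parsecs_to_light_days(pc):
    """Convert a distance in parsecs to light-days."""
    return pc * LY_PER_PARSEC * DAYS_PER_YEAR

print(f"{parsecs_to_light_days(0.007):.1f} light days")  # ~8.3
print(f"{parsecs_to_light_days(0.017):.1f} light days")  # ~20.3
```

which matches the 8 – 20 light-day range reported for the pair.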
So how did they do it? Using quasars! Quasars are dense regions of hot gases and material surrounding a supermassive black hole. As the black hole's immense gravity attracts material, the gas it draws in is shredded and falls inward, piling up around the black hole to form a hot, rotating disk known as a quasar. These disks radiate enormous amounts of energy and light as they circle the black hole.
Now, if each of the supermassive black holes orbiting one another has its own quasar, the quasars will be orbiting each other as well. The quasars are so bright that we can detect them with our telescopes, though it is still difficult to tell the two apart in the sky. What can be seen, however, are the changes in light emitted from what appears to be a single quasar.
The scientists took advantage of the flickering light from the quasars to try to tell them apart. From their observations, they concluded that the source was either one quasar rotating obscenely fast or, as they showed, two supermassive black holes orbiting in close proximity!
This is the first discovery of supermassive black holes orbiting each other this fast. It's so fast that Einstein's theories of relativity apply! It is an important discovery because it settles a debate about the formation and collision of galaxies. Furthermore, this discovery introduces some good methods for identifying other quasar fluctuations resulting from orbiting supermassive black holes. It also reinforces our hypothesis about the frequency of galactic collisions.
- But do not fear: the spaces between stars are so vast that as galaxies collide, the inhabitants of a planet would never notice. When galaxies collide, their cores get closer and closer together, but the stars and planets on the outskirts of a galaxy, like our solar system, will never collide with anything. All that will happen is that our night sky will start to look a little different over time.
- The sun is about 8 light minutes from Earth, meaning it takes light from the sun about 8 minutes to reach us; if the sun disappeared right now, no one on Earth would notice for 8 minutes. A light year, then, is the distance that light travels in a year. For perspective, Pluto is about 5.5 light hours away.
As the Common Core State Standards are implemented in the classroom, the shift to higher-order thinking is on! One way that teachers themselves can adjust their teaching to accommodate new standards of learning is through the use of formative assessment. Formative assessment use by teachers can help them determine what their students are not learning, giving them the ability to adjust their teaching – in the moment – to move all learners forward.
Embedding formative assessment in everyday teaching is a proven way to improve classroom learning. We’ve blogged consistently about strategies and techniques and have generally sung its praises, but here are two research experiments that demonstrate its success:
1. Informal and formal methods of collecting evidence of student understanding have been shown to enable teachers to make positive instructional changes. In 1989, Thomas Carpenter, Department of Curriculum & Instruction at the University of Wisconsin’s School of Education, and some fellow researchers examined one informal method by randomly assigning 20 first-grade teachers to participate in a month-long workshop. During this workshop, teachers focused on posing problems, questioning students regarding their problem-solving strategies, and listening to those strategies. (Using knowledge of children’s mathematics thinking in classroom teaching: An experimental study. American Educational Research Journal, 26(4), 499–531.)
When compared to teachers in the control groups, these teachers had a better understanding of their students’ abilities and were better able to predict performance. In addition, students in these classes outperformed their peers on a mathematics achievement test. The researchers hypothesized that the teachers’ ability to understand the processes that students were using may have helped them to adapt instruction; they would try different activities, resulting in higher achievement.
2. In 1991, Douglas Fuchs, Head of Special Education at Vanderbilt University, and some of his research colleagues investigated a more formal method of collecting information of student learning. These researchers trained teachers to use a set of curriculum-based formative assessments to systematically collect and use evidence of student proficiency. The researchers randomly assigned 33 teachers to a treatment and control group and found that teachers using the program made more instructional adjustments than those teachers who relied on informal classroom observations. (Effects of curriculum-based measurement and consultation on teacher planning and student achievement in mathematics operations. American Educational Research Journal, 28(3), 617–641.)
Systematically eliciting evidence of student learning day-by-day and minute-by-minute can provide invaluable information to teachers. By highlighting student thinking and misconceptions, and eliciting information from all students, teachers can collect representative evidence and therefore better plan instruction based on the current understanding of the entire class.
A clinical trial is a research study in volunteer human subjects to determine the safety and efficacy of new treatments, screening methods, preventive techniques, or diagnostic methods for a disease. New devices, drugs, procedures, and medical innovations must be thoroughly tested to ensure that they are safe and effective for human patients. Human trials are only conducted after both laboratory and animal studies show promising results.
The ultimate goal of HIV/AIDS clinical trials is to find more effective and safer ways to diagnose, treat, and prevent HIV/AIDS.
Researchers develop ideas for clinical trials when they have a hypothesis about something. For instance, if there have been reports of pregnant HIV patients who received HIV treatment and did not pass the virus on to their babies, researchers may choose to investigate the safety and effectiveness of anti-HIV drugs (antiretrovirals) during pregnancy.
Before a clinical trial is conducted, a specific set of rules, called a protocol, is developed.
A protocol includes information on types of patients who can participate in the trial, the schedule of tests, procedures, medications/dosages, and length of the study.
During a clinical trial, the researchers monitor the participants' health to determine the safety and effectiveness of treatment.
CLINICAL TRIAL PHASES
Phase I: In phase I clinical trials, researchers test a new drug or treatment in a small group of 20-80 patients for the first time. The goal is to evaluate the drug or treatment's safety, determine a safe dosage range, and identify side effects.
Phase II: Phase II clinical trials study the effects of a drug or treatment in a larger group of 100-300 patients. During this phase, researchers aim to determine the drug or treatment's efficacy and further assess its safety.
Phase III: During phase III trials, researchers study the effect of a drug or treatment in large groups of 1,000-3,000 patients. This type of trial is used to confirm the drug or treatment's effectiveness and monitor side effects. The drug or treatment is also compared to commonly used treatments and researchers collect information that will help ensure that the drug or treatment is used safely.
Phase IV: Phase IV clinical trials are performed after the drug or treatment has been marketed to the general public. These studies are conducted to collect information on the drug or treatment's long-term effects and side effects in various patient populations.
TYPES OF CLINICAL TRIALS
General: Many clinical trials have what is called a control group. A control group is a group of patients that serve as the basis of comparison when evaluating a new drug or treatment. The control group usually receives a placebo, which is an inactive drug (sometimes called a sugar pill) that has no treatment value. In some studies, the control group may receive the standard drug/treatment or no drugs/treatment.
The results of both groups of patients are evaluated. This allows researchers to determine whether the experimental drug or treatment actually had an effect on patients when compared to those who did not receive the experimental drug or treatment.
The gold standard for medical research is the double blind, randomized, controlled trial, which is described below.
Blinded study: During a blinded study, participants do not know whether they are part of the experimental or control group. Patients in the experimental group receive the drug or treatment that is being tested. Depending on the individual study, patients in the control group may receive a placebo or standard treatment.
Controlled trial: During a controlled trial, the experimental agent is compared to either a placebo (inactive therapy) or the standard effective treatment.
Double-blind study: During a double-blind study, neither the participants nor the researchers know which patients are receiving the experimental drug or treatment. This type of study is performed so neither the researchers nor the participants have expectations about the treatment that could influence the outcome.
Open label clinical trial: During an open label clinical trial, both the researchers and participants know which patients are receiving the experimental drug or treatment.
Randomized trial: If a trial is randomized, the participants are randomly selected to either be in the experimental group or control group. This helps ensure that subjects in both groups are comparable or as similar as possible.
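As a sketch of the idea (the participant IDs below are hypothetical), random assignment can be as simple as shuffling the enrollment list and splitting it in half:

```python
import random

def randomize(participants, seed=2024):
    """Shuffle participants and split them into experimental and control arms."""
    rng = random.Random(seed)   # fixed seed so the assignment is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

experimental, control = randomize(["P01", "P02", "P03", "P04", "P05", "P06"])
print("experimental arm:", experimental)
print("control arm:", control)
```

Real trials use more sophisticated schemes (block or stratified randomization) to keep the arms balanced on key characteristics, but the principle is the same.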
Systematic review and meta-analysis: After several studies have evaluated a particular drug or treatment, researchers may conduct a systematic review or meta-analysis. These reviews provide summaries of the trials that have been performed to date.
MEASURING THE POWER OF CLINICAL TRIALS
General: The power of a trial refers to the ability of a study to produce statistically significant results. Studies with a large number of participants that last a long period of time are more likely to produce statistically significant results than small, short-term studies.
Statistically significant results: If the results of a study are statistically significant, this means the results are not likely to be the result of chance alone. In research articles, the authors use what is called the P value to indicate the chance that an observed result was due to chance. If the P value is lower than 0.05 (p<0.05), it is traditionally accepted as an indication of statistical significance. This means that the likelihood is less than five percent that the observed difference between study groups was simply the result of chance. For instance, if the majority of HIV patients taking antiretrovirals lived longer compared to HIV patients who received no treatment, and the results are statistically significant, the medication has been shown to be beneficial for HIV patients.
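One way to see what a P value measures is a permutation test: shuffle the group labels many times and ask how often chance alone produces a difference as large as the one observed. The outcome data below are invented for illustration:

```python
import random

# Hypothetical outcomes: 1 = patient improved, 0 = did not improve.
treatment = [1] * 30 + [0] * 10   # 75% improved
control = [1] * 15 + [0] * 25     # 37.5% improved

observed = sum(treatment) / len(treatment) - sum(control) / len(control)

rng = random.Random(0)
pooled = treatment + control
extreme = 0
n_shuffles = 10_000
for _ in range(n_shuffles):
    rng.shuffle(pooled)                # scramble the group labels
    t = pooled[: len(treatment)]
    c = pooled[len(treatment):]
    if sum(t) / len(t) - sum(c) / len(c) >= observed:
        extreme += 1

p_value = extreme / n_shuffles
print(f"observed difference: {observed:.3f}, p ≈ {p_value:.4f}")  # p well below 0.05
```

A difference this large almost never arises from shuffling alone, so it would be reported as statistically significant.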
Study size and length: The more study participants and the longer the study, the more power the study has. Larger and longer studies generate more data than smaller, shorter studies. This makes it less likely that the results are simply due to chance.
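The effect of study size can be simulated directly. In the sketch below, a hypothetical treatment improves outcomes from 50% to 60%, and the simulation estimates how often trials of each size reach significance (the rates and sample sizes are invented for illustration):

```python
import math
import random

def simulated_power(n_per_arm, p_treat=0.60, p_ctrl=0.50, n_trials=2000, seed=1):
    """Estimate the fraction of simulated trials whose z statistic exceeds 1.96."""
    rng = random.Random(seed)
    significant = 0
    for _ in range(n_trials):
        t = sum(rng.random() < p_treat for _ in range(n_per_arm))
        c = sum(rng.random() < p_ctrl for _ in range(n_per_arm))
        pooled = (t + c) / (2 * n_per_arm)
        se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
        if se > 0 and (t - c) / n_per_arm / se > 1.96:
            significant += 1
    return significant / n_trials

print(f"power with  20 per arm: {simulated_power(20):.0%}")   # small
print(f"power with 500 per arm: {simulated_power(500):.0%}")  # much larger
```

The same true effect is detected far more reliably by the larger trial, which is exactly why phase III studies enroll thousands of patients.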
REPORTING TRIAL RESULTS
Medical Journals: After a clinical trial is performed, the gold standard for presenting the results is publication in a peer-reviewed professional journal. These journal articles are reviewed by experts in the same field as the authors of the studies. Journal articles contain five major sections: abstract, introduction/background, design and methods, results, and discussion.
The abstract is a short summary of the research article. It includes the researchers' objective and goals of the study. It also briefly summarizes each section of the paper and includes the authors' conclusions.
The introduction/background section usually provides a statement that explains the issue that was investigated. It also explains why the study was performed and what the researchers hypothesized (expected to prove).
In the design and methods sections, which may be combined, the researchers explain how the study was carried out. This section includes detailed information about the study participants, the treatments used, the tests that were performed, and how the data was assessed.
The results section provides detailed information of all the data that was collected. This section typically includes graphs, charts, tables, and/or pictures, as well as a statistical analysis of the results.
In the discussion section, the authors explain their interpretations of the results. Here, researchers explain what the results mean and how they might affect clinical practice. Authors also explain whether their initial hypothesis was confirmed, as well as potential limitations of the study and suggestions for additional research.
Scientific conferences: In addition to medical journals, researchers may also present their findings at conferences related to their fields of study. Some of the most significant HIV/AIDS conferences include the annual Interscience Conference on Antimicrobial Agents and Chemotherapy (ICAAC), the annual Conference on Retroviruses and Opportunistic Infections, and the biannual International AIDS Conference.
Researchers submit their study abstracts before the conference. The most interesting or groundbreaking studies are presented orally by their lead authors. Additional studies are typically presented as posters. Depending on the size of the conference, abstracts from oral and poster presentations may be published and/or made available online.
ENROLLING IN CLINICAL TRIALS
Who can participate: All HIV patients can volunteer to participate in clinical trials. However, each clinical trial has unique guidelines for who can participate in the study, called criteria. Patients interested in enrolling in a particular study must meet the criteria. This helps ensure the patient's safety and helps ensure that researchers are able to accurately prove or disprove their hypotheses. Factors that allow a patient to enroll in a clinical trial are called inclusion criteria, and factors that prevent a patient from enrolling are called exclusion criteria. Criteria may include or exclude patients based on factors such as age, medical history, gender, current medications, co-existing illnesses, and overall health.
Weighing the pros and cons: Participation in clinical trials is completely voluntary, and the decision should only be made after the patient has carefully considered the potential health benefits and risks. These risks and benefits will be different for each trial and each individual patient. It is important for patients to consult their personal healthcare providers and family members before deciding whether or not to participate in a clinical trial.
Patients will meet with the researcher(s) before being enrolled in the study. This allows patients to ask any questions and address any concerns about participating. Patients should consider writing down questions ahead of time, asking a friend or family member to join them for support, and/or recording the discussion.
Participating in a clinical trial allows patients to take an active role in their healthcare. Participants gain access to new treatments that are not available to the public and participants help others by contributing to medical research. However, risks of participating in a trial may include side effects or adverse reactions, the treatment may not be effective, the trial may take up a lot of the patient's time, and participation may require hospital visits or involve complex treatment plans.
Safety: The federal government has guidelines and safeguards to protect participants in clinical trials. All clinical trials in the United States must be approved and monitored by an Institutional Review Board (IRB) to ensure that the risks are minimal and worth the potential benefits. An IRB is an independent committee that consists of physicians, statisticians, community advocates, and other professionals.
During the trial: The process of each clinical trial is different. The research team generally includes doctors, nurses, and other healthcare professionals. Participants should closely follow the trial's protocol to ensure their safety. Participants are evaluated at the beginning and end of the trial, and their health is monitored continually throughout the trial. Some researchers will stay in touch with participants after the study to perform follow-up tests and/or questionnaires.
While enrolled in the trial, patients should continue to regularly visit their primary healthcare providers. This helps ensure that the clinical trial protocol is not interfering with the patient's regular medications or treatments.
Leaving early: Participants can choose to leave a clinical trial at any time. Patients who want to stop participating should let the researcher(s) know why they are leaving the trial.
Payment: Some clinical trials pay participants to enroll in the study, while others do not. Some trials will reimburse participants for expenses associated with the trial, such as transportation costs, accommodations, meals, or childcare. Potential study participants can discuss whether payment is offered when they meet with the researcher(s). Payment is often not offered if a patient leaves the trial early or does not adhere to protocol.
FINDING HIV/AIDS CLINICAL TRIALS
AIDSinfo: AIDSinfo is a U.S. Department of Health and Human Services (DHHS) project that provides current, federally-approved information about HIV/AIDS research, treatment, and prevention to patients, healthcare providers, and the general public. This website contains information about ongoing HIV/AIDS clinical trials that are both federally and privately funded.
ClinicalTrials.gov: ClinicalTrials.gov provides patients and medical professionals with information about clinical trials that are recruiting study participants. This database, which is updated regularly, contains information on more than 36,100 clinical studies around the world that are both federally and privately funded. Each listing includes the trial's purpose and eligibility requirements.
MEDLINE: MEDLINE, a database of more than 10 million references, is a comprehensive source of peer-reviewed medical literature. The database provides free access to research abstracts. Patients can click on the journal links to purchase the full text of articles. Additionally, patients can visit their local library to obtain copies of specific journal articles.
This information has been edited and peer-reviewed by contributors to the Natural Standard Research Collaboration (www.naturalstandard.com).
- AIDSinfo. http://aidsinfo.nih.gov. Accessed May 29, 2007.
- Aronson JK. What is a clinical trial? Br J Clin Pharmacol. 2004 Jul;58(1):1-3.
- ClinicalTrials.gov. http://clinicaltrials.gov. Accessed May 29, 2007.
- National Institute of Allergy and Infectious Diseases. www3.niaid.nih.gov. Accessed May 29, 2007.
- Natural Standard: The Authority on Integrative Medicine. www.naturalstandard.com. Copyright © 2007. Accessed May 29, 2007.
- San Francisco AIDS Foundation. www.sfaf.org. Accessed May 29, 2007.
- The Body: The Complete HIV/AIDS Resource. www.thebody.com. Accessed May 29, 2007.
Copyright © 2011 Natural Standard (www.naturalstandard.com) |
Osmosis is the random but directional movement of water molecules from a place where there are many of them to a place where there are fewer. A teaspoon of salt disappears in a glass of water because the individual salt molecules have electrical charges that attract water molecules. The water molecules surround and separate the individual salt molecules, keeping them apart, which is why the salt disappears from sight. The fact that water is drawn to electrically charged molecules is an essential feature that explains why osmosis is important for living things.
Water Molecules Can Move
Diffusion is the random but directional movement of molecules down their concentration gradient. This means that molecules like to have elbow room, so they will move away from places where they are crowded by molecules of the same type. Osmosis is a special case of diffusion: water not only moves because it wants more elbow room, but is also drawn to places that have molecules that attract it. Osmosis is the movement of "free" water molecules, meaning the ones that are not "busy" surrounding and isolating salt molecules, from a place where there are many free ones to a place where there are fewer.
Tonicity describes the saltiness of a liquid that is outside a cell relative to the liquid inside it. Tonicity has three different conditions in which osmosis changes the shapes of cells. A hypertonic solution is one that is saltier than the inside of a cell, so water will be attracted out of a cell. A hypotonic solution is one that is less salty than the inside of a cell, so water will be attracted into the cell by the salts in the cell, which causes the cell to swell. An isotonic solution is one that has the same amount of salt as the inside of a cell, so the same amount of water moves in as moves out, meaning the cell does not change shape.
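The three tonicity conditions can be summarized as a simple classifier. This is only a toy sketch of the definitions above; the function name and salt concentrations are invented for illustration.

```python
def tonicity(outside_salt, inside_salt):
    """Classify the solution outside a cell relative to the inside.
    Water moves by osmosis toward the saltier side."""
    if outside_salt > inside_salt:
        return "hypertonic"  # water leaves; the cell shrinks
    if outside_salt < inside_salt:
        return "hypotonic"   # water enters; the cell swells
    return "isotonic"        # no net movement; shape unchanged

print(tonicity(0.9, 0.3))  # hypertonic
```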
Turgor Pressure in Plant Cells
Plant cells purposely use osmosis to change their shape. A plant cell is surrounded by a cell wall, which gives the cell its shape and structural support. Since the cell wall is stiff, it prevents the cell from bursting if the cell swells from taking in more water. If a plant wants to stiffen itself, it pumps more salts into its cells, which draws water in by osmosis. The swelling of the cells makes the plant firm and erect.
Water Retention in Kidneys
Controlling osmosis is one way that the body retains water. When the human body is low on water, the brain tells the kidneys to retain more water by concentrating urine. The brain releases a hormone called antidiuretic hormone (ADH), which flows in the bloodstream to the kidneys. The kidneys contain many tubes called collecting ducts. The collecting ducts are normally impermeable to water, meaning that if water is in the collecting duct, it is on its way to the bladder. However, ADH causes the walls of the collecting ducts to open water channels. This allows water that would otherwise become urine to flow out of the collecting duct and reenter the bloodstream.
When you walk around the school look for, or ask about, the following features that demonstrate good practice for supporting children with Speech, Language and Communication Needs (SLCN):
- Visual support systems such as visual timetables, targets on the desk, targets shown on the whiteboard, prompt cards (for example a card, with a picture, to remind a child to listen for their name) and photos.
- A classroom environment that is not too cluttered and where equipment is clearly marked with a label saying what it is.
- Teaching that incorporates use of visual and tactile approaches including use of real objects, practical activities, pictures, video.
- Staff using non-verbal communication to support what they are saying, for example gesture, pointing – or maybe even signing.
- Careful seating arrangements that allow a child with SLCN to be near to the front, and facing the teacher, for example tables placed in a horseshoe shape or tables that can be easily moved around.
- Children given time to respond to allow time for thinking. Time for planning work is also allocated before children are required to begin writing, for example in literacy children are given extra time to think about the key things to include in a story such as the main characters, what is going to happen.
- Strategies are used to ensure a child is paying attention for example the teacher says their name before giving an instruction.
- Language is not too complicated and instructions are short and repeated for those who need it.
- Consistent vocabulary is used, where the same word is used all the time when teaching new subjects (for example take away is used, but not minus or subtract) and understanding is checked where necessary.
- Opportunities for a child to work at their own level, following their own task or targets if needed. This might mean that a child works on slightly different work, at the right level for them, with some extra support from a teaching assistant.
- Additional resources are available if needed, for example IT software, alternative recording sheets with less information or where less writing is needed, work planning sheets.
- Teaching assistant has the necessary skills and knowledge to support children with SLCN. This means they will have received some training about support for pupils with this type of difficulty, and have been given information about SLCN in the classroom by a speech and language therapist.
- Evidence that teaching staff are aware of speech and language therapy (SLT) goals and these are incorporated into lessons wherever possible, for example science vocabulary that a child finds difficult is practised before the lesson and repeated as part of the activities during the lesson.
How to Find the percent of a number
In this tutorial, we learn how to find the percent of a number. First, take the percent and convert it into a decimal. Then, multiply this by your other number. After this, you will come up with the answer to the problem. An example of this would be if you had the question: what number is 25% of 40. First, you would find the decimal of 25, which is .25. After this, you would multiply that by 40 and then come up with the correct answer, which is 10. Repeat this process for the different equations, then you will be prepared to take on more difficult problems! |
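The procedure in the tutorial can be written as a tiny function:

```python
def percent_of(percent, number):
    """Convert the percent to a decimal, then multiply by the number."""
    return (percent / 100) * number

# What number is 25% of 40?  25% -> .25, and .25 * 40 = 10
print(percent_of(25, 40))  # 10.0
```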
An eclipse is a great opportunity to leverage student excitement into real, interdisciplinary learning. In particular, eclipses offer great opportunities to promote interest in STEM (science, technology, engineering, and mathematics) subjects and to show linkages between these subjects and literacy, history, culture, and the arts. You can find many outstanding activities on the web, but to get you started, we've included a few below, divided into three main categories.
Activities to Start Before Eclipse Day
Activities relating to observing the eclipse. You’ll want to start these activities before eclipse day, so that students can make use of them on the day of the eclipse.
- Pinhole Camera Activity (grades 3+). This activity helps students build their own pinhole camera from a milk carton and use it both to explore the optical principles involved and to view the eclipse on eclipse day.
- Dynamic Art Activity (all grade levels). This activity asks students to create an artistic design by projecting images of the Sun through multiple pinholes. It is very simple and fun!
- Photography Activity (grades 5+). For those who want to photograph the eclipse, this activity will help you get started.
Post-Eclipse Interdisciplinary Activities
Activities that will help students build upon their eclipse experience. These activities link the eclipse to literacy, poetry, culture, history, and more.
- Cultural Significance Project (all grades). This project, which varies for different grade levels, asks students to explore at least one culture’s ancient beliefs about eclipses.
Eclipse Science Activities
Activities that focus more on the science behind eclipses, including understanding how eclipses occur and the unique aspects of the Earth-Moon system that make total eclipses possible.
- Using Shadow Measurement (grades 8-12). Students explore the use of similar triangles to measure a tall object of unknown height.
- Exploring Shadows (grades 8-12). Students explore principles of shadows in a controlled setting.
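The similar-triangles idea behind the shadow-measurement activity can be sketched in a few lines. The function name and sample numbers are illustrative, not part of the activity itself.

```python
def estimate_height(ref_height, ref_shadow, target_shadow):
    """Similar triangles: under the same sun angle, the ratio of
    height to shadow length is the same for both objects."""
    return ref_height * (target_shadow / ref_shadow)

# A 1 m stick casts a 2 m shadow; a flagpole casts a 16 m shadow:
print(estimate_height(1.0, 2.0, 16.0))  # 8.0 (meters)
```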
Additional Activity Resources
We hope you’ll enjoy the above activities, which we developed ourselves for Big Kid Science. But there are many more activities that you can find at other sources. Here are a few that we can highly recommend.
- The Night Sky Network offers this great set of activities. We especially recommend the “Yardstick Eclipse Activity” to help students understand upcoming eclipses.
- NASA has this great set of activities for a variety of age groups.
- The Universe Awareness team has created numerous great activities, including some related to eclipses. |
Generalized geographic map of the United States in Late Cretaceous time.
Eighty million years ago, during the Age of Dinosaurs, the geography of North America was very different from that of today. Mountain ranges have changed considerably since this, the Cretaceous Period. The Appalachian Mountains were probably lower and less conspicuous as a source of sediment than they are today; apparently they supplied appreciable amounts of coarse sediment only in the Southeast (Tennessee, Alabama, Mississippi, and Georgia). The Rocky Mountains, as they are today, did not exist; instead a giant trough in which marine sediment slowly accumulated occupied this part of the West. Shallow, warm inland seas covered large portions of the Southern and Western United States, as shown on the map. Large portions of California were under water, and in eastern California, Nevada, Arizona, Idaho, eastern Oregon, Washington, and Alberta, a belt of the Earth's crust slowly rose to form a new mountain range. Debris from this vast highland extended eastward into Utah, Wyoming, Colorado, and New Mexico. Still farther east, Kansas, Nebraska, and adjacent States to the north and south were covered by warm, extensive, but shallow seas in which beds of limestone slowly formed.
The Cretaceous Period marked the last extensive covering of the North American continent by the sea. Since then, sea level has dropped and the continent has gradually emerged to its present size and shape.
Appears on these pages
Late Cretaceous refers to the second half of the Cretaceous Period, named after the famous white... |
Conceptual Understanding in Introductory Physics XV
Posted: April 12, 2015
Symmetry is part of the foundation of contemporary physics, but it is seldom emphasized in introductory physics in proportion to its significance. There may be some value in discussing how symmetry applies to otherwise traditional introductory problems rather than just replicating numerical examples from a textbook (even a good textbook). These questions illustrate symmetry in electromagnetic theory, but could trivially be adapted to gravitational interactions in mechanics.
Assume space is isotropic. Using only symmetry, argue (no numbers, no equations, only words) that
(a) the electric force between two charged particles must lie along the line connecting the two particles.
(b) the electric field of a very large (so large that its size need not matter) uniformly charged disk must be perpendicular to the disk and must not vary in magnitude with respect to distance from the disk.
(c) the electric field of a particle must be radially toward (or away from) the particle and if it varies in magnitude, must only do so with respect to distance from the particle.
(d) the electric field of a very long (so long that its length need not matter) uniformly charged rod must be perpendicular to the rod and if it varies in magnitude, must only do so with respect to perpendicular distance from the rod.
Here’s a hint. A symmetry implies some transformation that leaves some property unchanged. In each case, think of a change (perhaps a rotation and/or translation) that leaves the system (in this case, a charge distribution) unchanged and then look at any consequences that follow.
As usual, let me know if you present these questions to your students. I'm always interested in the results.
Why Do Substances React?
Chemical (and thus, biochemical) reactions only occur to a significant extent if they are energetically favorable. If the products are more stable than the reactants, then in general the reaction will, over time, tend to go forward. Ashes are more stable than wood, so once the energy of activation is supplied (e.g., by a match), the wood will burn. There are plenty of exceptions to the rule, of course, but as a rule of thumb it's pretty safe to say that if the products of a reaction represent a more stable state, then that reaction will go in the forward direction.
There are two factors that determine whether a reaction converting reactants into products is favorable: enthalpy and entropy.
Simply put, enthalpy is the heat content of a substance (H). Most people have an intuitive understanding of what heat is... we learn as children not to touch the burners on the stove when they are glowing orange. Enthalpy is not the same as that kind of heat. Enthalpy is the sum of all the internal energy of a substance's matter plus its pressure times its volume. Enthalpy is therefore defined by the following equation:
H = U + PV
where (all units given in SI)
- H is the enthalpy
- U is the internal energy, (joules)
- P is the pressure of the system, (Pascals)
- and V is the volume, (cubic meters)
If the enthalpy of the reactants while being converted to products ends up decreasing (ΔH < 0), that means that the products have less enthalpy than the reactants and energy is released to the environment. This reaction type is termed exothermic. In the course of most biochemical processes there is little change in pressure or volume, so the change in enthalpy accompanying a reaction generally reflects the change in the internal energy of the system. Thus, exothermic reactions in biochemistry are processes in which the products are lower in energy than the starting materials.
As an example, consider the reaction of glucose with oxygen to give carbon dioxide and water. Strong bonds form in the products, reducing the internal energy of the system relative to the reactants. This is a highly exothermic reaction, releasing 2805 kJ of energy per mole of glucose that burns (ΔH = -2805 kJ/mol). That energy is given off as heat.
| ΔH | Heat flow | Surroundings | Exothermic? |
|------|---------------|--------------|-------------|
| < 0 | releases heat | heats up | yes |
| > 0 | gains heat | cools down | no |
Entropy (symbol S) is the measure of randomness in something. It represents the most likely of statistical possibilities of a system, so the concept has extremely broad applications. In chemistry of all types, entropy is generally considered important in determining whether or not a reaction goes forward based on the principle that a less-ordered system is more statistically probable than a more-ordered system.
What does that mean, really? Well, if the volcano Mt. Vesuvius erupted next to a Roman-Empire era Mediterranean city, would the volcano be more likely to destroy the city, or build a couple of skyscrapers there? It's pretty obvious what would happen (or, rather, what did happen) because it makes sense to us that natural occurrences favor randomness (destruction) over order (construction, or in this case, skyscrapers). Entropy is just a mathematical way of expressing these essential differences.
When it comes to chemistry, there are three major concepts based on the concept of entropy:
- Intramolecular states (Degrees of freedom)
- The more degrees of freedom (how much the molecules can move in space) a molecule has, the greater the degree of randomness, and thus, the greater the entropy.
- There are three ways molecules can move in space, and each has a name: rotation = movement around an axis, vibration = intramolecular movement of two bonded atoms in relation to each other, and translation = a molecule moving from place to place.
- Intermolecular structures
- When molecules can interact with each other by forming non-covalent bonds a structure is often created.
- This tends to reduce randomness (and thus entropy) since any such association between molecules stabilizes the motion of both and decreases the possibilities for a random distribution.
- Number of possibilities
- The more molecules present, the more ways of distributing the molecules in space - which because of statistical probabilities means more potential for randomness.
- Also, if there is more space available to distribute the molecules within, the randomness increases for precisely the same reason
- solid matter (least entropy) << liquids << gases (most entropy)
Changes in entropy are denoted as ΔS. For the reasons stated above (in the volcano situation), the increase of entropy (ΔS > 0) is considered to be favorable as far as the Universe in general is concerned. A decrease in entropy is generally not considered favorable unless an energetic component in the reaction system can make up for the decrease in entropy (see free energy below).
Gibbs Free Energy
Changes of both enthalpy (ΔH) and entropy (ΔS) combined decide how favorable a reaction is. For instance, burning a piece of wood releases energy (exothermic, favorable) and results in a substance with less structure (CO2 and H2O gas, both of which are less 'ordered' than solid wood). Thus, one could predict that once a piece of wood was set on fire, it would continue to burn until it was gone. The fact that it does so is ascribed to the change in its Gibbs Free Energy.
The overall favorability of a reaction was first described by the prominent chemist Josiah Willard Gibbs, who defined the free energy of a reaction as
- ΔG = ΔH - T ΔS
where T is the temperature on the Kelvin temperature scale. The formula above assumes that pressure and temperature are constant during the reaction, which is almost always the case for biochemical reactions, and so this book makes the same assumption throughout.
The unit of ΔG (for Gibbs) is the "joule" in SI systems, but the unit of "calorie" is also often used because of its convenient relation to the properties of water. This book will use both terms as convenient, but the preference should really be for the SI notation.
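As a numerical sketch of the formula above (the ΔH and ΔS values here are made up, not taken from the text), the sign of ΔG follows directly from ΔG = ΔH - TΔS:

```python
def gibbs_free_energy(delta_h, delta_s, temp_k):
    """ΔG = ΔH - T·ΔS, with ΔH in J/mol, ΔS in J/(mol·K), T in K."""
    return delta_h - temp_k * delta_s

# Hypothetical reaction: exothermic (ΔH < 0) and entropy-increasing (ΔS > 0),
# so both factors favor the forward direction:
dg = gibbs_free_energy(-2000.0, 10.0, 298.0)
print(dg)  # -4980.0 — negative, so the forward reaction is favored
```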
What Does ΔG Really Mean?
If ΔG < 0 then the reactants should convert into products (signifying a forward reaction)... eventually. (Gibbs free energy says nothing about a reaction's rate, only its probability.) Likewise, for a given reaction if ΔG > 0 then it is known that the reverse reaction is favored to take place. A state where ΔG = 0 is called equilibrium, and this is the state where the reaction in both the forward and reverse directions take place at the same rate, thus not changing the net effect on the system.
How is equilibrium best explained? As an example, seat yourself on the living room carpet with your most gullible younger relative (a little nephew, niece, or cousin will work fine). Take out a Monopoly set, keep one ten-dollar bill for yourself, and give your little relative the rest. Now you each give the other 5% of all that you have. Do this again, and again, and again until eventually... you both have the same amount of money. This is precisely what the equilibrium of a reaction means, though equilibrium only very rarely results in an even 50-50 split of products and reactants.
ΔG naturally varies with the concentration of reactants and products. When ΔG reaches 0, the reaction rate in the forward direction and the reaction rate in the reverse direction are the same, and the concentration of reactants and products no longer appears to change; this state is called the point of chemical equilibrium. You and your gullible little relative have stopped gaining and losing Monopoly money, respectively; you both keep exchanging the same amount each turn. Note again that equilibrium is dynamic. Chemical reaction does not cease at equilibrium, but products are converted to reactants and reactants are converted to products at exactly the same rate.
A small ΔG (that is, a value of ΔG close to 0) indicates that a reaction is somewhat reversible; the reaction can actually run backwards, converting products back to reactants. A very large ΔG (that is, ΔG >> 0 or ΔG << 0) is precisely the opposite, because it indicates that a given reaction is irreversible, i.e., once the reactants become products there are very few molecules that go back to reactants.
The food we consume is processed to become a part of our cells; DNA, proteins, etc. If the biochemical reactions involved in this process were reversible, we would convert our own DNA back to food molecules if we stop eating even for a short period of time. To prevent this from happening, our metabolism is organized in metabolic pathways. These pathways are a series of biochemical reactions which are, as a whole, irreversible. The reactions of a pathway occur in a row, with the products of the first reaction being the reactants of the second, and so on:
- A ⇌ B ⇌ C ⇌ D ⇌ E
At least one of these reactions has to be irreversible, e.g.:
- A ⇀ B ⇌ C ⇌ D ⇀ E
The control of the irreversible steps (e.g., A → B) enables the cell to control the whole pathway and, thus, the amount of reactants used, as well as the amount of products generated.
Some metabolic pathways do have a "way back", but it is not the same pathway backwards. Instead, while using the reversible steps of the existing pathway, at least one of the irreversible reactions is bypassed by another (irreversible) one on the way back from E to A:
- E ⇀ X ⇌ C ⇌ B ⇀ A
This reaction is itself controlled, letting the cell choose the direction in which the pathway is running.
Free energy and equilibrium
For ΔG, the free energy of a reaction, standard conditions were defined:
- concentration of reactants and products at 1M
- temperature at 25°C
- acidity at pH 7.0
Under these standard conditions, ΔG0' is defined as the standard free energy change.
For a reaction
- A + B ⇌ C + D
the ratio of products to reactants is given by keq' (=keq at pH 7.0):
- keq' = [C][D] / ([A][B])
The relationship of ΔG0' and keq' is
- ΔG0' = - R T ln keq' = - R T 2.303 log10 keq'
- R = 8.315 [J mol-1 K-1] (molar gas constant)
- T = temperature [K]
In theory, we can now decide if a reaction is favorable (ΔG0' < 0). However, the reaction might need a catalyst to occur within a reasonable amount of time. In biochemistry, such a catalyst is often called an enzyme.
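The ΔG0'-keq' relationship above can be checked numerically. This sketch uses the R value given in the text; the keq' values and function name are hypothetical.

```python
from math import log

R = 8.315  # molar gas constant, J/(mol·K)

def delta_g0(keq, temp_k=298.15):
    """ΔG0' = -R·T·ln(keq'), in J/mol."""
    return -R * temp_k * log(keq)

# keq' > 1 means products are favored, so ΔG0' is negative (and vice versa):
print(delta_g0(10.0) < 0)  # True
print(delta_g0(0.1) > 0)   # True
```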
DNA melting, or DNA denaturation, is central to the life cycles of all organisms and to the origin of replication. The specific structure of the origin of replication varies from species to species, and its particular sequence lies within the genome. DNA replication, which begins at the origin of replication, occurs in all living organisms, both prokaryotes and eukaryotes.
Thermodynamically, there are two important contributions to DNA denaturation: breaking all of the hydrogen bonds between the bases in the double helix, and overcoming the stacking energy of the bases lying on top of each other. There are several methods to denature DNA; heating is the most common in the laboratory. The sample is simply heated above its melting point, and the unstacking of the DNA can then be monitored. The melting point and ease of denaturation depend on several factors: the length of the DNA, its base composition, its condition, and the composition of the buffer. For instance, longer DNA contains more hydrogen bonds and more intermolecular forces than shorter DNA, so denaturing it requires more time and heat. Base composition is a key factor because an A:T pair forms two hydrogen bonds while a G:C pair forms three; regions of DNA rich in A:T therefore melt more rapidly than regions rich in G:C. The condition of the DNA matters because whether the DNA is relaxed, supercoiled, linear, or heavily nicked determines how much intermolecular force exists within the double helix. Finally, the buffer composition is essential because it controls the amount of ions present in solution during the entire process.
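As an illustration of the base-composition effect, the Wallace rule (a standard laboratory rule of thumb for short oligonucleotides, not taken from this text) estimates melting temperature from the A:T and G:C counts:

```python
def wallace_tm(seq):
    """Wallace rule for short oligos (roughly < 14 nt):
    Tm ≈ 2°C per A or T, plus 4°C per G or C — reflecting that
    G:C pairs (three H-bonds) stabilize the duplex more than
    A:T pairs (two H-bonds)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGCATGC"))  # 24 — GC-richer sequences melt higher
```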
Biologically, DNA denaturation happens inside the cell during DNA replication or transcription. In both cases, denaturation is an essential first step of the process. Most of the time, denaturation occurs because a protein or enzyme binds to a specific region of DNA, and this binding opens, or denatures, the helix. DNA melting thus refers to the denaturation of DNA, which changes its structure from double stranded to single stranded: the double-stranded deoxyribonucleic acid unwinds and separates into two single strands as the hydrogen bonds between the bases break. The double helix first becomes partially denatured and then separates into two single strands in random coils. Because denaturation is reversible, the separated strands can re-form a double helix in a process known as DNA annealing.
The theory of psychoanalysis developed by Sigmund Freud bases its view of human nature on determinism. The structure of personality consists of three systems: the id, ego, and superego. The id is the primary source of energy and the basis of instincts existing within the unconscious mind and is driven by what Freud called “the pleasure principle.” This illogical, amoral entity serves to reduce tension and pain while restoring pleasure. The ego controls and regulates personality, remaining in touch with reality while formulating plans of action to satisfy needs. Finally, the superego is the individual’s moral code, judging whether action is good or bad. This component also regulates traditions and ideals that are handed down from generation to generation.
Freud identifies key concepts and levels of unconsciousness. Through psychoanalysis, the unconscious is studied with a focus on dreams, behavior, slips of the tongue, posthypnotic suggestion, and techniques like free association that give the client an opportunity to search their thoughts for links to various issues and problems. In this therapy, unconscious thoughts and processes are the basis for all forms of problem symptoms and behaviors.
A significant component of Freud’s approach is the concept of anxiety. Defined as a state of tension, this feeling motivates individuals to action in order to alleviate the uncomfortable state. Although Freud identified a number of types of anxiety, how an individual processes these inputs determines the effect these feelings have on the individual and on the overall experience of living.
Individuals employ defense mechanisms to cope with anxiety and keep the ego from becoming overwhelmed. For example:
- Repression results in the individual burying and forgetting the traumatic event in order to reduce painful thoughts and emotions.
- Denial is used to negate the responsibility of accepting or integrating information into one’s life and schema. The individual essentially blinds himself to reality and the potential pain that may accompany accepting it.
- Reaction formation defends against a threatening impulse by responding with the opposite impulse, concealing emotions that would call one’s own identity into question.
- Projection and displacement redirect unacceptable impulses onto other people or objects, and represent a key component of Freudian theory. Interestingly enough, much of this catalog of defense mechanisms was developed by Freud’s daughter, Anna.
Freud’s psychosexual stages begin in the first year of life with the oral stage, as a child is fixated on sucking and satisfying the need for food and pleasure. This shifts to the anal stage when the child is ages one to three and begins to develop independence, expressing strong emotions, and accepting personal power. The third stage of psychosexual development is the phallic stage centering on the child’s unconscious and incestuous desires for the parent of the opposite sex. The next stage is latency, where previous sexual urges are replaced by a focus on school, playmates, and sports. This is also a time of socialization as children develop relationships with others. And finally, the genital stage marks the last step in Freud’s psychosexual development and begins at age twelve, usually concluding at age eighteen, although may continue further into life. This stage is a time of sexual development and remains in place as long as the individual remains mentally healthy. |
Johannes Kepler published Book IV of the Epitome of Copernican Astronomy in 1617. In Book IV, Kepler formally presents his three laws of planetary motion that resolve the problems associated with the epicycles of Copernicus’ heliocentric model. Kepler’s First Law is called the Law of Ellipses. It states that “the orbits of the planets are ellipses, with the Sun at one focus.” Kepler’s Second Law is called the Law of Equal Areas in Equal Time. It states that “the line between a planet and the sun sweeps out equal areas in the plane of the planet’s orbit over equal times.” Kepler’s Third Law is called the Law of Harmony. It states that “the time required for a planet to orbit the sun, called its period, is proportional to half the long axis of the ellipse raised to the 3/2 power.” Although Kepler discovered these Laws, he did not know how they worked. Newton solved this problem with his Theory of Gravity.
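Kepler's Third Law can be checked numerically. In units of astronomical units and years, Earth's semi-major axis and period are both 1, so the proportionality reduces to T = a^(3/2). The sketch below assumes those solar units:

```python
def orbital_period_years(a_au: float) -> float:
    """Kepler's Third Law in solar units: T^2 = a^3, i.e. the period in
    years equals the semi-major axis (half the long axis of the ellipse,
    in AU) raised to the 3/2 power."""
    return a_au ** 1.5

# Earth: a = 1 AU gives T = 1 year; Mars: a ~ 1.524 AU gives T ~ 1.88 years
print(orbital_period_years(1.0))
print(orbital_period_years(1.524))
```

Newton later showed that this relation follows from his law of gravitation, with the constant of proportionality depending on the Sun's mass.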
Nicolaus Copernicus published the Revolutions of the Heavenly Spheres in 1543. In Book I, Copernicus presents his heliocentric model of the universe. He argues that the Sun, not the Earth, is the center of the universe. He also correctly determined the order of the planets. He wrote that the planets revolve around the Sun in the following spheres – Mercury, Venus, Earth, Mars, Jupiter, and Saturn. However, he stubbornly held onto the belief that the planets must revolve around the sun in perfect circles; for God is perfect. This caused him to mistakenly retain Ptolemy’s system of epicycles to explain the motions of the heavens.
In The Almagest, Ptolemy outlines antiquity’s geocentric model of the universe. The model is based on the assumptions that the universe is spherical, the Earth is a sphere, the Earth is at the center of the universe, and the Earth does not move. Although modern scientific advances have determined that many of these assumptions are false, Ptolemy correctly conjectures that the Earth is spherical (and yes, I know that the Earth is not a perfect sphere) which is an accomplishment considering that Ptolemy wrote The Almagest in 150 AD, a time when modern astronomical observation instruments were unavailable. |
- Ancylostoma infection
- 1. Hookworm, an intestinal parasite that usually causes diarrhea or cramps. Heavy infestation with hookworm can be serious for newborns, children, pregnant women, and persons who are malnourished. Hookworm infections occur mainly in tropical and subtropical climates and affect about 1 billion people — about one-fifth of the world's population. One of the most common species of hookworm, Ancylostoma duodenale, is found in southern Europe, northern Africa, northern Asia, and parts of South America. A second species, Necator americanus, was once widespread in the southeastern US early in the 20th century. Hookworms have a complex life cycle that begins and ends in the small intestine. Hookworm eggs require warm, moist, shaded soil to hatch into larvae. These barely visible larvae penetrate the skin (often through bare feet), are carried to the lungs, go through the respiratory tract to the mouth, are swallowed, and eventually reach the small intestine. This journey takes about a week. In the small intestine, the larvae develop into half-inch-long worms, attach themselves to the intestinal wall, and suck blood. The adult worms produce thousands of eggs. These eggs are passed in the feces (stool). If the eggs contaminate soil and conditions are right, they will hatch, molt, and develop into infective larvae again after 5 to 10 days. Hookworm infection is contracted from contact with soil contaminated by hookworm, by walking barefoot or accidentally swallowing contaminated soil. Children — because they play in dirt and often go barefoot — are at high risk. Since transmission of hookworm infection requires development of the larvae in soil, hookworm cannot be spread person to person. Chronic heavy hookworm infection can damage the growth and development of children. The loss of iron and protein retards growth and mental development, sometimes irreversibly. The first signs of hookworm infection are itching and a rash at the site where the larvae penetrate the skin. 
These signs may be followed by abdominal pain, diarrhea, loss of appetite and weight loss, and anemia. Hookworm can also cause difficulty breathing, enlargement of the heart, and irregular heartbeat. Hookworm infection has been known to be fatal, particularly in infants. 2. The diagnosis is made by identifying hookworm eggs in a stool sample. In countries where hookworm is common and reinfection is likely, light infections are often not treated. In the US, hookworm infections are generally treated for 1-3 days. The drugs are effective and appear to have few side effects. Another stool exam should be repeated 1 to 2 weeks after therapy. If the infection is still present, treatment is repeated. Iron supplements are in order if anemia is present. People infected with hookworm are half as likely to have asthma as those without it, in keeping with the hygiene hypothesis of asthma — the theory that because of better hygiene, people today tend to get fewer infections and so are at greater risk of allergic conditions such as asthma. To prevent hookworm, do not walk barefoot or touch the soil with bare hands in areas where hookworm is common or where feces are likely to be present in the soil or sand.
Medical dictionary. 2011. |
American Indian music is the music that is used, created or performed by Native North Americans, specifically traditional tribal music. In addition to the traditional music of the Native American groups, there now exist pan-tribal and inter-tribal genres as well as distinct Indian subgenres of popular music including: rock, blues, hip hop, classical, film music and reggae, as well as unique popular styles like waila ("chicken scratch").
Vocalization and percussion are the most important aspects of traditional Native American music. Vocalization takes many forms, ranging from solo and choral song to responsorial, unison and multipart singing. Percussion, especially drums and rattles, is a common accompaniment that keeps the rhythm steady for the singers, who generally use their native language or non-lexical vocables (nonsense syllables). Traditional music usually begins with slow and steady beats that grow gradually faster and more emphatic, while various flourishes like drum and rattle tremolos, shouts and accented patterns add variety and signal changes in performance for singers and dancers.
Song texts and sources
Native American song texts include both public pieces and secret songs, said to be "ancient and unchanging", which are used only for sacred and ceremonial purposes. There are also public sacred songs, as well as ritual speeches that are sometimes perceived as musical because of their use of rhythm and melody. These ritual speeches often directly describe the events of a ceremony, and the reasons and ramifications of the night.
Vocables, or lexically meaningless syllables, are a common part of many kinds of Native American songs. They frequently mark the beginning and end of phrases, sections or songs themselves. Often songs make frequent use of vocables and other untranslatable elements. Songs that are translatable include historical songs, like the Navajo "Shi' naasha'", which celebrates the end of Navajo internment in Fort Sumner, New Mexico in 1868. Tribal flag songs and national anthems are also a major part of the Native American musical corpus, and are a frequent starter to public ceremonies, especially powwows. Native American music also includes a range of courtship songs, dancing songs and popular American or Canadian tunes like "Amazing Grace", "Dixie", "Jambalaya" and "Sugar Time". Many songs celebrate harvest, planting season or other important times of year.
Source: Wikipedia, Link |
About 7000 years ago in Italy, early farmers practiced an unusual burial ritual known as “defleshing.” When people died, villagers stripped their bones bare, pulled them apart, and mingled them with animal remains in a nearby cave. The practice was meant to separate the dead from the living, researchers say, writing in the latest issue of the journal Antiquity.
“[Defleshing] is something which occurs in burial rites around the world but hasn't been known for prehistoric Europe yet," says John Robb, an archaeologist at the University of Cambridge in the United Kingdom and leader of the research project. Robb and his team examined the scattered bones of at least 22 Neolithic humans—many children—who died between 7200 and 7500 years ago. Their remains were buried in Scaloria Cave, a stalactite-filled grotto in the Tavoliere region of southeastern Italy, where Robb says that they provide the "first well-documented case for early farmers in Europe of people trying to actively deflesh the dead."
The cave—sealed off until its discovery in 1931—was uniquely able to preserve the human remains, which were mixed randomly with animal bones, broken pottery, and stone tools. This level of preservation is unusual: "Neolithic assemblages are often very challenging to interpret, as they are commonly broken, mixed up, and poorly preserved," says Martin Smith, a biological anthropologist at Bournemouth University in the United Kingdom, who was not involved in the research.
Neolithic communities typically buried their dead beneath or beside their homes or on the outskirts of settlements. But in this case, farmers from villages as far as 15 to 20 kilometers away scattered the defleshed bones of their dead in the upper chamber of Scaloria Cave. But why did they do it, and what does this tell us about how they viewed life and death?
To answer these questions, Robb's team performed detailed analyses of the skeletal remains, first excavated in 1978 and now at the University of Cambridge on loan from the Archeological National Museum in Manfredonia, and their context. The results showed that few whole skeletons were present in the cave—only select bones had been interred. Some of the bones had light cut marks, suggesting that only residual muscle tissue needed to be removed by the time of defleshing. That meant the remains were likely deposited as much as a year after death.
Given the evidence, Robb and his team theorize that the defleshing process was part of a long, multistage burial. It isn't known what happened to the bodies in the early stages of these rites, though the lack of animal damage on the bones suggests that they weren't exposed to the elements, meaning that they were either sealed away or buried deep in the ground. What is clear is that the rites ended up to a year later, when select bones were cleaned of their remaining flesh and placed in the cave. This likely marked the end of the mourning process—the deceased's social significance among the community of loved ones now severed by this final transformation into cleaned bones. Relatives were then free to place the remains among other discarded items, animal bones, and broken vessels, perhaps as a symbolic gesture, showing that the transition from life to death was now complete.
Robb contrasts that process with present-day mourning rituals: "Death is a cultural taboo for us. People in our culture tend to shun death and try to have brief, once-and-for-all interactions with the dead. But in many ancient cultures, people had prolonged interaction with the dead, either from long, multistage burial rituals such as this one, or because the dead remained present as ancestors, powerful relics, spirits, or potent memories."
But what was the significance of the cave? Robb and his team further hypothesize that due to the similarity in appearance, bones might have been regarded as equivalent to stalactites. Indeed, noticing the connection between water dripping from the cave ceiling and stalactite formation, the Neolithic Italians had placed vessels beneath the falling liquid to collect it; as the substance that created "stone bones," it likely had a spiritual power. It’s thus possible, the team says, that the cleaning process and deposition in the cave was a way for the living to return the bones to their stonelike origins, both in appearance and location, completing a cycle of incarnation.
The team's comparison between long bones and stalactites is “extremely suggestive," says Mark Pearce, an archaeologist at the University of Nottingham in the United Kingdom, who was not involved in the study. "We know that caves have a great ritual importance in Italian prehistory, and specifically the water that drips from stalactites." Pearce adds: "The Scaloria Cave, with its difficult-to-access lower cave, was clearly a special place for the people of the Tavoliere, and we may imagine that it was thus a suitable place for final death rituals."
"It may be that they regarded life as originating from forces or substances underground," Smith says, "or they may have believed subterranean places to be where the soul traveled to after death. Either way, it gives a level of insight into Neolithic beliefs that we wouldn’t normally have access to." |
Have you ever watched your child play with his/her peers and worried whether he was being cooperative or not?
As parents, when we see that our child is fighting with his/her peers or not playing cooperatively with the other children, we begin to worry.
In this article, we will introduce some ways to help your child become more cooperative.
Children begin to pay attention to their peers from around 3 years old and until they are in elementary school they practice cooperating with each other on a daily basis.
When they become adults, being able to cooperate with those around them becomes a very important trait when forming relationships in the workplace and in social life.
As adults, we begin to understand our place in the world and how it relates to those around us.
It is important for parents to teach their children how to be cooperative by acting as a role model in daily life and in conversations.
Let’s look at the 3 main techniques one at a time.
When children are between 5-6 years old, they begin to recognize the feelings of those around them.
During this period, if parents and teachers teach them the importance of playing with their peers, they will also learn the importance of cooperation.
If your child has trouble joining his peers in activities, you can try joining as well and playing with your child and the other children as a group.
When your child feels comfortable, you can slowly step out and watch from the sidelines.
However, when your child refuses to join the group, it is important that you do not coerce him/her.
Forcing your child to join the group when he/she does not want to will actually have the opposite effect.
In order to be cooperative, children need to learn how to consider other people’s feelings.
Cooperation is being able to understand the relationship between you and others and to understand each other’s feelings.
If your child is nasty to a friend, ask him/her why he/she did what they did and encourage them to think about how their friend feels.
As a result, your child will begin to consider the fact that everyone has trouble expressing their feelings sometimes.
This positive way of thinking will become the foundation for your child’s cooperativeness in the future.
They will also develop a more flexible way of thinking and be better able to react to setbacks and adversity.
When in a group environment, one needs to follow the rules to avoid trouble.
It is important that parents teach their children this from the beginning.
Let’s make sure that they understand that they need to follow the rules and remind friends that are not following the rules that it is important that they do.
It is also important that they know their place in the group. In some groups, they may play the role of leader and in other groups they may be a follower.
Some children always want to be the leader but being able to look at the group environment and determine whether they should be the leader or not is a form of cooperation and is a very important skill.
Children who do not have this skill may run into trouble.
It is important that parents help their children understand the importance of cooperation and help them gain the skills they need to work successfully with their peers.
Parents should keep in mind that they are the role models for their children so they need to set the right examples.
Let’s show our children the importance of cooperation and help them learn how to be cooperative on their own. |
Roman conquest of Britain began in earnest in AD43 when Emperor Claudius’s four-legion army captured the southern territories and made them part of the Roman Empire. By AD85 the Romans had pushed northwards into what is now Scotland and defeated the northern tribes at the battle of Mons Graupius. In AD117 Emperor Hadrian came to power, determined to consolidate the Roman frontiers. Construction of Hadrian’s intensely ambitious boundary probably began early in his reign, and he came to inspect the work in progress in AD122.
Building the Wall took three legions of men at least six years to complete. The scale of the Wall’s design was epic. Hadrian’s original plan set out that the Wall would be made of stone or turf, that it would be a maximum of 4.6 metres high and approximately 3 metres thick, and that it would run a total of 80 Roman miles from one sea to the other. Along the Wall, each mile was marked by a milecastle, or small fort, and between each pair of milecastles two towers, or turrets, were built at intervals of a third of a mile. A number of extra forts on the line of the Wall itself were a late addition to Hadrian’s original plan, and were perhaps ordered by him when he visited in AD122. These forts were set around 7 miles apart along the Wall, straddling it where the land would allow.
Where the Wall finished, this network of forts continued down to Ravenglass on the Cumbrian coast. Another feature that was introduced after the original designs were completed was a 6-metre deep ditch called the Vallum. This extra defence ran parallel to the Wall’s southern side, protecting the Romans from a rear attack and it is still visible today along much of the route.
When Hadrian died in AD138, the new emperor, Antoninus Pius, pushed the frontier north to run between the Forth and the Clyde. However, occupation of the Antonine Wall lasted only around 25 years before the Romans returned to refurbish and restore Hadrian’s Wall. They did not restore the Vallum, which suggests that the area around Hadrian’s Wall had been pacified by this time. The Wall was manned by auxiliary units recruited from every corner of the empire, each garrison numbering between 500 and 1000 men. It’s thought that each fort along the route was designed to house a single auxiliary unit, which would have comprised either infantry or cavalry soldiers, or both. Cavalry forts were established at key locations near river crossings or where major roads crossed the line of the Wall. Communities of merchants, traders and camp followers were established close to the forts. These settlements would have been home to a cosmopolitan mix of locals and incomers from across the Empire, including retired soldiers, serving soldiers’ dependents, and individuals who made a living from servicing the fort - everyone from blacksmiths and food-vendors to innkeepers and prostitutes.
Although details are scarce regarding the Wall’s 300-year active lifespan, historical sources suggest that frontier battles and skirmishes occurred from time to time during the late second and early third centuries AD with a major uprising not long after AD180. The Emperor Septimius Severus brought a vast army to Britain in AD 208 and campaigned far north of Hadrian’s Wall. He fell ill and died in York in AD 211. His sons made a peace with the northern tribes that seems to have lasted for over 100 years. When Roman Imperial rule over Britain ended in the early fifth century, many of the forts along Hadrian’s Wall continued to be occupied through the fifth and into the sixth centuries AD. The fort commanders and their soldiers appear to have taken responsibility for local security.
The Wall seems to have survived in a reasonable state of preservation into the Elizabethan period of the 16th century when there were even proposals to rebuild it due to the ongoing tension and conflict with Scotland and the lack of security created by the lawless Border Reivers. From this period though, stone from the Wall was increasingly taken and used to build houses, churches and farms across Cumbria, Northumberland and Tyne & Wear. In the 1800s people interested in antiquities began to speak out and prevent this from happening. By the mid-19th century, momentum behind the drive to preserve the Wall had grown and Victorian archaeologists and historians began to contribute to an understanding of the Roman frontier that continues to grow today. |
What are the origins of Easter?
Question: "What are the origins of Easter?"
Answer: The origins of Easter are obscure. It is often assumed that the name Easter comes from a pagan figure called Eastre (or Eostre) who was celebrated as the goddess of spring by the Saxons of Northern Europe. According to the theory, Eastre was the “goddess of the east (from where the sun rises),” her symbol was the hare (a symbol of fertility), and a festival called Eastre was held during the spring equinox by the Saxons to honor her. This theory on the origin of Easter is highly problematic, however.
The major problem with associating the origin of Easter with the pagan goddess Eastre/Eostre is that we have no hard evidence that such a goddess was ever worshiped by anyone, anywhere. The only mention of Eastre comes from a passing reference in the writings of the Venerable Bede, an eighth-century monk and historian. Bede wrote, “Eosturmononath has a name which is now translated as ‘Paschal month,’ and which was once called after a goddess of theirs named Eostre, in whose honor feasts were celebrated in that month. Now they designate the Paschal season by her name, calling the joys of the new rite by the time-honoured name of the old observance” (De Temporum Ratione). And that’s it. Eostre is not mentioned in any other ancient writing; we have found no shrines, no altars, nothing to document the worship of Eastre. It is possible that Bede simply extrapolated the name of the goddess from the name of the month.
In the nineteenth century, the German folklorist Jakob Grimm researched the origins of the German name for Easter, Ostern, which in Old High German was Ostarâ. Both words are related to the German word for “east,” ost. Grimm, while admitting that he could find no solid link between Easter and pagan celebrations, made the assumption that Ostara was probably the name of a German goddess. Like Eastre, the goddess Ostara was based entirely on supposition and conjecture; before Grimm’s Deutsche Mythologie (1835), there was no mention of the goddess in any writings.
So, while the word Easter most likely comes from an old word for “east” or the name of a springtime month, we don’t have much evidence that suggests anything more. Assertions that Easter is pagan or that Christians have appropriated a goddess-holiday are untenable. Today, however, it seems that Easter might as well have pagan origins, since it has been almost completely commercialized—the world’s focus is on Easter eggs, Easter candy, and the Easter bunny.
Christians celebrate Easter as the resurrection of Christ on the third day after His crucifixion. It is the oldest Christian holiday and the most important day of the church year because of the significance of the crucifixion and resurrection of Jesus Christ, the events upon which Christianity is based (1 Corinthians 15:14). In some Christian traditions, Easter Sunday is preceded by the season of Lent, a 40-day period of fasting and repentance culminating in Holy Week and followed by a 50-day Easter season that stretches from Easter to Pentecost.
Because of the commercialization and possible pagan origins of Easter, many churches prefer to call it “Resurrection Sunday.” The rationale is that, the more we focus on Christ and His work on our behalf, the better. Paul says that without the resurrection of Christ our faith is futile (1 Corinthians 15:17). What more wonderful reason could we have to celebrate! Whether we call it “Easter” or “Resurrection Sunday,” what is important is the reason for our celebration, which is that Christ is alive, making it possible for us to have eternal life (Romans 6:4)!
Should we celebrate Easter or allow our children to go on Easter egg hunts? This is a question both parents and church leaders struggle with. Ultimately, it comes down to a matter of conscience (Romans 14:5). There is nothing essentially evil about painting and hiding eggs and having children search for them. What is important is our focus. If our focus is on Christ, our children can be taught to understand that the eggs are just a fun game. Children should know the true meaning of the day, and parents and the church have a responsibility to teach the true meaning. In the end, participation in Easter egg hunts and other secular traditions must be left up to the discretion of parents.
Recommended Resource: The Case for the Resurrection of Jesus by Gary Habermas
In computer software, a hash is a relatively small integer derived from a larger integer, a string, or some other kind of object, such as a key or a file. The number is produced by a hash function.
For example, a simple, though very poor, way to calculate a hash of a string is to add together the ASCII codes of each character. The sum is a hash of the string. Note that two strings such as "ab" and "ba" would have the same hash. It is quite normal for two inputs to hash to the same number; in fact it is ultimately inevitable, because the hash is shorter than the data it summarizes. This is called a 'collision'.
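The additive scheme just described fits in a few lines, and shows immediately why it is poor: any rearrangement of the same characters, such as "ab" and "ba", collides.

```python
def additive_hash(s: str) -> int:
    """Toy hash: sum of the character codes of the string.
    Poor, because anagrams always produce the same sum."""
    return sum(ord(c) for c in s)

print(additive_hash("ab"))  # 97 + 98 = 195
print(additive_hash("ba"))  # 98 + 97 = 195 -- a collision
```

Practical non-cryptographic string hashes avoid this by mixing in the position of each character, for example by multiplying the running total by a constant before adding each code.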
Hashes are often used in conjunction with a hash table; but need not be.
Alternatively, a hash can be calculated and stored, then used as a quick way to prove that two things aren't equal: if the hashes are different, you know straight away that the things themselves aren't the same.
However, if the hashes match, you often still need to do a full comparison of the data to rule out a collision. In practice, when the data really do differ, the full comparison will usually fail almost immediately, because the hash mixes its input in a quite different way from a byte-by-byte comparison, so two colliding inputs rarely also agree in their first bytes.
Also, if a large enough hash and a very carefully chosen hash function are used, the chance of two different things having the same hash can be made incredibly small, small enough that you would not find an example if you searched for billions of years on the fastest computer. In that case, you may not need to do the full comparison at all; this is used as a way of instantly comparing files, even if the files are many gigabytes in size. This idea is used very extensively.
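The instant-comparison idea can be sketched with a cryptographic hash from Python's standard library. SHA-256 here is a representative choice of strong hash, not one named by the text, and the helper names are illustrative:

```python
import hashlib

def digest(data: bytes) -> str:
    """Fixed-size (256-bit) fingerprint of arbitrarily large data."""
    return hashlib.sha256(data).hexdigest()

def probably_equal(a: bytes, b: bytes) -> bool:
    """Different digests prove inequality instantly; with a strong
    256-bit hash, matching digests make equality overwhelmingly likely."""
    return digest(a) == digest(b)

print(probably_equal(b"many gigabytes of data", b"many gigabytes of data"))  # True
print(probably_equal(b"original file", b"modified file"))                    # False
```

In a real file-synchronization or deduplication tool, the digests are computed once and stored, so later comparisons never need to reread the file contents at all.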
Hashes are used for digital signing of documents, files, or programs- any modification to the document changes the hash. The hashes used for this purpose are very carefully designed using cryptographic theory for extremely high security so that people deliberately trying to construct files which have the same hash are unable to do so in any reasonable time. |
BRITAIN experienced a tidal wave of immigration as soon as the last Ice Age ended, new data has shown.
Previously it was thought that Britain's repopulation was a slow process led by a few pioneering explorers.
Researchers now know that humans responded rapidly to climate change and moved into Britain en masse as soon as the ice receded.
Up to 20,000 years ago a huge ice sheet extended as far south as Norfolk. Then temperatures rose rapidly.
Once the glaciers retreated, people returned to what was then the British peninsula.
Combining the dating of artefacts with climate information has shown that reoccupation was almost an instantaneous event across northern and central Europe, says Oxford Brookes University.
If you have read the sister post on my r-controlled vowels here, you will have heard my spiel about how I like to set foundations and build up when it comes to mastering phonics skills. I have these levels I like to use. First is identification, then isolation, blending, segmenting, addition (adding sounds together), and substitution. Once students get those foundations set, they are well prepared to spell the words, read the words, and write with the words. Now if you are saying, "say what?" to all that, I have actually broken down each area below and put some suggested activities with each. With many of the areas, some phonemic awareness activities can be used during carpet time or in small groups. However, I have included some actual phonics ideas with each area too.
1. Identify Diphthongs. Here students are learning to identify words that contain those diphthongs. This typically involves distinguishing between words that contain a diphthong and those that don’t.
Identifying Diphthongs Activity #1: For a phonemic awareness activity, you can simply name words. Then students can do one of the following reactions below as a response to the word. Of course, for the diphthongs that sound alike, you can choose between one of the pairs to decrease confusion. I just wanted to throw some options out there for you to choose from. 😉
Students can put on a frown for words with the “ow” sound.
Students can hold their ears like it is loud for “ou”.
They can make crab claws for “aw”.
They can launch when they hear “au” by putting their hands together and making a rocket go through the air.
Students can give “the look” for words with “oo” as in look. (Haha, I love this one…isn’t she so precious! And having a class giving the look…hysterical!)
They can reach for the moon for “oo”.
They can point for “oi”.
They can shake their head with joy for “oy”.
Identifying Diphthong Activity #2: Sort words for Phonics Activity. Students can sort words that have the diphthong, and those that do not.
2. Isolate Diphthongs. Here students isolate or locate where the diphthong sound is in the word.
Isolating Activity #1: Where it is. You can place out just about anything for this activity: counters, a traditional box frame, and TOYS!!! You say a word, and students tap whether the diphthong is in the beginning, middle, or end of the word.
Isolating Activity #2: Students show the isolated sounds by writing the grapheme where it belongs.
3. Blend Diphthongs. Here students take the individual sounds and put them all together. Getting students to blend the words fluently for reading is the goal.
Blending Activity #1: Say each phoneme separately and have students blend them together. Example: Say h-oo-p, and Students say hoop.
Blending Activity #2: Blend the letters. You can write some words on some index cards and let students practice blending the words in small groups or as a center activity.
Blending Activity #3: Show your blending. Let them show you their blending on paper with activities that require them to connect words to pictures.
4. Segment Diphthongs. Here students break apart words that contain diphthongs. An example is the word boot. B-oo-t. This is different from isolating the sounds because here students are dealing with each sound in the word, instead of only isolating where the diphthong is located.
Segmenting Activity #1: Toy stomp the sounds. For a phonemic awareness activity, students use a toy to stomp out the phonemes in the word. Example: Stomp Book. B-oo-k.
Segmenting Activity #2: Segmenting in the box. You can actually do a simple phonemic activity with this. Say a word. Students point to each box as they say each segmented sound.
Segmenting Activity #3: Segmenting in the boxes (phonics style). Students segment the sounds by writing the phonemes in the boxes.
5. Add the phonemes (or graphemes) together. Here additional phonemes (or graphemes) are added.
Adding Activity #1: When aws become paws. Say some words and have students add an additional phoneme. Example: What word would you have if you added “p” to aw? Paw. What word would you have if you added “t” to oy? Toy. What word would you have if you added “h” to owl? Howl. (Isn’t the kitten sooooo adorable?)
Adding Activity #2: Word adding. If you have some dice with graphemes on them, students can do some word adding to form words.
6. Sub Diphthongs. Here students play around with the words by subbing phonemes in the word for other phonemes.
Subbing Activity #1: Turning books into cooks. Say some words and have students sub one of the phonemes for another phoneme. Example: If you take the word, book, and change “b” to “c,” what do you get? Cook. If I take the word, down, and change “d” to “cl,” what do I get? Clown.
Subbing Activity #2: Word Claws. Give some word cards to students and a toy claw. Have students pinch off the phoneme and add a different letter on top to form a new word.
Subbing Activity #3: Students can sub the beginning grapheme for another sound and illustrate the new word.
7. Moving Up to Spelling with Diphthongs. I like to incorporate spelling too. I think it is important that students have time to develop some phonemic awareness and phonics skills with the words before they are expected to memorize a bunch of spelling words though. It just makes spelling so much easier!
Spelling Activity #1: Build the words. Students can use letter manipulatives to build the words.
Spelling Activity #2: Crossword Puzzles. I love using crossword puzzles for spelling practice because students are practicing their spelling, and they don’t even realize it!
8. Comprehending Words with Diphthongs. It’s important that students read words with diphthongs within context.
Comprehending Words Activity: Students can identify words they see with the diphthongs in books or passages they read.
9. Incorporating Higher-Order Thinking with Diphthongs. I believe it is important to push the boundaries of simple identification and application to synthesis and creativity. Ok, say what? It’s important to push them up on that Bloom’s Taxonomy with higher order thinking aka use their brains!
Higher-order thinking Activity: Have students use a list of diphthong words to write a fun, short story.
Some organization inspiration. Now that I shared many activity ideas, I would like to share how I organize my no prep printables. I have these broken into different sections with the section dividers I created and include in the resource. It is so nice to have them all pulled together like this, and the levels build up from identifying to writing.
I hope I inspired you with many ideas for teaching those diphthongs! Make sure to get on my email list for some more inspiring ideas right to your box! I’m not into spamming, I promise!
Thanks for stopping by the Candy Class! |
Ancient Egypt: 5500
There are four main time periods in Ancient Egyptian History: Predynastic, Old Kingdom, Middle Kingdom, and New Kingdom. The Predynastic
period (3500-3100 BC) ended when the Upper and Lower Egyptian areas unified under the first Pharaoh, Narmer. The Old Kingdom
(2686-2181 BC) was an extremely prosperous time that allowed the Pharaohs to build pyramids. Then came the Middle Kingdom (2055-1650 BC), which was the peak of Pharaonic power. The last major period was the New Kingdom (1550-1069 BC), a final prosperous era that ended with Rameses III. Egypt continued to hold its own land, but soon the Persians, Alexander the Great, and then Augustus Caesar would have their turns battling for the land, until it became a part of the Roman Empire.
Art is an important intermediary between the living and the dead in Egyptian culture. Artists in Egypt, as in pre-historic times, were powerful people. The artists were servants of the pharaoh. They made pictures of hunting, boats, cooking, daily life, and death.
The Egyptians were the first society to create art largely based on the human form. Remember that the style
is directly connected to the communication of the art. Generally, the way artists communicate changes quickly as time goes
on. For example, Impressionism was popular for some forty years until the style shifted. The Egyptians kept the same unifying
style for over 3,000 years. How could that ever have been done? A scale or template was mathematically developed to dictate
to the artist the way art was to be created. This template was developed to keep order and consistency in the empire‘s
art and architecture. A grid was placed onto the wall before the painting or carving of a wall began. A drawing of the figure
was then placed over the grid before that design was produced on to the wall.
Egyptians are also well known for their sculptures. Teams of the most skilled sculptors created temple decorations, monuments, and likenesses of their Pharaoh. Using a scaffolding system, they chiseled, sanded, and polished stone into works of art. Much of the same process used to carve stone was applied to wooden works, including coffins, statues, furniture, boats, and many other utilitarian pieces. Other works, sculpted from metals like gold and copper, were not only great works held in high esteem by the Pharaoh, but at times also served as tools like chisels, builders' squares, and levels.
The Egyptians were among the first major innovators in the art of ceramics. Nile River clay was used to hand build many pots, vessels, and sculptures. Later, between 2494 and 2345 BC, they developed a potter's wheel that helped create symmetrical
works. These works were functional in their daily lives to prepare, serve, and store food and liquids. To better color and
decorate their wares, the Egyptians invented glazes.
Mummification is as identified with Egyptian culture as the pyramids. The four step process of making
mummies is an interesting one. First, the embalming priest takes the identity of Anubis, the jackal faced overseer
of the grave and manager of souls, as he removes the brain and organs. These important organs are
dried, wrapped, and placed into canopic jars to be overseen by the four Sons of Horus: Imsety the human who looks over the
liver, Duamutef the jackal who looks after the stomach, Hapy the ape who looks after the lungs, and Qebehsenuef the falcon
who looks after the intestines. The second step in this process is to dry the body with salt. Over
a number of days the body, on a specially designed table that helps bodily fluid runoff, is dried under a large amount of
salt. The body is stuffed with linen, salt, sawdust, and spices. The third step is for the body to be
wrapped in linen bandages. As much as four thousand square feet of linen was used in this process. Protective charms
and jewels are placed between the layers of the linen. The final step is for a mask to be placed on the dead, and the deceased is placed into one or more coffins and sarcophagi. Although we may think of the elaborate gold masks, it was more common for these to be simple planks of wood with the person's face painted on. The coffin, or as many as three, is placed into the sarcophagus, which is generally a stone box that holds the coffins.
One of the most important of the Old Kingdom’s third dynasty pharaohs was Zoser (sometimes spelled Djoser).
He ruled at a time of prosperity in Egypt from 2667-2648 BC. Because the society could economically afford to spend time,
funds, and energy on larger projects, Zoser was more than willing to approve them. As Pharaoh of Egypt, he wanted to show off his wealth and power by building himself an impressive ziggurat-like tomb at Saqqara. The building supervisor for this "Stepped Pyramid" was Imhotep.
Imhotep was very powerful in this society. He worked and was educated as an architect, doctor, high priest,
scribe, poet, astrologer, and the founder of a healing cult. With his power and knowledge, Imhotep’s legacy would last
for centuries, eventually building up to his being seen as a god. But the recognition that went to Zoser is an interesting
part of the story.
Artists at this time did not sign artwork because it was thought that the artist would take the privileges
of the dead Pharaoh. But in Zoser's Tomb, Imhotep is given credit for the design of the tomb, largely due to his high standing
in the society. This exceptional distinction gives Imhotep the label as history’s first known artist.
This structure was designed twice because the first plan was not grand enough for Zoser's illustrious opinion of himself.
The final six-stepped pyramid was a giant compared to all other royal tombs before it. It is 459 feet long, 387 feet wide,
and nearly 197 feet high. This is an impressive scale considering that the Egyptians had limited knowledge of technologies
and limited materials. In this construction, Imhotep was the first to move from mud to stone as the main building material.
He then cased the stone in a limestone shell. The Pyramid was a simple geometric shape that Imhotep selected and the builders
had the knowledge to build it. There was also a religious connection with the shape through the hieroglyphs of Ra and a relationship
with the sun. The Pyramid is the center of a large complex that is protected by a 30-foot wall. The complex includes courtyards,
temples, and other structures used for religious purposes. The burial chambers are underground, and were hidden in a maze
of tunnels to discourage grave robbers. The maze did not work in deterring these robbers, however. All that currently remains
of Zoser and all of his empire is an empty tomb and a single mummified foot.
One of the great Pharaohs of the Old Kingdom was Khufu, who lived 2589-2566 BC. Although considered a great and wise Pharaoh
in Egypt, the most lasting part of his twenty-three-year reign is his pyramid, also known as the Great Pyramid. A student in my class would be wise to know several of this pyramid's many features. It was designed to be the largest tomb in the known world. The Pharaoh, his family, servants,
and government officials were all buried in the complex around this pyramid. This single structure is 482 feet high.
It was not until 1880, that another structure exceeded it in height (the Cologne Cathedral in Germany). At its base, the pyramid
covers thirteen acres. There are over two million blocks in its construction, weighing 4,500 pounds each. That's over five million tons of limestone. A cantilever crane and other lifting devices were probably used to move the very large
stones into place. With this volume of blocks to lift into place, and keeping in mind that it took twenty-three years to build, a stone had to be set in place on average every five minutes. That is heavy work done fast!
We also should remember that the Egyptians had no knowledge of the wheel at this point of their history either. This
pyramid is also known for its false chambers, a gold top beacon, and it was originally cased in a smooth white limestone. It is considered to be the largest stone building
ever to have been built. However, there is current exploration of a newly discovered pyramid near San Bartolo, Guatemala that
might top the massive Great Pyramid in Egypt. Regardless, we cannot take away from the impressiveness of Khufu’s Pyramid.
It was so impressive of a structure that historians Herodotus and Callimachus listed this “Great Pyramid of Giza”
as one of the original Seven Wonders of the World. Of those seven, the Great Pyramid is the only one that still stands today.
When Khufu died, his body was placed in the red granite tomb chamber within a red granite sarcophagus. After the entombment,
the pharaoh's tomb was sealed with three stone slabs to act as trapdoors. This did not stop the tomb from being plundered.
At some point, the riches of Khufu were stolen along with his mummy. In all likelihood, all of the riches of Khufu are
lost for all time.
As the Middle Kingdom approached, the pharaohs lost power to local leadership. This also caused a divide between the upper
and lower regions in Egypt. The region was reunited under Mentuhotep II, who ruled from 2055-2004 BC, after crushing his enemies.
Although now united, Egypt fell into a poor economic state and small pyramids had to be made to conserve funds. Soon, tombs
were built into the sides of cliffs. Support colonnades, or rows of columns, were carved from the
solid rock of the cliff. Egypt then went to war with the Hyksos, invaders from western Asia. The Hyksos warriors had advanced weapons and horses, and Egypt was defeated. The Egyptians learned how to use these newly discovered weapons and soon became skilled with the new war technology. Egypt retook its land. This began the New Kingdom.
The Fall of Ancient Egypt:
The last great warrior pharaoh was Rameses II. Demonstrating great military, political, and economic power for sixty-six
years, Rameses II earned his nickname, Rameses the Great. With his military knowledge, he was successful on several campaigns,
but that time would come to an end. Soon the "Sea People," the predecessors of the Greeks and Turks, attempted to settle in
Egypt. They were stopped by Rameses III, but subsequent attacks by the Libyans, Assyrians, Nubians, and Persians severely weakened Egypt. The next attack on Egypt would break the country. In 333 BC Alexander the Great began his campaign against the Persians, and the following year, in 332 BC, he conquered Egypt. To legitimize his power, Alexander the Great was named Pharaoh of Egypt. He soon died, leaving the region in the power of the Macedonian empire, but it was soon divided among his generals, with Egypt falling to the Ptolemies.
The last ruler of ancient Egypt was the queen, Cleopatra. Always loyal to Egypt’s independence, she went as far as
having a relationship with Julius Caesar to keep Egypt independent. After Caesar’s assassination, the governing of Egypt
was shared between Cleopatra and Mark Antony, who would soon get married. Octavian Caesar saw this as treason against Rome,
so he waged war against Antony and Cleopatra. Fearing imminent death if captured, Antony committed suicide, an act soon followed by Cleopatra in 30 BC. Her death marks the end of pharaonic rule in Egypt and the start of Egypt's role as a part of the Roman Empire.
What is a tell? The ancient settlements of the Near East, from the Neolithic period to the Bronze Age, typically form large artificial mounds which are known from their Arabic name as tell sites. These form characteristic features of the landscape over an area reaching from Balkan Europe to north-west India. They occur in areas of low rainfall, where mudbrick was used as a construction-material. The accumulation of settlement debris seems to have been deliberately encouraged, to raise the villages above the plain for environmental reasons (protection from floods, ventilation) and increasingly also for defence. Because these settlements were typically located in optimal positions adjacent to water-sources and fertile land, they were often occupied more or less continuously for many millennia. Some developed into towns or cities, and achieved considerable size, with fortifications and sometimes extended urban areas. Not all sites at this time took the form of mounds, and the tell-mounds themselves were usually abandoned by the Iron Age in favour of flat settlements, though some (like Arbil in Iraq) are still occupied today (see picture).
Arbil, Iraq: a living tell.
These sites thus span a whole range of categories, from cities to small villages or hamlets, and the distribution of the larger examples gives a good idea of the extent of early land-occupation and the structure of settlement within it. Only a tiny proportion of them have been excavated, and only a minority have even been recorded. Much archaeological fieldwork is devoted to locating and plotting them, and dating them by the characteristic forms of pottery which they contain. Their study allows a systematic view of patterns of settlement and social organization as they changed through time.
These "accidental monuments" thus form an invaluable resource for archaeology, since they preserve a record of the material culture of successive communities, with characteristic architecture and broken artefacts of everyday life, covering the millennia which saw the emergence of farming and village life and the growth of the first urban communities. They are precious relics of a formative period of prehistory and early history. Today, with large-scale alteration of the landscape for agriculture, industry and transport, they are subject to an unparalleled degree of destruction. It is thus vital to accumulate as complete a record as possible of these relics of early human habitation, which are the primary evidence for reconstructing ancient ways of life and the development of early societies.
For an excellent introduction to the landscape archaeology of the Near East, by a writer who has been at the forefront of the investigation of tell sites, see Archaeological Landscapes of the Near East by Tony J. Wilkinson (2004). See an extract; read a review.
The information gained from satellite imagery forms a major resource for investigating such sites. The first imagery with sufficient resolution to provide recognizable pictures of tells was CORONA imagery, taken from 1960-72 for military purposes and declassified by the US government from 1995 onwards. This was eagerly acquired and analysed by archaeologists working on the early settlement-history of the Near East, notably Jason Ur and Carrie Hritz (then doing PhDs in Chicago) and Jennifer Pournelle (then doing a PhD in San Diego). These analyses provided a new perspective on key areas of early urban life.
Tell Brak, Syria: in profile, and on satellite imagery (right, a CORONA image). Tell Brak links: project site and learning site.
See also Tell Brak in ArchAtlas Visualisation
The situation was further revolutionized in 2000, however, with the Shuttle Radar Topographic Mission (SRTM) undertaken by NASA. For the first time it became possible to interrogate a systematic set of data in the form of a digital elevation model (DEM) with sufficient resolution to record these artificial phenomena. In other words, tells (or at least the larger ones) suddenly became routinely visible from space! This realization was recorded in an article in Antiquity in 2004, marking a new phase in the investigation of this phenomenon.
Tell Brak and its surroundings on the 90m SRTM digital elevation model (with tell-sites marked), and as a 3-D cutout with draped Landsat imagery.
What is this new information? During its eleven-day mission in February 2000, Space Shuttle Endeavour carried a specially-modified radar system which obtained elevation data for the greater part of the globe, capable of generating the most complete high-resolution digital topographic database of Earth. This SRTM data-set was collected at 1 arc-second resolution, the equivalent of 30m on the ground. For areas outside the US, this information has been sampled at 3 arc-seconds (which is 1/1200th of a degree of latitude and longitude, or about 90 meters) and made available in preliminary form. (It is hoped that the full-resolution data will ultimately be released.) Even at the 3 arc-seconds level of resolution, however, a multitude of early settlement-mounds are visible, and it became possible to form a preliminary picture of the distribution of these sites. The art of tellspotting was born.
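The quoted figures can be checked with a little spherical geometry (a sketch assuming a spherical Earth of mean radius 6,371 km; the function is illustrative, not part of any SRTM documentation):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, spherical approximation

def arcsec_to_meters(arcsec: float, latitude_deg: float = 0.0) -> float:
    """Ground distance spanned by an angle given in arc-seconds.

    Along a meridian the distance is independent of latitude;
    east-west distances shrink by cos(latitude).
    """
    radians = math.radians(arcsec / 3600.0)
    return EARTH_RADIUS_M * radians * math.cos(math.radians(latitude_deg))

print(round(arcsec_to_meters(1)))  # ~31 m: the quoted "30m" 1 arc-second cell
print(round(arcsec_to_meters(3)))  # ~93 m: the quoted "about 90 meters"
```

At the latitudes of the Fertile Crescent (roughly 36°N), the east-west spacing of a 3 arc-second cell shrinks to about 75 m, which is worth remembering when measuring mound diameters on the grid.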
Because settlement-mounds have characteristic sizes and shapes, and are generally situated in flat plains from which they stand out with particular clarity, the larger examples (particularly those of Bronze Age date) are relatively easy to spot by eye in areas where they are known to occur. (A more rigorous procedure will be described below.) These data make possible a whole range of archaeological investigations which are fundamental to our appreciation of ancient landscapes and their evolution. In the first place, they make it possible to provide accurate locational data for sites often poorly mapped, or described in the literature simply by distance from the nearest town. In the second place it becomes possible to prepare lists of potential sites in areas not yet subject to archaeological survey. Of course, such identifications can only be preliminary, and it is necessary to distinguish ancient settlement-mounds both from natural features (especially small volcanic phenomena) and more recent artificial accumulations. The combination of SRTM data with satellite imagery is an important step in doing this, but ultimately there is no substitute for "ground-truthing". Nevertheless the areas of intensive survey will always be limited, and by combining distribution-maps of known sites with identifications of tell sites, it is possible to gain a more complete impression of early settlement than ever before.
Location of the sample area, including parts of Turkey, Syria and Iraq, with the distribution of prominent tells visible on the SRTM digital elevation model.
To provide some idea of the scope of these data, the area between the north-east Mediterranean (Gulf of Iskenderun) and the middle Tigris (the heartland of ancient Assyria) has been rapidly scanned and prominent tell sites have been marked. This area represents the upper segment of the Fertile Crescent, the arc of fertile, rainfed land between the mountains and the steppes, where sedentary populations were concentrated. (This map thus represents a first approximation to a picture of relative population density over the entire area in later prehistoric and early historic times.) The identifications were made at a scale of approximately 1:50,000, the limit of resolution for the Landsat imagery. Only the most prominent and obvious mounds have been marked. Because the full file would be too large to upload, three sample areas have been selected (the Amuq, the upper Balikh and the eastern Jazira). For reproduction here, images at a scale of approximately 1:200,000 have been created, both as a simple SRTM terrain model, and as a Landsat 7 image draped over the terrain model (both preceded by a locator map at an intermediate scale). The full distribution-map is also provided, without the location-squares for the sample areas, in three forms: as a terrain model; as a flat Landsat image, and as a draped Landsat image. Follow the instructions to open new windows to see these images.
The sample area, with prominent mounds visible on the SRTM 90m terrain model interpreted as tell sites. Yellow squares indicate the areas illustrated at higher resolution .
Not all sites (even large ones) appear on this image: note that more tells existed in the plains now drowned by large water-reservoirs created by damming the Euphrates and Tigris, and that they will not have been spotted in areas of undulating topography, even where they are known to exist (e.g. in the Malatya basin). Nevertheless the overall pattern is probably broadly representative, and shows such large, stable settlements as being typically located (often at considerable density) in the well watered sections of flat plains. Less permanent settlements undoubtedly existed contemporaneously, albeit at lower density, in the intervening areas. The clear correlation with topographic and vegetational conditions explains the discontinuous nature of the distribution, interrupted by areas of dry country or more rugged terrain. Both the vegetation and the terrain map are required to understand the distribution. This map is the beginning of the investigation, however, and not its end! (Here begins an entirely new science - as Virchow said of Schliemann's excavations at Troy.)
One of the most intensively studied segments of the northern Fertile Crescent is the Upper Khabur catchment in Syria (grey-shaded area within the red rectangle), demonstrated from satellite imagery to have been intensively settled in early historic times. This area is a test-bed of new techniques.
There is a more sophisticated way of finding tell-sites from digital elevation data and satellite images than simply eyeballing the pictures. This is to use computer-based pattern recognition procedures, and train the program on authenticated examples. Björn Menze, of the Interdisciplinary Centre for Scientific Computing at the University of Heidelberg, outlines his answer. This experiment uses a well-studied part of the region illustrated above, the Khabur basin (studied by Jason Ur), to illustrate the power of quantitative methods.
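As a toy illustration of the underlying idea (not Menze's actual method, which trains a pattern classifier on authenticated tell examples), one can flag DEM cells that stand well above their local neighbourhood, much as a tell stands above a flat plain:

```python
def local_relief(dem, radius=1):
    """For each cell of a small elevation grid, compute its height above
    the mean of the surrounding cells (a crude prominence measure)."""
    rows, cols = len(dem), len(dem[0])
    relief = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbours = [dem[i][j]
                          for i in range(max(0, r - radius), min(rows, r + radius + 1))
                          for j in range(max(0, c - radius), min(cols, c + radius + 1))
                          if (i, j) != (r, c)]
            relief[r][c] = dem[r][c] - sum(neighbours) / len(neighbours)
    return relief

def candidate_mounds(dem, threshold=5.0):
    """Return (row, col) cells whose local relief exceeds the threshold,
    e.g. mounds standing at least ~5 m above the surrounding plain."""
    relief = local_relief(dem)
    return [(r, c) for r, row in enumerate(relief)
            for c, value in enumerate(row) if value > threshold]

# A flat plain at 300 m elevation with one 12 m mound in the centre:
plain = [[300.0] * 5 for _ in range(5)]
plain[2][2] = 312.0
print(candidate_mounds(plain))  # [(2, 2)]
```

A real workflow would operate on SRTM rasters, match mound-shaped templates at the ~90 m cell size, and validate candidates against surveyed sites; this is where trained classifiers outperform simple eyeballing or thresholding.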
See now the following publications:
B.H. Menze, J.A. Ur, and A.G. Sherratt, "Tell spotting: surveying Near Eastern settlement mounds from space"
Paper delivered at CIPA 2005, XX International Symposium (International Cooperation to Save the World's Cultural Heritage), Torino, Italy, 26 September to 1 October 2005.
Download as a pdf file from: http://cipa.icomos.org/fileadmin/papers/Torino2005/458.pdf
B.H. Menze, J.A. Ur, and A.G. Sherratt, "Detection of ancient settlement mounds – archaeological survey based on the SRTM terrain model"
To appear in the special issue of Photogrammetric Engineering and Remote Sensing devoted to documentation of the quality and characteristics of data from the Shuttle Radar Topography Mission, and to describe applications benefiting from the data, to be published in February 2006. |
The Divided Family in Civil War America, by Amy Murrell Taylor
University of North Carolina Press.
Description from publisher:
The Civil War has long been described as a war pitting “brother against brother.” The divided family is an enduring metaphor for the divided nation, but it also accurately reflects the reality of America’s bloodiest war. Connecting the metaphor to the real experiences of families whose households were split by conflicting opinions about the war, Amy Murrell Taylor provides a social and cultural history of the divided family in Civil War America.
In hundreds of border state households, brothers–and sisters–really did fight one another, while fathers and sons argued over secession and husbands and wives struggled with opposing national loyalties. Even enslaved men and women found themselves divided over how to respond to the war. Taylor studies letters, diaries, newspapers, and government documents to understand how families coped with the unprecedented intrusion of war into their private lives. Family divisions inflamed the national crisis while simultaneously embodying it on a small scale–something noticed by writers of popular fiction and political rhetoric, who drew explicit connections between the ordeal of divided families and that of the nation. Weaving together an analysis of this popular imagery with the experiences of real families, Taylor demonstrates how the effects of the Civil War went far beyond the battlefield to penetrate many facets of everyday life.
About the author
Amy Murrell Taylor is assistant professor of history at the State University of New York at Albany.
From the Introduction of the book:
Abraham Lincoln warned in 1858 that a “house divided against itself cannot stand.” His words, prophetic of the war that was to come three years later, continue to resonate today. That phrase—just one part of a much larger address—has become one of Lincoln’s most recognizable contributions to our American political vocabulary. But those words were not unique to the nineteenth-century president. The image of a “house divided,” or a family in conflict, was a timeless one that drew on a long tradition in literature and political thought. From the Bible to Greek tragedies to Shakespeare’s works to the political theories of John Locke, the family has offered a common language for understanding the complexities of human relationships. For Lincoln, the family provided a rhetorical shorthand, allowing him in just six words to convey what slavery might do to the relationship between Northern and Southern citizens.
Lincoln was not alone in describing a nation in family terms. Historians across the globe have uncovered numerous moments in which family language and metaphor figured centrally in the imagining of nations—particularly nations in conflict. We can see this in the French Revolution, Russian propaganda during World War I, and the Cold War, to name a few examples. The widespread use of the family image raises important questions about national identity—where it comes from, how it is defined, and how attachments to family and nation coexist and reinforce one another. In the United States we can trace the roots of the family metaphor at least to the Revolution, as colonists imagined themselves as children of a tyrannical British father. The Civil War only amplified this association of nation and family with an outpouring of speeches and stories that joined Lincoln in comparing the nation to a divided house. Even today, in movies, Web sites, children’s literature, and John Jakes novels, we continue to see the warring nation as if it were a quarreling family—or a war of “brother against brother.” It has become a cliché, easily recognizable and frequently invoked. Less understood is why this image has taken root in American culture.
This book offers the first sustained historical study of the divided family in the American Civil War. It takes what we often consider to be just rhetoric or common sense and finds within this image something more meaningful for those who lived through the war. It was meaningful, on a profound level, because it was real. Thousands of families did divide in what was widely considered to be a shocking dimension of the Civil War. Brothers did fight brothers; even Abraham Lincoln had relatives in the Confederate army. The image of the divided family therefore captured something tangible and authentic about the experience of war. But, on another level, it offered to Civil War Americans a framework for making sense of new and unprecedented problems. How could a country that was once one nation be carved into two? How could fellow citizens kill one another? Americans looked to the vocabulary of family—deference and authority, affection and conflict—for guidance in framing those difficult questions. This book follows the interplay of these two levels—experience and language—to provide a social and cultural history of the divided family in Civil War America.
We need not reach far into the vast library of Civil War history to find evidence of divided families. The idea that two brothers, or a father and son, or a husband and wife could assume opposing stances in the war has both captivated and perplexed scholars, writers of fiction, and filmmakers since the first shots were fired over Fort Sumter. Family division has become one of the “curiosities” of the war, filling out war narratives with colorful images and dramatic flourishes. Stories of divided families almost always appear in some form in anecdote books, a staple of Civil War popular culture, under titles such as “Love and Treason” and “‘Brother against Brother’ Was Real.” Biographies of some of the most prominent Civil War political and military leaders rarely fail to mention a personal connection to the enemy side. Many central figures of that era were split from a family member, including Confederates Robert E. Lee, Thomas J. “Stonewall” Jackson, and their Union-sympathizing sisters, and U.S. Senator John J. Crittenden and his Confederate son. Indeed, the more one looks for evidence of divided families in the war, the more numerous they appear.
Why do this activity?
This activity provides a context for practising finding a small difference between two two-digit numbers. It encourages children to record their results, notice patterns and make predictions.
The children may need reminding about what it means to find the difference between numbers, and some strategies for doing this.
A possible starting point for the activity is using the same idea but with single digit numbers, perhaps starting with 7 and 9 on the bottom first. This is an opportunity to model how to find the difference and also how to write this difference into the circle on top.
As a class, you could explore other possibilities for having a difference of 2 starting with just single-digit numbers (1 and 3, 2 and 4 …) and as you collect the results in, they could be organised to enable the children to notice the patterns.
As children attempt the main task, they may explore different ways of having a difference of 5 (for example) within the twenties (21 and 26, 22 and 27 …) or across different tens (21 and 26, 31 and 36, 43 and 48). Encourage the children to record their work and spot patterns as in the introduction. This sheet of blank pyramids may be useful.
Some children may move onto the extension task (below).
What is the difference between the numbers?
Do any other pairs of numbers have the same difference?
Can you spot a pattern?
This time there are three numbers at the bottom of the pyramid (all with the same number of tens), but otherwise the pyramid is constructed in exactly the same way.
3 is the difference between 24 and 21.
5 is the difference between 21 and 26.
2 is the difference between 3 and 5.
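For anyone who wants to check pupils' pyramids quickly, the construction above can be sketched in a few lines of Python. (This is my own illustration, not part of the original activity; the function name is made up, and the activity itself only needs pencil and paper.)

```python
def difference_pyramid(bottom):
    """Return the rows of a difference pyramid, from bottom to top.

    Each number in a row is the difference between the two
    numbers directly below it, as in the 24, 21, 26 example above.
    """
    rows = [list(bottom)]
    while len(rows[-1]) > 1:
        row = rows[-1]
        # pair each number with its right-hand neighbour and take the difference
        rows.append([abs(a - b) for a, b in zip(row, row[1:])])
    return rows

# The worked example from the text: bottom row 24, 21, 26
for row in difference_pyramid([24, 21, 26]):
    print(row)
# prints [24, 21, 26] then [3, 5] then [2]
```

Changing the bottom row makes it easy to generate and verify many pyramids when looking for patterns.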
Children could make some pyramids of their own. Here is a sheet of blank 'three tiered' pyramids.
Can learners find different pyramids with the same number at the top?
A 100 square or number line/track. |
Famine came to the land of Egypt and all around, just as Joseph had described. During the good years he put away food. When famine hit, the people of Egypt and the surrounding countries came to him to buy food. Jacob sent ten of his sons down to Egypt to buy food, also. This is how Joseph’s brothers, other than Benjamin, came to stand before him in Egypt. Joseph recognized his brothers, but they did not recognize him. Joseph called them spies and put them in jail. It was in jail that Joseph’s brothers talked about how they believed that what was happening to them was a consequence of how they had treated Joseph.
Discuss with your children:
- Do actions have consequences?
- What are some of the consequences for not acting nicely to someone?
- What are some of the consequences for acting nicely to someone?
- Think of something about your life that is not good. Make a plan how to change this misfortune or difficult aspect into something positive.
- Write the story of Joseph as you think it might have played out if Joseph’s brothers had just teased him instead of selling him when he came looking for them.
- Joseph went through quite an ordeal before becoming second to Pharaoh. Do good things often come from bad? Is it possible that good always comes from bad but we don’t realize it? Write an essay explaining what you think giving examples from real life.
- Find and execute a science experiment where you can actually see a chain reaction, (which is another way of saying the consequences of what took place.)
- We know that Joseph ended up being an important leader carrying out a very important plan. Make a list of as many leaders as you can that had a hard childhood. Pick one of these leaders and write about him or her.
- Joseph’s brothers said that they did not pay any attention when he was suffering.
- Make a list of places in your city where people are suffering.
- Make a list of places in the world where people are suffering.
- Think of something that you can do to alleviate even a tiny bit of the suffering in one place on one of your lists.
- Write a journal entry for Joseph after he overheard his brothers. |
for National Geographic News
An extensive archaeological excavation has unearthed a lost city that is believed to be one of the crowning jewels in the ancient civilization of the Maya.
For six years, researchers have deciphered hieroglyphics and scrutinized palaces in Guatemala's remote Piedras Negras, near the Mexican border. The study shows a city that began as an agricultural center as early as 400 B.C. and disintegrated under royal power struggles around 1,400 years later, around the same time the entire Mayan civilization began to collapse.
"We were able to basically write the biography of a city," said Stephen Houston, an archaeologist at Brigham Young University in Provo, Utah, and one of the lead researchers. "It's a persuasive narrative about how a city grew, how it thrived, and how it died."
Houston's research was partially funded with a grant from the National Geographic Society Committee for Research and Exploration.
The cause of the sudden demise of the great Maya society, which once ranged from Mexico's Yucatán peninsula to Honduras, is fiercely debated by Maya experts. This latest research suggests the culture collapsed not from drought, as some experts believe, but from the loss of the royal court.
"The city came to a catastrophic end in about 800 A.D. when the last known king of the site was taken captive by a neighboring kingdom," Houston said. "Once the king and his royal court are gone, the city's reason for existence no longer seems to be there."
Loggers who came to harvest tropical hardwood discovered Piedras Negras in the 1880s. In the 1930s, archaeologists from the University of Pennsylvania in Philadelphia began studying the site, but World War II interrupted the research, and for almost 60 years no archaeologist went back.
Continuing the excavations took on added urgency after the Mexican government announced plans to build a dam that would flood part of the site, which is situated along the Usumacinta River.
But before Houston and his team could return to Piedras Negras, they first had to convince Marxist guerrillas, who used the site as a hideout in Guatemala's long-running civil war, to leave. They also had to decide how to reach the site: a five-hour hike through the bush from Mexico or a nine-hour boat ride down some hair-raising Guatemalan rapids, no easy feat for a team bringing in 120 workers.
When the archaeologists finally began their work in 1997, they were amazed at how well-preserved the site was. Still, to the untrained eye, it didn't look like much. While some architecture is still standing, most is in ruins.
"Walking around, a person may not realize he's on a major archaeological site," said Houston.
Introduction to LIGO & Gravitational Waves
The Crab Nebula is the cloud of debris from a supernova (exploding star) observed by humans in 1054. At the center of this cloud is a pulsar (a kind of neutron star) that is the remains of the star that exploded. Both supernova explosions and pulsars are potential sources of gravitational waves.
Throughout history, humans have mainly relied on different forms of light to observe the universe.
Today, we are on the edge of a new frontier in astronomy: gravitational
wave astronomy. Gravitational waves carry information on the motions
of objects in the universe. Since the universe was transparent to gravitational waves moments after the Big Bang, long before it was transparent to light, gravitational waves will allow us to observe further back into the history of the universe than ever before.
And since gravitational waves are not absorbed or reflected by the matter
in the rest of the universe, we will be able to see them in the form in
which they were created. Moreover, we will effectively be able to “see
through” objects between Earth and the gravitational wave source.
Most importantly, gravitational waves hold the potential of the
unknown. Every time humans have opened new “eyes” on the universe, we
have discovered something unexpected that revolutionized how we saw the
universe and our place within it. Today, with the United
States’ gravitational wave detector (LIGO) and its international
partners, we are preparing to see the universe with a new set of eyes that do not
depend on light. |
Hydrogen can be used to power cars that don’t emit harmful pollutants like their traditional internal combustion counterparts, but the cost of producing hydrogen has kept the technology from being widely adopted. That may be about to change: researchers in Australia have discovered a way to produce hydrogen using sunlight and water vapor.
[Image Source: RMIT]
Hydrogen is the ideal energy
The process involves painting a chemical compound onto sun catching roofs and other surfaces. Research paper co-author Kourosh Kalantar-zadeh explains, “Hydrogen is the ideal energy on the planet, if you can turn water into hydrogen, you have an infinite source of energy.” The research was published in the journal ACS Nano and describes how the paint uses sunlight to break humidity down into hydrogen and oxygen.
The current method of creating hydrogen is electrolysis, an inefficient industrial-scale process of passing an electric current through water. The method requires large amounts of energy, which defeats its own purpose of producing a clean, green energy source.
Is this the end of lithium batteries?
The new findings could have a significant impact on the search to replace carbon-based fuels. Hydrogen can be used to fuel vehicles and even rockets when placed under compression. In fact, NASA is the world’s biggest user of hydrogen.
The green gas also has the capacity to store energy, making it a game changer for the development of sustainable fuel cells as well. This could spell the end of harmful lithium batteries. “Lithium ion batteries are the worst in terms of their carbon footprint,” said Torben Daeneke, a research fellow at RMIT University. “They have to be improved a lot to bring their carbon footprints down.”
Hydrogen producing paint can be used on any surface
The newly discovered paint is a combination of the common lubricant molybdenum sulfide and nanoparticles of titanium dioxide, the substance that gives toothpaste and sunscreen their chalky white color. The material absorbs both solar energy and moisture from the air, then splits the water into hydrogen and oxygen, allowing the hydrogen to be used as a fuel.
“The simple addition of the new material can convert a brick wall into energy harvesting and fuel production real estate,” explained lead researcher Dr. Torben Daeneke.
The paint won’t be commercially available for at least 5 years, but researchers believe once the process is improved it will be very cheap to produce and will suit a variety of climates. “Any place that has water vapor in the air, even remote areas far from water, can produce fuel.”
The paint could be used on any surface, from houses to walls to fences. Its ability to turn any structure into a clean energy plant is remarkable. Once commercialized, the paint will join a growing list of products that let consumers take control of their own power use and generation.
Featured Image Source: TriangleREVA via Flickr |
the highest ordinary division of time. Two years were known to, and apparently used by, the Hebrews.
+ A year of 360 days appears to have been in use in Noah's time.
+ The year used by the Hebrews from the time of the exodus may be said to have been then instituted, since a current month, Abib, on the 14th day of which the first Passover was kept, was then made the first month of the year. The essential characteristics of this year can be clearly determined, though we cannot fix those of any single year. It was essentially solar, for the offering of productions of the earth, first-fruits, harvest produce and ingathered fruits, was fixed to certain days of the year, two of which were in the periods of great feasts, the third itself a feast reckoned from one of the former days. But it is certain that the months were lunar, each commencing with a new moon. There must therefore have been some method of adjustment. The first point to be decided is how the commencement of each year was fixed. Probably the Hebrews determined their new year's day by the observation of heliacal or other star-risings or settings known to mark the right time of the solar year. It follows, from the determination of the proper new moon of the first month, whether by observation of a stellar phenomenon or of the forwardness of the crops, that the method of intercalation can only have been that in use after the captivity,--the addition of a thirteenth month whenever the twelfth ended too long before the equinox for the offering of the first-fruits to be made at the time fixed. The later Jews had two commencements of the year, whence it is commonly but inaccurately said that they had two years, the sacred year and the civil. We prefer to speak of the sacred and civil reckonings. The sacred reckoning was that instituted at the exodus, according to which the first month was Abib; by the civil reckoning the first month was the seventh. The interval between the two commencements was thus exactly half a year.
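The intercalation rule described above (add a thirteenth lunar month whenever twelve lunar months end too far ahead of the solar year) can be sketched numerically. The toy Python model below is my own illustration using modern mean values for the lunar month and solar year, not anything stated in the entry itself; it simply shows how the roughly eleven-day annual shortfall of a twelve-month lunar year accumulates until an extra month is needed.

```python
# Toy lunisolar intercalation sketch (assumed modern mean values,
# not figures from the dictionary entry).
LUNAR_MONTH = 29.53            # mean synodic month, in days
SOLAR_YEAR = 365.24            # mean tropical year, in days
COMMON_YEAR = 12 * LUNAR_MONTH # twelve lunar months, about 354.4 days

def leap_years(n_years):
    """Return the (1-based) years in which a 13th month is inserted.

    Each common lunar year runs about 10.9 days ahead of the sun;
    once the accumulated drift exceeds half a month, inserting a
    whole extra lunar month lands the new year nearer the equinox.
    """
    drift, leaps = 0.0, []
    for year in range(1, n_years + 1):
        drift += SOLAR_YEAR - COMMON_YEAR  # lunar year ends early again
        if drift > LUNAR_MONTH / 2:        # too early: intercalate
            leaps.append(year)
            drift -= LUNAR_MONTH           # extra month pushes year back
    return leaps

print(len(leap_years(19)))
# prints 7 — seven leap months in nineteen years, the familiar
# ratio of later fixed lunisolar calendars
```

This is only a mean-value caricature; as the entry says, the actual ancient decision rested on observation of the crops or the stars, not on arithmetic.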
It has been supposed that the institution at the time of the exodus was a change of commencement, not the introduction of a new year, and that thenceforward the year had two beginnings, respectively at about the vernal and the autumnal equinox. The year was divided into--
+ Seasons . Two seasons are mentioned in the Bible, "summer" and "winter." The former properly means the time of cutting fruits, the latter that, of gathering fruits; they are therefore originally rather summer and autumn than summer and winter. But that they signify ordinarily the two grand divisions of the year, the warm and cold seasons, is evident from their use for the whole year in the expression "summer and winter." (Psalms 74:17; Zechariah 14:18)
+ Months . [MONTHS]
+ Weeks . [WEEKS]
As part of General Washington’s ‘family’ during the American Revolution, Alexander Hamilton quickly impressed the commander not only with his bravery but with his diverse talents in writing, organization and management. The mere fact that Hamilton had gone from abandoned orphan to the staff of the leader in the fight for independence was quite remarkable. So skilled was he in communications and the management of military affairs that it is now recognized that many of the letters to the Continental Congress and other parties signed by Washington from 1777 to 1781 were actually written by Hamilton.
Throughout the Revolutionary War, representatives in Congress knew they needed a unifying document that would hold the nation together and focus their energy and resources to accomplish critical goals. The Articles of Confederation were first drafted in 1777 to achieve such a unified effort; they were finally ratified in 1781. Although the Articles provided a unifying theme that allowed the 13 colonies to survive in the struggle for independence, they were deficient in promoting an efficient, well functioning government for America. Well before the Articles were ratified, Alexander Hamilton recognized the many weaknesses and inadequacies within them. He repeatedly expressed the need for strong, centralized authority to successfully steer the course for the new nation.
Several great minds made significant contributions to its replacement, the U.S. Constitution. James Madison, George Mason, Richard Henry Lee, Edmund Randolph, Elbridge Gerry and Roger Sherman were among them. There were men at the Constitutional Convention in Philadelphia who understood law, business, the powers of government, the natural rights of man, and the horrors of war, but there was only one who had first-hand experience in all these areas. That man was Alexander Hamilton. While Madison was a brilliant legal scholar, Hamilton was the only person at the Constitutional Convention who had written eloquently about the struggle for independence and the need for a strong, centralized government and who had also put his life on the line fighting to establish that government. Hamilton’s extensive experience in the military, business, finance, interstate and international commerce, law, organization and management, along with his superb analytical, writing, and debating skills positioned him as a uniquely talented and innovative contributor in the deliberations.
When reviewing the essential participants in the process leading up to the creation of the Constitution, Hamilton and Madison stand among the most influential in framing the debate and in the final arguments for ratification. While James Madison is typically credited as the “Father of the Constitution”, Alexander Hamilton was its guiding beacon. In a letter to James Duane written in 1780, when he was just 23 and still serving as aide-de-camp to Washington, Hamilton displayed a remarkable grasp of the defects in the Articles and made numerous detailed proposals on how to resolve them. In this letter Hamilton shows that, seven years before the Constitution was created, he had a strong understanding of how governmental powers needed to be used, along with how to support those powers, well before the debate over the concept of “implied powers” had begun. “The confederation too gives the power of the purse too intirely (sic) to the state legislatures. It should provide perpetual funds in the disposal of Congress… for without certain revenues, a government can have no power; that power, which holds the purse strings absolutely, must rule.”
Even though America had won the Revolution, the United States was an economic basket case. The Articles had completely failed with respect to allowing the nation to pay its bills, issue debt, conduct commerce effectively and survive as an ongoing entity. The nation was about to dissolve in its first major financial crisis. Hamilton brought to the Convention a profound understanding that government is in many ways similar to a business. He knew that it must be run efficiently and effectively, or it will cease to exist as an ongoing entity. Among many other talents, Hamilton had a profound understanding of political economy: what makes human beings act in their own self-interest within a framework that is best for society.
We don’t have transcripts of the proceedings from Alexander Hamilton’s pen, but we can make an educated guess about his concentration in helping to craft the final document. Focusing less on states’ and individual rights, Hamilton argued strongly for the supremacy of centralized government authority to resolve the most critical issues of the day, including the gargantuan government debt and generating revenue for operations. Without this emphasis and resolution, the nation would collapse, regardless of a Constitution. George Washington named Alexander Hamilton to the Committee on Standing Orders and Rules and the group which drafted the final document.
Article I, Section 8 of the U.S. Constitution focuses first on the crucial issue of exactly how the government would support its operations, and Alexander Hamilton’s fingerprints appear to be all over it. Ten out of its initial 14 Clauses (71%) discuss issues which Hamilton argued for vehemently over more than six years before the delegates ever met. “The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence…” The importance of Section 8 cannot be overstated: without an ability to fund its numerous functions, there would be no government. After several months of raucous debate, and 85 essays supporting it, termed the Federalist Papers, written by Hamilton, James Madison and John Jay, the founding document for the new nation was finally ratified.
Two years after Hamilton and the other delegates concluded their meeting in Philadelphia, President George Washington asked Hamilton to become the first Secretary of the Treasury. Washington had first requested respected financier Robert Morris to take the job, but Morris demurred, recommending Hamilton instead. Hamilton took office on a momentous date, September 11th, the same day he had aided Washington at the largest land battle of the war. Hamilton served Washington for five years as Treasury Secretary, making bold moves to pay down the colonial debt, get the nation’s financial house in order, regulate commerce and establish a system to generate revenue allowing the government to function. While his moves were often criticized by Jefferson and others, Washington often agreed with Hamilton and moved forward with his recommendations. It is a testament to Hamilton’s abilities that our most revered President trusted him so deeply. Through his policies and initiatives, Hamilton is truly the architect of the American financial system.
After serving his nation for so many years, Hamilton was shot during a duel with Vice President Aaron Burr at Weehawken, New Jersey on July 11, 1804. He died the next day. Hamilton was only 47 years old, yet had accomplished more than almost any man of his generation.
I have an indirect connection to Alexander Hamilton. I live just down the road from the Benjamin Ring House- Washington’s headquarters during the Battle of the Brandywine- where Hamilton served the General during this important conflict. I often reflect upon the fire that burned so brightly in them 239 years ago in a cause that was so uncertain. I wonder if that fire even exists as I encounter kids who’ve graduated from high school who seem to know very little about their own heritage. Without Alexander Hamilton, we would not have the America we know today, a nation that has become the most successful and respected on Earth. As a long-time student of history, I am in awe of what Hamilton, Washington and other patriots accomplished more than two centuries ago. It is my fervent hope that people never forget what these men did… and that the fire will burn for generations to come. For all you have done for our country, I raise a salute. Thank you, Colonel Hamilton.
Gene Pisasale is an historian, author and lecturer based in Kennett Square, Pennsylvania. His nine books and historical lecture series focus on the heritage of the Mid-Atlantic region. Gene’s current project is as a “Living Biographer” of Alexander Hamilton, dressed in full Continental Army officer’s uniform. His website is www.GenePisasale.com. He can be reached at Gene@GenePisasale.com. |
Chapter 15: The Roman Empire. Section 1: The Rule of Augustus (p. 233-234).
Lesson Essential Question 1: How did Augustus rule the Roman Empire?
A. What did Octavian do in 27 B.C.?
Define emperor: An emperor is an absolute ruler of an empire.
B. What did Octavian become?
Augustus held the offices of consul, tribune, high priest, and senator all at the same time.
Augustus knew that most Romans would not accept one-person ruling unless it took the form of a republic.
Augustus wanted the boundaries of the empire to be defendable, so he rounded out the empire to natural boundaries.
Lesson Essential Question 2: What happened during Pax Romana?
It was the 200 years of Roman peace brought by Augustus. While there were some revolts and problems, for the most part, Rome and its people prospered.
Lesson Essential Question 3 – What was daily life like for the Romans?
During the early years, about 1 million people lived in Rome.
P. What were the problems facing the city of Rome?
One problem was too little housing.
The air was polluted.
There was crime in the streets.
The cost of living was high.
Many Romans could not find jobs.
They had to pay taxes on almost everything.
1. The Father
2. Cousins were expected to help one another politically.
The Romans gambled with dice at home.
The games included circuses, chariot races, and gladiatorial games.
The Circus Maximus was an oval arena where chariot races were held. It could seat more than 200,000 people.
Lesson Essential Question 4: Why did the Roman Empire decline?
2. The second reason for Rome’s downfall was economic.
3. The third major reason Rome fell centered on foreign enemies.
Lesson Essential Question 5 – What efforts were made to save the Roman Empire?
Diocletian was the son of a freedman.
Diocletian made rule by divine right the policy for the emperor: a policy in which an emperor’s powers and right to rule did not come from the people but from the gods.
Constantine moved the capital from a dying Rome east to a new city called Constantinople in present-day Turkey. Constantinople was wealthier than Rome.
One reason the Germans were able to defeat the Romans was an invention the Germans borrowed from the Huns: the iron stirrup.
"Where we do well know that all our causes will be impartially heard and
equally justice administered to all men," as stated by Nathaniel Bacon. In
1676 an uprising known as Bacon's Rebellion occurred in Virginia. The immediate
cause of this revolt was the dissension between the planters and the Indians.
Because Sir William Berkeley, the Governor of Virginia had willingly denied
support to the farmers, Bacon assumed leadership of an unauthorized expedition
against the Indians. When Bacon learned that Governor Berkeley was raising a
force against him, he turned away from the Indians to fight with Berkeley. This
had now become a serious problem for the governor. When news of this revolt had
reached King Charles II, it alarmed him so that he dispatched eleven hundred
troops to Virginia, recalled his governor, and appointed a commission to
determine the causes of the dissatisfaction. Bacon's Rebellion is considered to
be the most important event in the establishment of democracy in colonial
America because the right to vote and social equality were denied to the farmers
by the local government.
The right to vote is a small but crucial part of the democracy. During
the first half of the 17th century the farmers on the plantations in Virginia
were not able to exercise their right to vote. The only people that were able
to vote during this time were the wealthy men who owned land. Overall the
colonists had not been treated fairly. They had been over taxed and denied
their voting rights. To them voting meant that the person they elected was the
person they felt was responsible enough to motivate them and support them.
Unfortunately Governor Sir William Berkeley was not living up to those standards.
Berkeley did not care about the farmers. It was obvious that the only thing he
cared about was making money. The event that sparked the rebellion occurred
when the Indians attacked the farmers. Normally these farmers were expecting to
receive help from the governor. They became irritated when the governor did not
support them. Through the eyes of the freemen this was seen as a big mistake.
Because the governor did not give them the support, they had to take matters
into their own hands. After defeating the Indians, the unofficially elected
Nathaniel Bacon took charge. When he led his men into town to form an assembly,
it would be the first assembly in fifteen years. After the long struggle and
hard fought battle these freemen received their gift. They were finally able to
repeal a law that denied their right to vote. They were now considered legal voters.
Another important aspect of democracy is social equality. During the
17th century in Virginia, people were either wealthy or poor. The proprietors
held the wealth, while the plantation workers in Virginia were poor. In
addition the farmers had no rights. The freemen were taxed and there was
nothing they could do about it. Taxation did not hurt the rich because
they could afford it, but when the lower class was taxed, as much as
three-quarters of their earnings could be taken: half of what they earned
went to the proprietor, and once the King of England took another quarter,
they were left with very little. The problem with a society that
has a wide range of classes is that certain classes have privileges that other
classes do not. For example, the rich were able to communicate with
Berkeley. The farmers accused the rich men of controlling the whole colony for
their selfish purposes. Rumor had it that Berkeley and his wealthy friends were
interested in trade with the Indians. The frontiersmen could only take so much.
Tired of being the poorest people in the colony, they ultimately rebelled.
Bacon and his followers fought for their right to vote and denounced
social inequality in Virginia by taking matters into their own hands. Bacon's
Rebellion has become the most significant episode leading toward democracy in
colonial America. The consequences of this revolt were acknowledged by the
English as well as the other colonies. Besides influencing governmental
procedures in Virginia, recent research suggests it might have affected English
domestic and foreign policies as well. One researcher claimed that the
concern for representative institutions and the anti-imperialist feeling that
existed in Virginia then was expressed not by the rebels, but by those
suppressing the rebellion, and that such democratic attitudes increased
significantly after, rather than immediately before or during, Bacon's Rebellion.
Rare Disease Day is observed on the last day of February every year to raise awareness of the thousands of diseases that are unknown to most people.
It is primarily aimed at raising awareness among the general public, policy makers and research professionals about these diseases.
These are also known as orphan diseases.
What is a rare disease:
- The definition of a rare disease varies from one country to another. In Europe, a disease is considered rare when it affects fewer than 1 in 2,000 people, whereas in the US a disease is considered rare when it affects fewer than 200,000 Americans
- A common aspect across definitions, however, is that a rare disease affects only a small section of the population. This essentially means that not much is known about the disease.
- Rare diseases comprise a wide range of disorders and diverse symptoms
- These include lymphocytic leukemia, cystic fibrosis (in which obstruction of the lungs by mucus takes place), Duchenne muscular dystrophy (a case of degeneration of muscles), microcephaly (an abnormally small head) and Tourette’s syndrome (a neurological disorder) to name a few
- 80 percent of these diseases may be caused due to genetic factors
Challenges that arise:
- As the disease affects only a section of the population, limited research is carried out to find treatment for these diseases
- Lack of adequate research leads to incomplete knowledge about the cause and symptoms of these diseases
- This leads to delays in diagnosis, and even misdiagnosis in certain cases, as the symptoms are often similar to those of other diseases
- Appropriate care and support for patients suffering from these diseases do not exist
Research efforts for finding treatments and timely diagnosis for these rare diseases should be coordinated across the world.
Researchers across the globe should share patient data so that patients everywhere benefit from the pooling of resources across borders. This is especially significant given the rarity of these diseases.
Rare disease patient communities should be formed to spread awareness about these diseases.
Coordinated research will enhance understanding about these diseases, can lead to development of new innovative treatments and improve life of patients and their families. |
A comprehensive illustrated guide to Colonial America. Containing facts and information about the founding and establishment of the 13 Colonies and the struggles of the early colonists.
Learn about the history of Colonial America:
The founding of the 13 Colonies in Colonial America
Life in New England, Middle and Southern Colonies
Religion in Colonial America including the Pilgrims and the Puritans
Trade in Colonial America
Slavery in the American colonies
The Government in Colonial America and the tax laws that led to insurrection, rebellion and revolution
American History of the first 13 Colonies and how they became the United States
The Colonization of America and the fascinating events that led colonists and patriots down the Road to Revolution
Colonial America Learn about Colonial America with simple, clear and easy to read articles that all have interesting illustrations and pictures. Read about the relationship between the monarchy and government of Great Britain. The Triangular Trade that was established across the Atlantic and the laws that governed trade in Colonial America including the Navigation Acts, the Sugar Act, the Townshend Acts and the Stamp Act which led to insurrection and rebellion in Colonial America culminating in the American Revolution and the Declaration of Independence. If you want to know what happened next, check out American Historama - United States History.
Early American History - The History of Colonial America Interesting facts and information about the early American History and the Discovery of America and the history of Colonial America from 1600 - 1799 including the Mayflower, Pilgrim Fathers, 13 Colonies, Indentured Servants, Black American Slavery in Colonial America, Colonial taxes, Sons of Liberty & the Boston Tea Party. Facts and info about the British colonies in America for kids. Just click on a link for access to any of the following topics which all relate to early American History from 1607 - 1776 - the history of Colonial America followed by the Battles of the American Revolutionary War that ended in 1783.
The History of Colonial America The History of Colonial America led directly, or indirectly, to many of the early Indian wars. European political rivalries and military conflicts spilled over the Atlantic and into Colonial America. English laws were introduced to Colonial America and the profits made from trade in the natural resources of America contributed to the wealth of England, France, Spain and the Netherlands. Tensions rose between Native American Indians and colonists due to the massive influx of English, Irish, German, and Scottish settlers. The early Indian Wars were fuelled by the newcomers to America, sparking new conflicts between local Indian tribes and colonists for control of the land. Learn about the most important events in the history of Colonial America.
Colonial America Time Period Definition - American History 1607-1776 Colonial America Time Period Definition: The Colonial America Time Period of American History covers the start of permanent English settlement, with the founding of Jamestown, Virginia, in 1607, and the establishment of the thirteen colonies until they declared themselves independent on July 4, 1776 with the Declaration of Independence. The Declaration of Independence was a statement of the Second Continental Congress that defined the colonists' rights, outlined their complaints against the British government, and declared the colonies' independence, which ended the Colonial America Time Period and this period of early American History.
Colonial America - English or British? Colonial America - There is some confusion as to whether the early settlers in Colonial America should be referred to as English or British. To clarify, the colonists and settlers in Colonial America are referred to as English up to 1707, at which time the union between England and Scotland created Great Britain, after which the term British is used for this era of Colonial America.
Colonial America Time Period The Colonial America Time Period covers the time in American history from 1607 to 1776. During this period the settlers arrived from Europe looking for religious freedom, land and the opportunity for wealth. The newcomers were governed by the laws of the sovereign states of Europe, which inevitably led to dissension and rebellion in Colonial America. We have compiled a comprehensive Colonial America Time Period History Timeline which details, in chronological order, the laws that were enforced, the rebellions they sparked and the establishment of the 13 colonies. The Colonial America Time Period covers 169 years. The Colonial America Time Period History Timeline is the fastest and easiest way to gain a full overview of the major events and people involved in the Colonial America Time Period.
Colonial America - The Pilgrim Fathers Colonial America - The Pilgrim Fathers is a term commonly applied to the earliest immigrants and settlers of Colonial America. The Pilgrim Fathers, or Pilgrims, were members of a Puritan Separatist sect who set sail in the Mayflower bound for the Americas to establish a colony where they could enjoy religious freedom. The Pilgrim Fathers, or Pilgrims, founded the colony of Plymouth in New England in 1620, located in present-day Plymouth, Massachusetts, United States. The Mayflower Compact was a legal document written by the Pilgrims to specify basic laws and social rules for their new colony in Colonial America.
Colonial America - Early Colonial America The Early Colonial America Time Period starts with the arrival of the Pilgrims on the Mayflower and extends to 1629, on the eve of the Great Migration, the mass migration of thousands of English people to the Americas that took place between 1630 and 1640. The Early Colonial America Time Period covers the founding and establishment of the first English colonies in Colonial America, their struggle to survive and their conflicts with American Native Indians.
History of Colonial America - The 13 Colonies, Map and 13 Colonies Timeline The History and Colonial America Time Period covers the establishment of the 13 Colonies. The English settlement of the 13 Colonies were located on the Atlantic coast of North America and founded between 1607 (Virginia) and 1733 (Georgia). The 13 colonies were Delaware, Pennsylvania, Massachusetts Bay Colony (which included Maine), New Jersey, Georgia, Connecticut, Maryland, South Carolina, New Hampshire, Virginia, New York, North Carolina, and Rhode Island and Providence Plantations. The 13 English colonies were divided into three geographic areas consisting of the New England colonies, the Middle colonies, and the Southern colonies. Each of the 13 Colonies had specific economic, social, and political developments that were unique to the regions in Colonial America. The articles on this subject include a helpful 13 Colonies Timeline.
Colonial America - The Early Indian Wars Colonial America - The early Indian Wars were fuelled by the newcomers to Colonial America, sparking new conflicts and wars between local Indian tribes and colonists for control of the wealth of the land in Colonial America. Tensions rose between Native American Indians and colonists due to the influx of English, Irish, Scottish, French, Dutch and German settlers.
Colonial America - Indentured Servants Colonial America - The system of Indentured servitude was introduced in Colonial America to meet the growing demand for cheap labor in the colonies. Indentured servants were contracted to work for a fixed period of time usually from five to seven years in exchange for transportation and a job in Colonial America. The Indentured servants were provided with basic necessities such as food, clothing and lodging during their term of service but they were not paid. Unlike slaves, Indentured servants could look forward to a release from bondage. The first Indentured servants in Colonial America were introduced by the Virginia Company in 1619.
Colonial America - Slavery in Colonial America The subject of Slavery in Colonial America starts in England in 1562, when England joined the Slave Trade. John Hawkins was the first Englishman to take part in the slave trade, and many followed his lead due to the huge profit that could be gained. Slavery was common in the sugar plantations of the Caribbean. At first there was no slavery in Colonial America. The Indentured Servants system was introduced, but these people were given their freedom at the end of their service. The colonists then started to produce tobacco in the colonies, which was hugely profitable. To increase profits further, slavery was introduced in Colonial America to provide cheap labor on the tobacco plantations. Twenty black African slaves were brought to Jamestown, Virginia in 1619. By 1790, slavery in Colonial America had increased to the point that nearly 1 in 4 of the population were black African slaves.
African slaves working on a tobacco plantation in 1670
Colonial America - The Early Rebellions Colonial America - The establishment and settlement of Colonial America led to many of the early Indian wars and some early rebellions. Political rivalries and military conflicts in Europe spilled across the Atlantic and into Colonial America. European laws were introduced to Colonial America and the lucrative income from the natural resources of Colonial America contributed to the wealth of England. There were early rebellions in Colonial America, including the famous rebellion of Nathaniel Bacon (1675-1676), referred to as Bacon's Rebellion, which resulted in the Declaration of the People.
Colonial America - Picture of George Washington during the French Indian Wars
Colonial America - The Proclamation of 1763 The Colonial America Time Period moved forward to see the end of the French and Indian War, victory for the British in Colonial America, and the Proclamation of 1763, which established the controversial Proclamation Line along the Appalachian Mountains, safeguarding Indian lands and repaying the Native American Indians who had helped the British during the war.
Colonial America - King Philip's War Colonial America - King Philip's War (June 1675 to August 1676) was a bitter and bloody conflict between the Algonquian-speaking Indian tribes and the English settlers of the New England colonies. More than half of New England's 90 towns were assaulted by Native Indians. King Philip's War ended in victory for the colonists, but almost one out of every twenty people in the region, both whites and Indians, was killed. Over 600 colonists and 3,000 Indians died during King Philip's War, and Indian captives were sold into slavery in Colonial America.
Colonial America - Taxes imposed by Great Britain Colonial America - The British colonists in Colonial America were becoming increasingly incensed by the demands made and taxes required by Great Britain. In 1764 the Sugar Act, a law passed by the British Parliament, set a tax on sugar and molasses imported into the colonies. The Stamp Act of 1765 was a direct tax imposed by the British Parliament specifically on the colonies of British America. This act placed a stamp duty, or tax, on legal papers, newspapers, pamphlets, and even playing cards in Colonial America. Vehement opposition by the colonies resulted in the repeal of the act in 1766. This was followed in 1767 by the Townshend Acts, a series of laws passed by the British Parliament placing new duties, or taxes, on glass, lead, paints, paper and tea imported by the colonists. The reaction from Colonial America was so intense that Great Britain eventually repealed all the taxes except the one on tea.
Colonial America - The Sons of Liberty Colonial America - In 1765 the Sons of Liberty was formed. The Sons of Liberty was an organization (a secret society) formed by American Patriots who opposed British measures against the colonists and agitated for resistance. The Sons of Liberty was initially formed to protest against the Stamp Act, but the patriots continued to speak, write and demonstrate against British measures in Colonial America until the Declaration of Independence in 1776.
Colonial America - The Boston Tea Party Colonial America - In 1773 the Tea Act, a law passed by the British Parliament, allowed the British East India Company to sell its low-cost tea directly to the colonies, which undermined the colonial tea merchants. The anger regarding the Tea Act led to the infamous event called the Boston Tea Party. The Boston Tea Party occurred on December 16, 1773, when Massachusetts patriots, dressed as Mohawk Indians, protested against the British Tea Act.
Colonial America - The Speech of Patrick Henry Colonial America - On March 23, 1775 Patrick Henry delivered his famous speech in St. John's Church in Richmond, Virginia. Those who heard the speech were motivated to take up the cry of "Give Me Liberty or Give Me Death!". The famous speech helped to convince the Virginia House of Burgesses to pass a resolution committing Virginia troops to the Revolutionary War.
Colonial America - The Declaration of Independence
Colonial America - The Declaration of Independence Colonial America - The Declaration of Independence declared the colonies to be independent of the British Crown. The Declaration of Independence was signed on July 4, 1776, by the congressional representatives of the Thirteen Colonies, including Thomas Jefferson, John Adams, and Benjamin Franklin, who were instrumental in bringing about this major event in Colonial America.
Colonial America By the 1770s, more than 2 million people lived and worked in Great Britain's 13 colonies of Colonial America.
The history of the Colonial America for kids
History timeline of Colonial America
Interesting Facts and information on the Colonial America for kids and schools
Summary, dates and history of the Colonial America for kids
Colonial America - Pictures and Videos of Colonial America Discover the key years, famous people and events of Colonial America, together with the causes and effects of its wars, conflicts and battles. Pictures have been included wherever possible showing the battlefields, clothing and weapons of Colonial America and the leaders who fought in them. We have included pictures and videos to accompany the main topic of this section. The videos enable fast access to the images, paintings and pictures, together with the information and the many facts featured on this subject.
For larger amplitudes, the amplitude does affect the period of the pendulum, with a larger amplitude leading to a longer period. However, for small amplitudes (typically up to a few degrees), the amplitude has no measurable effect on the period of a pendulum.
In a simple pendulum, which can be modeled as a point mass at the end of a string of negligible mass and a given length, the amplitude is normally only a few degrees. When the amplitude is this small, it does not affect the period of the pendulum. The period simply equals two times pi times the square root of the length of the pendulum divided by the acceleration due to gravity (9.81 meters per second per second).
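The small-angle formula above can be sketched in a few lines of Python (the function name is my own, and the standard value g = 9.81 m/s² is assumed):

```python
import math

def small_angle_period(length_m, g=9.81):
    """Period of a simple pendulum in the small-angle approximation:
    T = 2 * pi * sqrt(L / g). Valid only for swings of a few degrees."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A 1-metre pendulum has a period of about 2 seconds,
# regardless of its (small) amplitude.
print(round(small_angle_period(1.0), 3))  # prints 2.006
```

Note that the amplitude does not appear in the formula at all, which is exactly the point: for small swings the period depends only on length and gravity.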
For a real pendulum, however, the amplitude is larger and does affect the period. When the amplitude exceeds a few degrees, the period of the pendulum is given by an elliptic integral, which can be approximated by an infinite series. The series includes terms with the amplitude squared, the amplitude to the fourth power, the amplitude to the sixth power, and so on. Therefore, the larger the amplitude, the more non-negligible terms appear in the series. As the amplitude of the pendulum increases, the period increases.
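To make the amplitude dependence concrete, the sketch below (function names are my own, with g = 9.81 m/s² assumed) compares the small-angle period with the first terms of the amplitude series, T ≈ T0 · (1 + θ0²/16 + 11·θ0⁴/3072), and with the exact period, T = 4·sqrt(L/g)·K(sin(θ0/2)), where K is the complete elliptic integral of the first kind, computed here via the arithmetic-geometric mean:

```python
import math

def small_angle_period(length_m, g=9.81):
    # T0 = 2 * pi * sqrt(L / g), amplitude-independent
    return 2 * math.pi * math.sqrt(length_m / g)

def series_period(length_m, theta0, g=9.81):
    """First three terms of the amplitude series for the period;
    theta0 is the amplitude in radians."""
    t0 = small_angle_period(length_m, g)
    return t0 * (1 + theta0**2 / 16 + 11 * theta0**4 / 3072)

def exact_period(length_m, theta0, g=9.81):
    """Exact period 4*sqrt(L/g)*K(k) with k = sin(theta0/2), using
    K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))."""
    k = math.sin(theta0 / 2)
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:          # AGM iteration converges quadratically
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return 4 * math.sqrt(length_m / g) * (math.pi / (2 * a))

theta = math.radians(30)  # a "large" amplitude where the correction matters
# The corrected period exceeds the small-angle value and grows with amplitude.
print(exact_period(1.0, theta) - small_angle_period(1.0))
```

For a 30-degree swing the series and the exact elliptic-integral result agree to better than a millisecond, while both exceed the small-angle period by a few percent, illustrating why the small-angle approximation is safe only for swings of a few degrees.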
Some in secular circles would suggest that history has come full circle. To them, the human rights movement is the product of the Enlightenment and, as such, part of a determined attempt at reducing the power of religion over state and society. Today, however, it is resurgent religious movements that are challenging the place of human rights.
In some countries, in France in particular, the history of the human rights movement is intimately linked to laïcité (secularism), to the rolling back of the Catholic Church and the separation between church and state. The Dreyfus affair at the end of the 19th century was the symbol of this clash and the founding moment for the French League of Human Rights (Ligue des Droits de l'Homme, LDH). The controversy around the role of the official Church in supporting Pétainism[8] during the Second World War deepened this mutual suspicion. In Spain, the ideological marriage between the Catholic Church and the Franco dictatorship generally led, until the early sixties, to a chasm between the democratic opposition and Catholicism.
History, however, also tells another story. In other countries religion was the prime mover behind campaigns for human rights. The role of U.S. and English Protestant churches in the anti-slavery campaigns, in the Congo reform movement,[9] and in solidarity with Armenian victims in the late days of the Ottoman Empire belong to the best chapters of the history of the human rights movement.[10] The social teachings of the Catholic Church in the late 19th century also created a context that allowed committed Christians to press actively for social justice and contributed to the development of strong labor unions and mutual help associations that fought for social and economic rights.
In South Asia, Hinduism was the inspiration for Mahatma Gandhi's long march for the liberation of India. Since the occupation of Tibet by China in 1949-51, a religious figure, the Dalai Lama, has been guiding the Tibetans' struggle for freedom, pushing for a democratic, self-governing Tibet in association with China.
In the 1950s and 1960s the human rights movement grew in part thanks to the involvement of leading religious groups and individuals. Although the Church took a cautious approach, Catholic intellectuals (first among them Catholic writer par excellence François Mauriac), journalists, and activists played a prophetic role in the fight against the use of torture and disappearances by the French army in the Algerian war of independence, invoking their faith to combat what they considered brutal attacks against human dignity.
The civil rights movement in the United States was powerfully inspired by religious figures, among whom Martin Luther King, Jr., stands as an icon, and was in many cases supported by mainstream Christian and Jewish denominations.
After the 1964 military coup in Brazil a significant part of the Catholic Church, centered around Bishop Dom Helder Camara, inspired by the teachings of the Second Vatican Council (1962-1965) and of mainstream Protestant denominations, became a vibrant defender of human rights. Political coups in Bolivia, Chile, and Uruguay in the 1970s and civil wars in Central America in the 1980s often placed the official Church, or at least some of its most powerful voices, on the side of the human rights movement. The Servicio Paz y Justicia founded in 1974 in Argentina by 1980 Nobel Peace Prize laureate Adolfo Perez Esquivel, the Vicaria de Solidaridad in Chile, and the Tutela Legal in El Salvador were focal points of the human rights struggle.
San Salvador Archbishop Oscar Arnulfo Romero's last sermon in March 1980, with his passionate plea to the army and National Guard to disobey an immoral law ("Brothers, you come from your own people. You are killing your own brother peasants when any human order to kill must be subordinate to the law of God which says, 'Thou shalt not kill'"), stands out as one of the most powerful documents of the Latin American human rights struggle.
In the 1980s in the Philippines, the Catholic Church was one of the major actors in the overthrow of the Marcos dictatorship. In Eastern Europe, particularly in Poland with its strong Catholic Church and in East Germany with the Lutheran Church's support of independent pacifists and dissidents, religious organizations joined in the fight against state authoritarianism and repression. In the 1970s, in the wake of the ratification of the Helsinki Accords,[11] Jewish organizations and individuals in particular played a decisive role in Eastern Europe and the USSR in the defense of dissidents and fundamental freedoms of expression, belief, and movement.[12]
In the 1980s and 1990s, in South Africa, Jews, Christians, and Muslims fought apartheid, in alliance with secular or even Marxist-inspired organizations such as the South African Communist Party and the African National Congress.
During all these decades of struggle and speaking truth to power, the international human rights movement was also strongly inspired by religious figures, like Joe Eldridge of the Methodist church, director of the Washington Office on Latin America (WOLA): "My father always said that we were children of God," he confided. "My motivation fundamentally emerges from a religious perspective. Having been given life, I believe that we are called to do things that edify life."[13]
Maréchal Pétain, a former First World War hero, ruled France during the German occupation. His government, based on an ultra-conservative Catholic ideology, collaborated with the enemy and in the deportation of Jews. Although many Catholics took part in the Résistance and the Catholic hierarchy protested the deportations, especially after the July 16 round-up of 12,884 Jews at the Vélodrome d'Hiver, the image of the Church was tainted in many liberal circles.
Adam Hochschild, King Leopold's Ghost (Boston: Houghton Mifflin Co., 1998).
Suzanne Moranian, "The Armenian Genocide and American Missionary Relief Efforts," in Jay Winter, ed., America and the Armenian Genocide of 1915 (Cambridge, U.K.: Cambridge University Press, 2004), pp. 185-213.
The Helsinki Accords were the result of the final act of the Conference on Security and Cooperation in Europe held in Helsinki (Finland) in 1975 between the NATO Countries and the Soviet bloc. The civil rights section of the agreement, the so-called third basket, committed the participating states to respect human rights and fundamental freedoms.
See Jeri Laber, The Courage of Strangers: Coming of Age with the Human Rights Movement (New York: Public Affairs, 2002).
Margaret E. Keck and Kathryn Sikkink, Activists beyond Borders (Ithaca, New York: Cornell University Press, 1998), p. 91. |
Please click here to see full details of the inauguration of this important memorial commemorating the abolition of the slave trade in France, at the mouth of the Loire River in the city of Nantes. Nantes was by far the leading 18th- and 19th-century slave port of France: 45% of French slaving vessels sailed from Nantes. The port's merchant ships were responsible for the transportation of 500,000 enslaved Africans to French colonies in the New World. Bordeaux, La Rochelle and Le Havre were the other three leading French slaving ports, each responsible for about 11% of the trade.
The International Slavery Museum at Liverpool has this to say about why Nantes became so dominant in the French trans-Atlantic slave trade:“Although Nantes was some 50 miles from the sea, its position at the confluence of the Loire and the Erdre rivers gave it access to an important hinterland, including Paris. But it was also the main import port for the French Indies Company which gave it easy access to Indian cloths. Good international trading connections were needed for other items – it got guns from England and beads and cowries came through Amsterdam. But Nantes had one significant advantage over Liverpool: it had its own textile industry producing fine quality printed cloths which were much in demand in West Africa. These indiennes, produced from 1759 onwards, became an important local industry. By 1780 there were a dozen factories, employing 4,500 workers, all producing cloth almost exclusively for the trade to Africa. Nantes also produced alcohol – the local eaux de vie – as well as swords and knives.” |
In 1981, James Hansen was the Director of the NASA Goddard Institute for Space Studies. He was also the lead author of a seminal paper published in the prestigious journal Science entitled “Climate Impact of Increasing Atmospheric Carbon Dioxide“.
In the paper, Hansen and his colleagues reported (and illustrated with multiple graphs) the widely accepted 100-year (~1880-1980) record of hemispheric and global temperature changes. At the time, most climate scientists were reporting that the Northern Hemisphere's (NH) temperatures had undergone a rapid warming of between +0.8 and +1.0°C between the 1880s and 1940. Then, after 1940 and through 1970, NH temperatures were reported to have dropped by about -0.5 to -0.6°C, a decades-long cooling trend which at the time had fomented widespread debate about global cooling in the scientific community.
Like their peers, NASA's Hansen and his co-authors indicated that the Northern Hemisphere had warmed by ~0.8°C between the 1880s and 1940, and then cooled by ~0.5°C between 1940 and 1970.
A graph of “observed temperature” for the Northern Hemisphere was included in the paper to illustrate these climatic trends.
Today, NASA’s Goddard Institute for Space Studies is directed by Dr. Gavin Schmidt, a trained mathematician. (James Hansen retired from the position in 2013.) Schmidt’s version of the Northern Hemisphere’s temperature record for 1880-1980 looks vastly different than what his predecessor had illustrated in 1981. Instead of leaving the historically observed temperatures alone, NASA has invented new ways to portray the pre-1981 temperature history of the Northern Hemisphere. |
Bald eagles are a majestic and beautiful species of bird, but not many people know much about them beyond their status as a symbol of national pride. Bald eagles have many unique adaptations for survival and interesting physical characteristics, but do you know how to tell male and female eagles apart? Differentiating between male and female bald eagles can be difficult because neither has any immediately obvious telltale gender markings. There are, however, multiple telling, albeit subtle, differences.
According to the University of Saskatchewan, female bald eagles are roughly a third larger than their male counterparts. Females tend to have a body length of 35 to 37 inches, while males tend to be 30 to 34 inches long.
Another method recommended by Journey North, a bald eagle education initiative, is to look at the length of the talons, or toe claws. Both male and female eagles have four talons: three in front and one in back. The back-facing talon, or hallux, tends to be longer on female bald eagles.
The width from the top of the beak to the eagle's chin is the depth of the beak. Female bald eagles generally have deeper beaks than male eagles, according to Journey North.
Gender may be determined by comparing the pitch of two eagles' calls, Bald Eagle Info explains. Females tend to have a lower pitch than males and can be identified this way.
Gender Identification in Young Eagles
Dr. Gary Bortolotti, in his research with the University of Saskatchewan, determined that the difference between male and female bald eagle nestlings wasn't readily apparent until around 20 days of age. He found that the most accurate way to differentiate between genders was to measure the length of the eagles' foot pads and the depths of their beaks.
According to the American Heritage® Dictionary of the English Language (© 2000), integration means "the bringing of people of different racial or ethnic groups into unrestricted and equal association, as in society or an organization; desegregation."
Up until relatively recently in our history, integration strongly implied that persons of different racial or ethnic backgrounds who choose to reside in the U.S. would elect to adapt to U.S. cultural norms and customs, including learning our English language.
However, today integration has become a dirty word. Today, civil rights leaders have abandoned the principle of integration in favor of divisive, separatist policies which specifically grant unequal, preferential treatment to their constituencies (black, Hispanic, Muslim, etc.). These race merchants encourage people not to integrate into our society, not to adopt and respect our cultural values, but rather to remain apart from our U.S. culture in their own, voluntarily segregated enclaves.
A closely related term is melting pot which referred to the process whereby immigrants of Italian, Irish, German, Jewish, French and a host of other cultures actually assimilated into our U.S. culture, learned our language, and played a vital role in making the U.S. the strongest economy in the world.
Civil rights leaders today almost never use the term integration. And they actively detest the terms melting pot and assimilation. Instead, they have invented a litany of anti-integration terms including Multiculturalism, Disparate Impact, Diversity, and Race-Sensitive.
The Proclamation of 1763 was passed for one specific reason. After the French and Indian War ended, the British gained huge amounts of land east of the Mississippi River from France. The colonists were very excited about this opportunity to get land: there was plenty of it available, and it wasn’t expensive. Land ownership was very important to the colonists; sometimes a person had to own land in order to be able to vote. However, the British were very concerned about attacks from the Native Americans, who had joined with France in the fight against the British and did not want the British or the British colonists to move onto these lands. The threat of violence was very high. Thus, the British passed this law to protect the colonists from attack by the Native Americans. The colonists didn’t like the law because it restricted their opportunities to get land. This was the first of many laws that put the colonies on a collision course with Great Britain, eventually leading to the Revolutionary War.
The Proclamation of 1763 was passed by King George III, prohibiting the settlement of colonists past the Appalachian Mountains. Its main purpose was to prevent conflict and war between Native Americans and the colonists.
Virtual twins are children who are raised together but have no genetic relationship. They are also very close to each other in age, with most researchers defining virtual twins as children less than nine months apart in age. They can come into a family in a wide variety of ways, and they are a topic of interest to psychologists and other researchers, as they can be used to delve into the relationship between environment and genetics.
As scientific subjects, virtual twins provide a rich pool of material for researchers tackling the nature-versus-nurture question. Raised together essentially from birth, or at least since infancy, virtual twins may be genetic strangers, but they share an environment from an early point in life.
In a classic example of virtual twinning, a couple makes arrangements to adopt after struggling to have children, and then becomes pregnant. Rather than backing out of the adoption, the parents may choose to adopt as well as giving birth, giving their birth-child an adopted sibling. Virtual twins can also be created through adoption, with parents adopting two children of different parentage. Many researchers like to focus specifically on virtual twins adopted at a very young age, rather than older children adopted together.
For people interested in the nature vs. nurture argument, virtual twins can provide some interesting food for thought. Researchers who believe that environment plays a larger role than genetics would expect virtual twins to be very similar, since they are raised in the same environment. Studies suggest that they have fewer similarities than true genetic siblings, however, which suggests that genetics plays a heavy role in human development.
Although it is difficult to quantify the phenomenon, researchers say that virtual twins are an increasingly common result of Americans having children later in life, facing fertility issues and forming families through a patchwork of channels: adoptions, surrogate births, natural pregnancies and fertility treatments, which can lead to multiple births. Many parents, having struggled with infertility for years, pursue several avenues at once to increase their chances of having at least one child. If two adoptions or an adoption and a pregnancy work out at about the same time, the stage is set for virtual twins.
Peggi Ignagni of Oberlin, Ohio, had been trying to become pregnant for nine years when she and her husband, Tony, applied for a foster-care license, hoping they could adopt an infant after taking him into foster care. They got Nickholas when he was 3 days old but decided to proceed with in vitro fertilization, fearing that they might not be able to keep the boy. The fertility treatment worked, and Ignagni became pregnant with triplets who were born eight months after Nickholas. She and her husband, who owns a medical device company, now have four 6-year-olds. “At least they were all potty-trained within the same week,” she said.
In another case, Sara was adopted at birth by Deborah and Dave Curry, who are both retired from the Navy. The couple had tried having children for almost four years before they arranged for a private adoption. Deborah Curry became pregnant with Julie a month after Sara’s birth mother chose them as parents. When they left the hospital with Sara, they were stopped by a security guard and asked to explain why they were leaving with a newborn when Deborah Curry was obviously still pregnant.
The Age of Enlightenment was a period in early modern history when western societies, led by their intellectuals, made a marked shift from religion-based authority to scientific reason. Prior to this period, the Church and the State were intricately interlinked, and the Enlightenment sought to sever states and politics from religion through the application of rational analysis based on scientific observation and facts. This movement traces its origins to seventeenth- and eighteenth-century Europe. Similar undercurrents of scientific expression were seen in the New World as well, most notably from intellectuals such as Thomas Paine and other proponents of American independence. The rest of the essay will delve into the wider implications of the Enlightenment and try to capture its significance for the academia of today.
The Enlightenment has had a profound impact on the cultural evolution of Western Europe in particular and the whole of the continent in general. A landmark piece of scholarship that turned the tables in favor of scientific reasoning is Newton’s analysis and description of natural physical phenomena. The immediate impact was discernible in written literature of the day, due to the scope of this medium of art (Brians, Paul, 1998). On the other hand, it took longer for ideas of the Enlightenment to penetrate into art forms such as music and painting due to the emphasis on traditionally acquired technique in these art forms. While it is difficult to categorize the newly evolving artistic manifestations of the time, a few broad trends could be noted. For example,
“At the opening of the century, baroque forms were still popular, as they would be at the end. They were partially supplanted, however, by a general lightening in the rococo motifs of the early 1700s. This was followed, after the middle of the century, by the formalism and balance of neoclassicism, with its resurrection of Greek and Roman models. Although the end of the century saw a slight romantic turn, the era’s characteristic accent on reason found its best expression in neoclassicism.” (Hackett, 1992)
As mentioned before, this rise of neoclassical artistic expression found its highest glory in the literature of the day. All forms of literature, ranging from prose and narrative verse to poetry and plays, were infused with newly discovered scientific truths and newly evolving systems of natural philosophy. Such luminaries as Alexander Pope, Phillis Wheatley, Voltaire and Jonathan Swift, among others, were at the forefront of this paradigm change in socio-cultural expression. A special mention has to be made of the role of the novel in this epoch-making age. The broad scope of intellectual discourse offered by the novel was utilized very cleverly and ingeniously by such writers as Daniel Defoe, Samuel Richardson, Henry Fielding, Aphra Behn and Fanny Burney (Paul Brians, 1998).
Given the revolutionary change in the cultural landscape that the Enlightenment effected, it is easy to see its relevance to the academia of today. In many ways, academia is burdened with the legacy of the Enlightenment, in that modern societies have come to expect radical theories and systems of thought to emerge from the confines of a university. Also, most universities are well equipped with comprehensive libraries and other resources. These factors make the modern academic environment the most suitable place for the continuation of the legacy of the Enlightenment (Hackett, 1992).
The area where the ideas of the Enlightenment made radical changes was in the realm of political thought and systems of civil administration. It has to be remembered that most geographical regions of the day were part of one kingdom or the other and totalitarianism in the form of monarchy was the accepted social order. The transformation from this oppressive political system to modern forms of democracy, as evident today, has to be attributed to the Enlightenment. Some of the most prominent thinkers who helped shape this new political consciousness were Diderot, John Locke, Thomas Hobbes, Rousseau, Adam Smith, etc. The following passage gives a concise account of Rousseau’s contribution to modern political thought:
“Physical, intellectual and economic equality are beyond human remedy. The state, according to Rousseau can interfere with property only if legal and moral equality is jeopardized. In his book Emile he explains that the young must learn the compulsion of things but be protected from the tyranny of men. All must obey the general will as a law of nature, not as an alien command but because of necessity. This is only possible if society makes the laws which it obeys. Hence a radical political and social revolution is necessary. He demanded man’s mastery over nature and projected a moral rationalism.” (Gerhard Rempel, Age of Enlightenment)
The most detailed study yet of the Greenland ice sheet illustrates the complex process that is causing billions of tonnes of ice to melt every year.
LONDON, 27 December, 2014 − Greenland’s ice sheet shrank by an average of 243 billion tonnes a year between 2003 and 2009 – a rate of melting that is enough to raise the world’s sea levels by 0.68 mm per year.
In what is claimed as the first detailed study, geologist Beata Csatho, of the University of Buffalo in the US, and colleagues report in the Proceedings of the National Academy of Sciences that they used satellite and aerial data to reconstruct changes in the ice sheet at 100,000 places, and to confirm that the process of losing 277 cubic kilometres of ice a year is more complex than anyone had predicted.
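As a rough sanity check, the article's mass-loss and sea-level figures can be reconciled with a few lines of arithmetic. The ocean area and ice density constants below are standard approximations, not values taken from the study:

```python
# Back-of-the-envelope check of the Greenland figures.
# Assumed constants (not from the article):
MASS_LOSS_T = 243e9        # tonnes of ice lost per year (from the study)
OCEAN_AREA_M2 = 3.61e14    # approximate global ocean surface area, m^2
ICE_DENSITY = 917.0        # kg per cubic metre of glacial ice

mass_kg = MASS_LOSS_T * 1000.0          # tonnes -> kg
water_volume_m3 = mass_kg / 1000.0      # meltwater volume (rho_water = 1000 kg/m^3)
rise_mm = water_volume_m3 / OCEAN_AREA_M2 * 1000.0

ice_volume_km3 = mass_kg / ICE_DENSITY / 1e9   # volume of the ice before melting

print(f"sea-level rise: {rise_mm:.2f} mm/yr")       # ~0.67 mm/yr, close to the quoted 0.68
print(f"ice volume lost: {ice_volume_km3:.0f} km^3/yr")  # ~265 km^3/yr, near the quoted 277
```

The small gaps between these estimates and the quoted 0.68 mm and 277 km³ come down to the exact ocean area and ice density the authors assumed.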
The Greenland ice sheet is the second biggest body of ice on Earth − second only to Antarctica − and its role in the machinery of the northern hemisphere climate is profound.
It has been closely studied for decades, but such are the conditions in the high Arctic that researchers have tended to make careful measurements of ice melt and glacier calving in fixed locations – in particular, at four glaciers − and then try to estimate what that might mean for the island as a whole.
“The great importance of our data is that, for the first time, we have a comprehensive picture of how all of Greenland’s glaciers have changed over the past decade,” Dr Csatho said.
The study looked at readings from NASA’s Ice, Cloud and land Elevation Satellite (ICESat), and from aerial surveys of 242 glaciers wider than 1.5 km at their outlets, to get a more complete picture of melting, loss and – in some cases – thickening of the ice sheet as a whole.
(Choose the best answer. 3 points each. Answers at the end.)
1. In 1866 the U.S. merchant ship General Sherman defied the laws of Korea (then pursuing a policy of strict isolation) by entering Korean waters, and sailing up the Taedong River towards Pyongyang to demand trade. What happened to the ship?
a. It was attacked by local people and soldiers, burned, and sunk, with the loss of its entire crew.
b. Its crew was politely told that since Korea was a satrapy of China all negotiations concerning commerce had to take place via Beijing.
c. It was welcomed, and Korean officials began discussing with the Americans a Treaty of Amity and Commerce.
2. In 1882 the Korean government signed a treaty with the U.S. It is usually considered an “unequal treaty” like those signed with China and Japan. Its provisions included:
a. extraterritoriality (exempting U.S. citizens from Korean law and courts); U.S. rights to export opium to Korea; and the establishment of a U.S. legation
b. leasing of land for a legation; a most favored nation clause (assuring that no other foreign country would receive better treaty conditions than the U.S.); and the Korean renunciation of Chinese suzerainty
c. extraterritoriality; relatively low tariffs on imported U.S. goods; and a most favored nation clause
3. After the Russo-Japanese War of 1904-5, Japan acquired control over Korea, annexing it formally in 1910. In 1905 Japanese Prime Minister Katsura Tarō met secretly with U.S. Secretary of War William Howard Taft, producing the Taft-Katsura Agreement in which the U.S. recognized Japan’s interests in Korea. What did the U.S. receive in return?
a. Japanese agreement to limit emigration to the U.S.
b. Japanese recognition of U.S. colonial rule over the Philippines.
c. Japan’s renunciation of all claims to the Hawai’ian Islands.
4. At the Yalta Conference in February 1945, U.S. President Roosevelt and Soviet leader Joseph Stalin discussed the postwar future of Korea. Stalin advocated independence as soon as possible. Roosevelt
a. agreed to immediate independence
b. advocated a trusteeship of 20-30 years, citing the positive example of U.S. rule in the Philippines
c. suggested Korea remain a part of the Japanese Empire, to be occupied by Allied forces
5. In accordance with a wartime agreement that the USSR would enter the war with Japan following the German surrender, Soviet forces invaded Korea in August, advancing to the 38th parallel by August 10. They could easily have occupied the whole peninsula. What did they do?
a. They accepted the Japanese surrender, provided arms to local communist forces led by Kim Il-sung, and withdrew within the year.
b. They consulted with their American allies, who requested that they stop their advance at the 38th parallel, so that U.S. forces could in the next month occupy the rest of Korea. The Soviets agreed to the U.S. proposal.
c. They proclaimed the Korean Soviet Republic and made plans for permanent incorporation into the USSR.
6. In August 1945 defeated Japanese forces formally turned over authority in Korea to the broad-based Committee for the Preparation of Korean Independence, led by Lyuh Woon-hyung, which in September proclaimed the Korean People’s Republic (KPR). When U.S. forces under Gen. John Reed Hodge arrived in Inchon to accept the Japanese surrender, they
a. ordered all Japanese officials to remain in their posts, refused to recognize Lyuh as national leader, and soon banned all public reference to the KPR
b. recognized Lyuh as the legitimate head of state
c. negotiated with Lyuh to facilitate swift attainment of independence of a united Korea
7. As of 1945, most Koreans associated the majority of Korean big landowners and businessmen with the Japanese colonial regime. How did U.S. occupation forces deal with this stratum?
a. They subjected it to a thoroughgoing purge.
b. They relied upon it for support.
c. They remained neutral as the numerous “people’s committees” loyal to the KPR organized against it.
8. In August 1948 the U.S.-occupied zone of Korea became the Republic of Korea. The next month, the KPR operating in the north became the Democratic People’s Republic of Korea (North Korea). Around this time there were many revolts against the U.S.-backed authorities in the south led by supporters of the original KPR. Where was the biggest one?
a. on Cheju Island, off the south coast of South Korea, where there was minimal Soviet or North Korean influence
b. along the North Korean border, organized by communist operatives
c. in Seoul, led by communist agitators
9. In June 1950 North Korean forces attacked the South and by September controlled all but the southeastern region around Pusan. What was the reaction of South Koreans?
a. stiff resistance, in support of the popular U.S.-backed Syngman Rhee regime
b. little resistance, and initially much cooperation
c. general apathy
10. The United Nations Security Council approved a U.S. proposal for war on North Korea. Why, when both the USSR and China were on the UNSC, was the proposal passed?
a. At the time, both China and the USSR continued to maintain their World War II-era alliances with the U.S.
b. UN rules did not require UNSC unanimity but only a majority vote to commit the body to war.
c. China’s seat was held by the pro-U.S. Guomindang regime headquartered on Taiwan, and the Soviet delegate was absent when the vote was taken.
11. How many people, military plus civilians, died in the Korean War?
a. 500,000-1 million
b. 1 million-2 million
c. about 4 million
12. How many American soldiers died (officially) in the Korean War?
13. Between 1954 and 1960, how much of South Korea’s government budget came from foreign, especially U.S., aid?
a. about half
b. about one-third
14. Park Chung-hee, who had served in the Japanese army during the Second World War, participated in a coup in 1961, and then became president in 1963. His rule, to 1979, was characterized by
a. economic growth and political liberalization
b. a “sunshine policy” towards North Korea
c. economic growth, martial law, censorship, political repression, and torture of political prisoners
15. The KCIA abducted dissident Kim Dae-jung from a Tokyo hotel in August 1973, intending to drown him. Following a conversation between U.S. Ambassador to Seoul Philip Habib and Park Chung-hee, the U.S. CIA sent a helicopter to the Korean spy ship on which he was confined. The CIA
a. demanded his immediate release
b. demanded that he not be killed
c. requested an explanation
16. Park’s political career ended in 1979 when
a. the head of the Korean Central Intelligence Agency (KCIA) assassinated him
b. student protests toppled him
c. his constitutional term as president expired
17. In May 1980, after the proclamation of martial law, there was a massive uprising in the South Korean city of Kwangju involving tens of thousands. By official estimate, about 200 civilian pro-democracy protestors were killed by military forces; Kwangju residents claim about 2000. Which of the following best describes U.S. behavior during this incident?
a. The Carter administration gave prior approval to South Korean contingency plans to use military units against the protesters.
b. The U.S. cautioned against violence against peaceful demonstrators.
c. The U.S. remained scrupulously neutral during the event.
18. Which of the following South Korean presidents have been convicted of the crimes of corruption, participation in the 1979 coup, and involvement in the Kwangju Massacre?
a. Roh Tae-woo (1987-93)
b. Chun Doo-hwan (1980-87)
c. both of the above
19. Early in his presidency, Jimmy Carter announced plans to withdraw all U.S. troops from South Korea. What happened to this plan?
a. After meeting Park Chung-hee in Seoul in June 1979, Carter announced that U.S. troops would remain, and that the U.S. would expand its security relationship with South Korea.
b. It was abandoned when Carter left office.
c. It was implemented, but troops were returned during Reagan’s presidency.
20. After meeting with Chun Doo-hwan in 1985, President Ronald Reagan
a. praised Chun for his government’s “considerable progress” in “promot[ing] freedom and democracy”
b. mistakenly referred to him publicly as “President Marcos”
c. doubled U.S. aid to South Korea
21. Like many nations, the DPRK has sought in the past to acquire nuclear weapons. It may have produced two as of 1992, during the first Bush administration. The Clinton administration negotiated a deal in 1994 whereby Pyongyang suspended its nuclear program in exchange for oil and the foreign-sponsored construction of two light-water reactors. What happened to the agreement?
a. It was scrupulously followed by both sides until recently.
b. Construction of the reactors did not take place; the Bush administration rejected the Clinton policy and South Korean president Kim Dae-jung’s “sunshine policy” towards the North; and at some point North Korea resumed its nuclear weapons program.
c. Bush explicitly repudiated the agreement in his 2002 “State of the Union” speech.
22. In 1997 Kim Dae-jung was elected South Korean president and initiated the “sunshine policy” of rapprochement with North Korea. This led to his meeting in Pyongyang in June 2000 with North Korean leader Kim Jong-il, in which both leaders agreed to seek reunification without foreign interference. When Kim met with President Bush the following year in Washington, Bush
a. declined to support the “sunshine policy” and demanded that North Korea provide more verification of the suspension of its missile program, and withdraw conventional artillery and armor from the border with South Korea
b. enthusiastically supported Kim’s policy and the 1994 Agreed Framework
c. offered proof to journalists that North Korea was not complying with the 1994 agreement
23. South Korea has been counted among the “Four Tigers” because of its strong economic growth since the 1970s. But in 1997 the won lost half its value and the economy collapsed. Unemployment rose from 2 to 7 percent. Thereafter, the economy has rebounded due to:
a. an IMF agreement raising the percentage of a Korean company’s stock that could be owned by foreigners from 26 to 50 percent, insuring greater foreign control over the economy
b. a $ 55 billion loan package
c. both of the above
24. In his State of the Union address (January 29, 2002) President Bush referred to North Korea as
a. a “rogue state”
b. part of an “axis of evil”
c. an “evil empire”
25. What percentage of South Koreans polled after Bush’s speech disagreed with his characterization of North Korea?
26. Which, among the following, has most benefited from the acquisition of North Korean missile technology?
27. Currently deployed North Korean missiles might possibly reach what part of U.S. territory?
c. The Aleutian Islands
28. How many U.S. troops are currently stationed in South Korea?
a. about 16,000
b. about 22,000
c. about 37,000
29. How many foreign troops are stationed in North Korea?
30. According to official South Korean government figures, how many U.S. soldiers in South Korea between 1967 and 1998 committed “overt criminal offenses”?
a. over 40,000
b. over 20,000
c. over 10,000
31. How many “registered” prostitutes service U.S. GIs in South Korea?
a. about 12,000
b. about 18,000
c. none; there is no registration process
32. U.S. arms sales to South Korea during the Clinton administration were in excess of
a. $ 10 billion
b. $ 6 billion
c. $ 2 billion
33. There is some evidence that North Korea may possess one or two nuclear weapons. What nation is known to have deployed about 100 tactical nuclear weapons on the Korean peninsula between 1958 and 1991?
a. South Korea
Bonus. Current South Korean public opinion polls indicate that the foreign country people most fear is
a. the U.S.
b. North Korea
Answers: 1 (a); 2 (c); 3 (b); 4 (b); 5 (b); 6 (a); 7 (b); 8 (a); 9 (b); 10 (c); 11 (c); 12 (b); 13 (a): 14 (c); 15 (b); 16 (a); 17 (a); 18 (c); 19 (a); 20 (a); 21 (b); 22 (a); 23 (a); 24 (b); 25 (a); 26 (c); 27 (c); 28 (c); 29 (a); 30 (a); 31 (b); 32 (a); 33 (c); bonus (a)
GARY LEUPP is an associate professor, Department of History, Tufts University and coordinator, Asian Studies Program. He can be reached at: email@example.com
22.3 Explain how the relationship between Earth and the sun is critical to the study of geography.
Our Solar System
The sun is at the center of our solar system. It exerts a strong force of gravity that keeps Earth and all the other objects in the solar system revolving around it.
The Planets: Neighbors in Space
The largest objects that orbit the sun are called planets. At least nine planets orbit our sun. Some of the planets have one or more moons. Mercury, Venus, Earth, and Mars are terrestrial planets because they have solid rocky crusts.
Farther from the sun are the gas giant planets: Jupiter, Saturn, Uranus, and Neptune. They are much more gaseous and less dense than the terrestrial planets. Pluto, the exception among the planets, is a ball of ice and rock.
Asteroids, Comets, and Meteoroids
2.1 Explain internal and external physical forces that impact Earth.
Smaller objects in the solar system include asteroids, comets, and meteoroids. Asteroids are small, irregularly shaped, planet-like objects. Comets are made of icy dust particles and frozen gases. Meteoroids are pieces of space debris: chunks of rock and iron.
Getting to Know Earth
Earth is the largest of the inner planets. Its diameter at the Equator is larger than the diameter from pole to pole.
Water, Land, and Air
The surface of the earth is about 30 percent land and about 70 percent water. The atmosphere is about 78 percent nitrogen, about 21 percent oxygen, and about 1 percent other gases, such as argon.
Landforms
The earth’s landforms (physical features of particular shape and elevation) include continents, mountains, hills, plateaus, valleys, and plains. The part of a continent that extends underwater is called a continental shelf.
Earth’s Heights and Depths
The highest point on Earth is the summit of Mount Everest at 29,035 feet (8,852 m) above sea level. Earth’s lowest point of dry land is on the shore of the Dead Sea at 1,349 feet (411 m) below sea level. The deepest known level of the ocean floor is the Mariana Trench at 35,827 feet (10,923 m) below sea level.
Language education is the teaching and learning of a language or languages.
There are several methods in wide use:
- Immersive places students in a situation where they must use a foreign language, whether or not they know it. This creates fluency, but not precision, accuracy of usage or beauty.
- Tutoring by a native speaker is one of the best all-around methods. However it requires a motivated native tutor, which can be a rare, expensive commodity.
- Directed practice has students repeat phrases. This method is used in U.S. diplomatic courses. It can quickly provide a "phrasebook" knowledge of the language. Within these limits, the students' usage is accurate and precise. However, the student's choice of what to say is not flexible.
- Absorptive has students listen to or view video tapes of language models acting in situations. Most instructors now acknowledge that this method is ineffective by itself.
- Grammatic instructs students in grammar, and provides vocabulary to memorize. Most instructors now acknowledge that this method is ineffective by itself.
- Eclectic methods combine the above into a single course of study. These seem the best; at least, an eclectic method is recommended by Barry Farber, a major polyglot (25 languages) who formed the famous New York Language Club.
Mr. Farber advocates that a student follow several paths at once. In brief, the method he recommends is to: 1) study the first few chapters of a grammatical textbook, then 2) begin working through a real text. He also advocates use of 3) a phrasebook, 4) audio aids to pronunciation, and 5) a written transcript for role-playing. Farber says that after years of study, the best way to learn vocabulary is to make up memorable stories about each word.
- EFL - English as a foreign language: the study or learning of English in an environment where English is not already the predominant language, such as in a non-English-speaking country, by someone whose first language is not English.
- ESL - English as a second language: the study or learning of English in an environment where English is already the predominant language, such as in an English-speaking country, by someone whose first language is not English.
- TEFL - Teaching English as a Foreign Language: the teaching of English in an environment where English is not already the predominant language, such as in a non-English-speaking country, to someone whose first language is not English.
- TESL - Teaching English as a Second Language: the teaching of English in an environment where English is the predominant language, to someone whose first language is not English.
- TELL - Technology-enhanced language learning
- TESOL - Teaching English to Speakers of Other Languages (or) Teaching English as a Second or Other Language: this acronym might be a substitute for TESL more than for TEFL. It is sometimes preferred over TESL because English can be a third, fourth or fifth, etc. language to a student.
- ELT - English Language teaching
- TOEFL - Test of English as a Foreign Language
- TOEIC - Test of English for International Communication
- TPR - Total Physical Response
- English Forum is a popular web portal with extensive resources for students and teachers of English (ESL/EFL), including interactive exercises, message boards, an ELT book catalog, a good school guide, a web directory, world news, and links.
- http://www.optimnem.co.uk provides online language courses which teach spatial learning strategies for English, French and German. Tutor-supported.
- The Two Hands Approach to the English Language is a new approach to the teaching of English that draws on Oriental philosophy, general systems theory, and a student-friendly mix of expertise from other recent or distantly past methodologies.
- Serving Linguistically and Culturally Diverse Students: Strategies for the School Librarian covers ways school librarians can help with ESL students.
- 1-language.com - Learn English Online: innovative ESL site with a host of free resources for teachers and learners of English as a Second Language.
The art and architecture of ancient Rome, the foundation of which is traditionally given as 753 BC. From then until 509 BC it was ruled by Latin and Etruscan kings, but a crisis of monarchy led to the establishment of a Republic, which lasted from 509 to 27 BC. After the successful Punic Wars against Carthage in the 3rd and 2nd centuries BC, Rome became a world power and its culture was rapidly hellenized after the conquest of Greece in 146 BC. Civil wars caused the Republic to collapse and it was replaced by the Empire, distinguished by the magnificent and ambitious building programmes instigated by many of its emperors. The spread of Christianity undermined the structure of the Empire, especially after it was granted toleration under the Edict of Milan in AD 313, during the reign of Constantine the Great. With the foundation of Constantinople in AD 324 the Empire was effectively split into eastern and western halves, with Rome becoming the centre of the Roman church and Constantinople the capital of the Empire and centre of Byzantine art. |
From May 1806 until June 1807, Baron Alexander von Humboldt and a colleague observed the local magnetic declination in Berlin every half hour from midnight to morning, using a microscope to identify which direction the magnetic needle was pointing. On December 21, 1806, strong magnetic disturbances were recorded, and Humboldt noted that they were accompanied by strong auroral lights. By morning the aurora was gone, and so were the magnetic disturbances. Humboldt was left, though, with his discovery of the geomagnetic storm.
A geomagnetic storm is just what Humboldt recorded: a marked temporary disturbance of the Earth's magnetic field. It was initially thought that geomagnetic storms were produced by the influx of a greater than normal amount of solar particles released from the Sun during a flare or CME (coronal mass ejection). Solar flares and CMEs are related to geomagnetic storms, but not because of an increased influx of particles into the Earth's magnetosphere.
The solar wind carries with it the magnetic field of the Sun. This magnetic field, the IMF (interplanetary magnetic field), has a particular orientation: southward or northward. If the IMF of the solar wind is southward and the solar wind flows past the Earth for long durations of time, or in shorter more energetic bursts (flares/CMEs), geomagnetic storms can be expected. Geomagnetic storms are complex multi-faceted phenomena that originate at the Sun and occur in the solar wind, the magnetosphere, the ionosphere and the thermosphere. Basically, the southward IMF causes magnetic reconnection at the dayside magnetopause, rapidly injecting magnetic and particle energy into the Earth's magnetosphere and modifying the large-scale ring current systems.
Only in the last 30 years have scientists truly begun to understand the coupled Sun-Earth system. Much of the new development and many of the improved theories are due to space-based observatories such as Yohkoh and Ulysses. Considerable uncertainties still exist regarding geomagnetic storms. Understanding such storms is extremely important because of the effects they have on life on Earth: geomagnetic storms can affect radio communication, satellite drag, auroral activity and even the safety of astronauts in Earth orbit. |
Gender is a social and legal status that classifies people as girls and boys, women and men. However, gender is often misunderstood as being absolutely related to the individual's sexual reproductive organs (i.e., penis and vagina). The terms "male" and "female" are sex categories, while "masculine" and "feminine" are gender categories.¹
Sex describes one’s biological aspects, while gender describes a socially constructed role that is either masculine or feminine. These constructions often intersect with sexuality, as certain character traits are attributed to certain sexualities (i.e., a man displaying typically female behaviours may be stereotyped as homosexual).
Gender as a social construction
Common socially constructed understandings of gender include:
- Masculinity: a set of qualities, characteristics, or roles considered typical of a man. Some traits usually considered masculine include courage, aggressiveness, and self-confidence.
- Femininity: a set of qualities, characteristics, or roles considered typical of a woman. Some traits usually considered feminine include gentleness, empathy, and sensitivity.
Both men and women can have both masculine and feminine traits. An example may be a male child who likes to play outdoors (i.e., a masculine trait) but also likes to wear dresses (i.e., a feminine trait). Although the boy likes to wear dresses, he is still considered a boy. Gender, therefore, is not absolute. This can often be a cause of controversy for those who identify as transgender, as these individuals display behaviours typically attributed to the sex they were not born with.
Different cultures view gender in different ways. As an example, some Muslim cultures may view gender roles as strict, always following the same rigid pattern of masculine traits held by men and feminine traits held by women. Conversely, a Scandinavian culture may view gender roles as overlapping, with feminine and masculine traits held by both men and women.
Things considered masculine and feminine change over time. A man who took great care of his outward appearance, known as a dandy, was considered masculine in Victorian times. The same trait today, sometimes known as being metrosexual, has adopted a feminine definition. Since the definitions of these traits change over time, they are socially constructed.
"Two-spirit" is a relatively new umbrella term used by indigenous North Americans for gender-variant individuals. Intersex and androgynous people (i.e., feminine males and masculine females) have historically been held in high respect by Native Americans. In the past, feminine males were sometimes referred to as "berdache" (adapted from the Persian word "bardaj", meaning intimate male friend). Because androgynous males were commonly married to a masculine man, or had sex with men, and masculine females had feminine women as wives, the term berdache carried a homosexual connotation.²
Whereas a transgender person in Western society is typically misunderstood as abnormal, Native American culture understands this as a blessing. Native American culture tends to see a person's basic character as a reflection of their spirit, meaning two-spirited individuals are doubly blessed with having both the spirit of a man and woman.²
2. Walter L. Williams. The 'Two-Spirit' People of Indigenous North Americans. Retrieved on May 1, 2014, from http://www.firstpeople.us/articles/the-two-spirit-people-of-indigenous-north-americans.html |
A chromophore is the part of a molecule responsible for its color. The color that is seen by our eyes is the one not absorbed within a certain wavelength spectrum of visible light. The chromophore is a region in the molecule where the energy difference between two separate molecular orbitals falls within the range of the visible spectrum. Visible light that hits the chromophore can thus be absorbed by exciting an electron from its ground state into an excited state.
In the conjugated chromophores, the electrons jump between energy levels that are extended pi orbitals, created by a series of alternating single and double bonds, often in aromatic systems. Common examples include retinal (used in the eye to detect light), various food colorings, fabric dyes (azo compounds), pH indicators, lycopene, β-carotene, and anthocyanins. Various factors in a chromophore's structure go into determining at what wavelength region in a spectrum the chromophore will absorb. Lengthening or extending a conjugated system with more unsaturated (multiple) bonds in a molecule will tend to shift absorption to longer wavelengths. Woodward-Fieser rules can be used to approximate ultraviolet-visible maximum absorption wavelength in organic compounds with conjugated pi-bond systems.
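As a rough illustration of how additive rules like Woodward-Fieser work, here is a minimal Python sketch for conjugated dienes. The base values and increments are the commonly tabulated ones, but the function name and structure are just for illustration; real predictions should come from the full tables.

```python
# Minimal sketch of the Woodward-Fieser rules for conjugated dienes.
# Base values and increments are the commonly tabulated ones (in nm);
# treat the result as a rough estimate only.

BASE_NM = {
    "acyclic/heteroannular": 214,  # parent acyclic or heteroannular diene
    "homoannular": 253,            # both double bonds in the same ring
}

def estimate_lambda_max(parent, alkyl_groups=0, exocyclic_cc=0,
                        extended_cc=0):
    """Estimate the UV-vis absorption maximum (nm) of a conjugated diene."""
    nm = BASE_NM[parent]
    nm += 5 * alkyl_groups   # +5 nm per alkyl substituent or ring residue
    nm += 5 * exocyclic_cc   # +5 nm per exocyclic C=C
    nm += 30 * extended_cc   # +30 nm per double bond extending conjugation
    return nm

# Example: an acyclic diene with two alkyl substituents
print(estimate_lambda_max("acyclic/heteroannular", alkyl_groups=2))  # 224
```

Note how each unit of `extended_cc` adds 30 nm: lengthening the conjugated system shifts the estimated absorption to longer wavelengths, matching the trend described above.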
Some of these are metal complex chromophores, which contain a metal in a coordination complex with ligands. Examples are chlorophyll, which is used by plants for photosynthesis and hemoglobin, the oxygen transporter in the blood of vertebrate animals. In these two examples, a metal is complexed at the center of a tetrapyrrole macrocycle ring: the metal being iron in the heme group (iron in a porphyrin ring) of hemoglobin, or magnesium complexed in a chlorin-type ring in the case of chlorophyll. The highly conjugated pi-bonding system of the macrocycle ring absorbs visible light. The nature of the central metal can also influence the absorption spectrum of the metal-macrocycle complex or properties such as excited state lifetime. The tetrapyrrole moiety in organic compounds which is not macrocyclic but still has a conjugated pi-bond system still acts as a chromophore. Examples of such compounds include bilirubin and urobilin, which exhibit a yellow color.
An auxochrome is a functional group of atoms attached to the chromophore which modifies the ability of the chromophore to absorb light, altering the wavelength or intensity of the absorption.
Halochromism occurs when a substance changes color as the pH changes. This is a property of pH indicators, whose molecular structure changes upon certain changes in the surrounding pH. This change in structure affects a chromophore in the pH indicator molecule. For example, phenolphthalein is a pH indicator whose structure changes as pH changes as shown in the following table:
|Conditions|acidic or near-neutral|basic|
|---|---|---|
|Color name|colorless|pink to fuchsia|
In a pH range of about 0-8, the molecule has three aromatic rings all bonded to a tetrahedral sp3 hybridized carbon atom in the middle which does not make the π-bonding in the aromatic rings conjugate. Because of their limited extent, the aromatic rings only absorb light in the ultraviolet region, and so the compound appears colorless in the 0-8 pH range. However, as the pH increases beyond 8.2, that central carbon becomes part of a double bond becoming sp2 hybridized and leaving a p orbital to overlap with the π-bonding in the rings. This makes the three rings conjugate together to form an extended chromophore absorbing longer wavelength visible light to show a fuchsia color. At pH ranges outside 0-12, other molecular structure changes result in other color changes; see Phenolphthalein for details.
|
Notation, the way of writing down music, has developed over many years.
- Many types of early music, just like stories, were passed down
the generations without being notated, hence they tended to evolve
over time. Notation is required for consistency and precision.
- Notation clearly began and developed in parallel with music theory,
because you cannot record which notes are being used if you have no
names for the notes, or no way of identifying the relationships
between the notes.
- Hence, as the concepts of scales and
keys began to take shape,
so notes started to be named.
The Greeks and Romans both had non-graphical notations which used letters
of their alphabets to symbolise notes. From this came our use of the letters
A to G to represent notes which is still common in many countries.
- The letter names are sometimes called the "Boethian notation" after
Boethius, a Roman writer and statesman born in the late 5th century.
He was in the service of Emperor Theodoric, was accused of treason and
executed in the year 524 A.D. He was the first to document the use of
letters as names for notes.
- An alternative method of note naming was introduced about 1000 A.D.
by the monk Guido d'Arezzo. This has survived up to today as
tonic sol fa. The most important aspect
of this development however, is that it used six of the notes which
we use in the major scale today.
- France, Italy and other associated countries now tend to use the
tonic sol fa names (based on C as Doh)
as names of notes, rather than alphabetical letters, but this change
has (I believe) only happened in the last two hundred years.
Early systems of notation which used letters of the alphabet were
the origin of some of the symbols used nowadays
- In early times, B flat was a different note, and a rounded, lower-case "b"
was used to represent it. From this comes our use of ♭
as a flat sign.
- A squarer, gothic, lower-case "b" was used for B natural, and from this
comes our natural sign: ♮
- Our sharp sign, ♯, comes from this gothic "b" with a line through it.
Modern notation is much more precise than older notation.
- When I was a boy in an Anglican choir,
we used a hymn book which had some hymn tunes in old plainsong notation.
(I still have a copy in the cupboard in fact - the English Hymnal of 1933).
- This plainsong notation uses a four line stave instead of five,
no time signatures or key signatures, and has some diamond-shaped notes.
- This notation, compared to modern notation, is quite imprecise in its
specification of how the music should be performed.
- However, this was probably good enough for the style of music it was
written for.
- This is also true of even older notations: they may seem sparse to
us, but they were appropriate and sufficient for the type of music
of their time.
Graphic forms of notation are first known from the seventh century
- The earliest forms of graphical notation were probably just marks
indicating approximate pitch to remind readers of a tune they had already
learned. These would have been used by strolling minstrels and monks in monasteries.
- This evolved in church music into plainsong
- Plainsong was at first very imprecise, without clefs or staves
- The modern system for notes was developed initially in the fourteenth
century.
The features of modern notation are there in order to notate music
that we know today.
- Over the years, many experiments have added new signs, new methods and
new conventions.
- Those that proved useful for the music of the day have stuck.
- Those that were complicated, cumbersome, or not useful have mostly
been abandoned.
- Unfortunately, some old music uses obsolete signs, and in some cases
it is not even clear what they mean
Modern notation developed in Europe and spread to the rest of the world.
- This makes music notation one of the most widely recognised
international languages of all time |
Building A Flashlight - Performance Assessment Day
Lesson 7 of 7
Objective: Students will build a flashlight from previously prepared components.
RAP - Review and Preview
I call students to the gathering area. We go over the components we prepared yesterday. I tell students that today they are going to write a “How To” paragraph on how they think the flashlight will go together. When they have completed the “How To” paragraph, they will check it with me and then will begin to build from their instructions. I remind them to look at their components and remember to use all parts in their instructions.
Students sit and write their “How To” paragraphs independently (see example in the resources section, written on GoogleDocs by one of my students). They have all their components out in front of them and work through the steps of how to make their flashlight. Students check in with me and begin to build their flashlights when their instructions are complete.
I circulate among students and observe the number of prompts they need during their writing and during their building. This will help me assess the level of understanding students have.
Students bring their flashlights to me to show me how they work. They need to be able to turn them on and off, and explain how it works.
Students show each other how their flashlights work. Although the instructions describe a standard way to make the switch, I accept any functional switch. This seems to be the largest source of variance and creativity in this project. It shows how students problem-solve and provides interesting insight into the way some of my students' minds work. |
The Great Depression
The Great Depression was the most severe economic depression ever experienced by the Western world.
It was during this troubled time that the world’s most famous case of deflation also happened. The resulting aftermath was so bad that economic policy since has been chiefly designed to prevent deflation at all costs.
SETTING THE STAGE
The transition from wartime to peacetime created a bumpy economic road after World War I.
Growth was hard to come by in the first years after the war, and by 1920-21 the economy had fallen into a brief deflationary depression. Prices dropped 18%, and unemployment jumped to 11.7% in 1921.
However, the troubles wouldn’t last. During the “Roaring Twenties”, economic growth picked up as the new technologies like the automobile, household appliances, and other mass-produced products led to a vibrant consumer culture and growth in the economy.
More than half of the automobiles in the nation were sold on credit by the end of the 1920s. Consumer debt more than doubled during the decade.
While GDP growth during this period was extremely strong, the Roaring Twenties also had a dark side. Income inequality during this era was the highest in American history. By 1929, the income of the top 1% had increased by 75%. Income for the remaining 99% increased by only 9%.
The Roaring Twenties ended with a bang. On Black Thursday (Oct 24, 1929), the Dow Jones Industrial Average plunged 11% at the open in very heavy volume, precipitating the Wall Street crash of 1929 and the subsequent Great Depression of the 1930s.
THE CAUSE OF THE GREAT DEPRESSION
Economists continue to debate to this day on the cause of the Great Depression. Here’s perspectives from three different economic schools:
John Maynard Keynes saw the causes of the Great Depression hinge upon a lack of aggregate demand. This later became the subject of his most influential work, The General Theory of Employment, Interest, and Money, which was published in 1936.
Keynes argued that the solution was to stimulate the economy through some combination of two approaches:
1. A reduction in interest rates (monetary policy), and
2. Government investment in infrastructure (fiscal policy).
“The difficulty lies not so much in developing new ideas as in escaping from old ones.” – John Maynard Keynes
Monetarists such as Milton Friedman viewed the cause of the Great Depression as a fall in the money supply.
Friedman and Schwartz argue that people wanted to hold more money than the Federal Reserve was supplying. As a result, people hoarded money by consuming less. This caused a contraction in employment and production since prices were not flexible enough to immediately fall.
“The Great Depression, like most other periods of severe unemployment, was produced by government mismanagement rather than by any inherent instability of the private economy.” ― Milton Friedman
Austrian economists argue that the Great Depression was the inevitable outcome of the monetary policies of the Federal Reserve during the 1920s.
In their opinion, the central bank’s policy was an “easy credit policy” which led to an unsustainable credit-driven boom.
“Any increase in the relative size of government in the economy, therefore, shifts the societal consumption-investment ratio in favor of consumption, and prolongs the depression.” – Murray Rothbard
THE GREAT DEPRESSION AND DEFLATION
Between 1929 and 1932, worldwide GDP fell by an estimated 15%.
Personal income, tax revenue, profits and prices plunged. International trade fell by more than 50%. Unemployment in the U.S. rose to 25% and in some countries rose as high as 33%.
Life and Times During the Great Depression
The economy of the United States was destroyed almost overnight.
More than 5,000 banks collapsed, and there were 12 million people out of work in America as factories, banks, and other shops closed.
Regardless of the causes, the combination of deflationary pressures and a collapsing economy created one of the most desperate and miserable eras of American history. The resulting aftermath was so bad, that almost every future Central Bank policy would be designed primarily to combat such deflation.
The Deflationary Spiral
After the stock crash, money and consumer confidence were hard to find. Instead of spending money on new things, people hoarded their cash.
Fewer dollars spent meant more drops in demand and prices, which led to defaults, bankruptcies, and layoffs.
As a result of this spiral, the prices for many food items in the U.S. fell by nearly 50% from their pre-WW1 levels.
The price of butter went from pre-crisis levels of $0.21 to $0.13 per pound in 1932. Wool had a drop from $0.24 to $0.10 per pound, and most other goods followed the same price trajectory.
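Those quoted prices imply percentage drops that can be checked with a couple of lines of arithmetic (the `pct_change` helper is just for illustration):

```python
# Percent change between the pre-crisis and 1932 prices quoted above.

def pct_change(old, new):
    return (new - old) / old * 100

print(round(pct_change(0.21, 0.13), 1))  # butter: -38.1 (%)
print(round(pct_change(0.24, 0.10), 1))  # wool: -58.3 (%)
```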
Here’s how “real value” is affected in a deflationary environment:
- Cash: real value increases; cash is king and gains in real value.
- Assets (stocks, real estate): real value decreases as prices fall.
- Debt: debtors owe more in real terms.
- Real interest rates (nominal rates minus inflation): can rise as inflation is negative, causing unwanted tightening.
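The real-interest-rate point can be made concrete with the simple Fisher approximation (real rate ≈ nominal rate minus inflation); the numbers below are illustrative, not historical:

```python
# Simple Fisher approximation: real rate = nominal rate - inflation.
# Under deflation, inflation is negative, so real rates rise.

def real_rate(nominal_pct, inflation_pct):
    return nominal_pct - inflation_pct

# A 2% nominal rate during 10% deflation (inflation = -10%):
print(real_rate(2.0, -10.0))  # 12.0 (a very high real borrowing cost)
```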
From Bad to Worse
The Great Depression lasted from 1929 to 1939, which was unprecedented in length for modern history.
To this day, economists disagree on why the Depression lasted so long. Here’s some of their explanations:
The New Deal was not enough
Looking back on The Great Depression, John Maynard Keynes believed that monetary policy could only go so far.
The Central Bank could not ultimately push banks to lend, and therefore demand had to be created through fiscal policy. Keynes advocated massive deficit spending to offset markets’ failure to recover.
Keynesians such as Paul Krugman believe that Franklin D. Roosevelt’s economic policies through The New Deal were too cautious.
“You can’t push on a string.” – Keynes
The New Deal made things worse
Some economists believe the New Deal had a negative net effect on the recovery.
The National Recovery Administration (NRA) is a primary subject of this criticism. Established in 1933, the goal of the NRA was to lift wages. To do this, it got industry leaders to meet and establish minimum prices and wages for workers.
Cole and Ohanian claim that this essentially created cartels that destroyed economic competition. They calculate that this, along with the aftermath of these policies, accounted for 60% of the weak recovery.
Lastly, one other charge leveled at Roosevelt by his critics is that the sprawling policies from the New Deal ultimately created uncertainty for business leaders, leading to less investment. This lengthened the recovery.
“[The] abandonment of [Roosevelt’s] policies coincided with the strong economic recovery of the 1940s.” – Cole and Ohanian
The Federal Reserve didn’t do enough
Milton Friedman claimed that the Federal Reserve made the wrong policy decision, which extended the length of the Depression.
Between 1929 and 1933, the monetary supply dipped 27%, which decreased aggregate demand and then prices. The Fed’s failure was in not realizing what was happening and not taking corrective action.
“The contraction is…a tragic testimonial to the importance of monetary forces…[D]ifferent and feasible actions by the monetary authorities could have prevented the decline in the stock of money… [This] would have reduced the contraction’s severity and almost as certainly its duration.” – Milton Friedman (and co-author Anna Schwartz)
The Federal Reserve shouldn’t have done anything
Austrian economists believe that the Fed and government both made policy choices that slowed the recovery.
For starters, most agree with Friedman that the Fed’s policy choices at the start of the Depression led to deflation.
They also point to the premature tightening that occurred in 1936 and 1937 as a policy failure. During those two years, the Fed not only hiked interest rates, but it also doubled bank-reserve requirements. These policies coincided with Roosevelt’s tax hikes, and a recession occurred within the Depression from 1937 to 1938.
Critics of these policies say that this delayed the recovery by years.
“I agree with Milton Friedman that once the Crash had occurred, the Federal Reserve System pursued a silly deflationary policy. I am not only against inflation but I am also against deflation. So, once again, a badly programmed monetary policy prolonged the depression.” – Friedrich Hayek
Personal Stories from The Great Depression
“One evening when we went down to check on the bank, there were hundreds of people out front yelling and crying and fighting and beating on the locked doors and windows. They had fires built in the street to keep warm and there were people milling around all over the downtown.” – Vane Scott, Ohio
“A friend I worked with said in the Depression he rode the rails and stopped to eat vegetables out of a garden. The owner said he would shoot him if he didn’t stop. My friend said ‘go ahead,’ as he was that hungry. ” – James Randolph, Ohio
“When neighbors couldn’t get a loan from the bank, they’d come to Dad. He sold farm machinery. He never put his money in a bank. He stored it in a strongbox in the fruit cellar, under the apples. He’d loan the neighbors what they needed and they paid him back when they could. If there was a month—especially the winter months—when they couldn’t pay, they’d slaughter a cow or a pig and give him a portion. In the summer it was vegetables: corn, peas, whatever they had growing.” – Gladys Hoffman, New York
“I thought the Depression was going to go on forever. For six or seven years, it didn’t look as though things were getting better. The people in Washington DC said they were, but ask the man on the road? He was hungry and his clothes were ragged and he didn’t have a job. He didn’t think things were picking up.” – Arvel “Sunshine” Pearson, Arkansas
After the 1937-38 Recession, the United States economy began to recover.
The focus of the American public would eventually shift away from the Great Depression, as events in Europe unfolded after Germany’s invasion of Poland in 1939. |
One species sings a special song to their eggs to warn their offspring if the weather outside is getting too hot.
On hearing the song, baby zebra finches hatch sooner and gain less weight - meaning they're smaller and better able to keep themselves cool.
Ducks and chickens also 'speak' to their eggs, meaning that when they hatch they already recognise their parents by their voices.
Ducks and chickens hatch well-developed, but scientists did not realise that birds which hatch at an earlier stage of development - like zebra finches - could also hear through their shells.
Mylene Mariette from Australia's Deakin University said: "We didn't realise that they were able to hear before hatching."
Scientists spotted the behaviour among captive zebra finches in Australia - while they often call to one another, they noticed that sometimes the birds sang a specific song when alone with their eggs.
This occurred when the weather was warmer, and it was found that it caused the birds to hatch sooner.
The benefit of this appears to be the fact that it's easier for smaller birds to cool themselves in the hot weather.
Sonia Kleindorfer at Flinders University in Adelaide told the New Scientist magazine: "This is a remarkable study.
"Zebra finches are literally right in front of us in cages around the world. What are we ignoring that is right before us?"
It is unclear whether wild zebra finches also exhibit the same behaviour as captive ones. |
Celebrating Freedom of Speech and Freedom of the Press
Today’s discussion about Freedom of Speech is a topic on the May 10-Minute Teacher calendar. Download a copy from the link below, hang it on your fridge, and use it to initiate a conversation with your kids anytime.
Today, people are always expressing themselves. Sometimes we do it in a blog post, other times we share our thoughts on social media outlets like Facebook and Twitter. We also talk to friends, family, and strangers about our religious beliefs, our political views, and everything in between. Freedom of speech is one of the most basic elements that allow us to continue in a democratic state. We have the right to talk about our choices and share our opinions with others, and to oppose things that we feel are detrimental to our liberties. With this freedom, we can speak out when we feel there is a wrong and ask others to help us make things right.
The First Amendment was included in the Bill of Rights, a document created to address important elements that many felt were missing from the new Constitution; it was ratified in 1791.
The First Amendment to the Constitution includes Freedom of Speech and Freedom of the Press. Here is a brief description from Scholastic.com:
“Freedom of Speech. This freedom entitles American citizens to say what they think, provided they do not intentionally hurt someone else’s reputation by making false accusations. Neither may they make irresponsible statements deliberately harmful to others, such as yelling, “Fire!” in a crowded theater when there is no fire. There are many issues about which Americans disagree, from child-rearing practices to baseball teams to Presidential candidates. Freedom of speech enables people to state their opinions openly to try to convince others to change their minds.
The First Amendment also gives you the right to disagree with what others say without fear of punishment by the government authorities. However, if you make an outrageous statement, such as, “The earth is flat,” free speech will not keep people from making fun of you. If you express an unpopular opinion — for example, that students do not get enough homework — don’t be surprised if your classmates avoid you. The First Amendment does not prevent social or peer pressure to conform to what others think.
Freedom of the Press. This freedom makes it possible for Americans to keep informed about what is going on in government. It helps them to be responsible citizens. Reporters and editors can criticize the government without the risk of punishment, provided they do not deliberately tell lies. Newspapers, magazines, and books, as well as television and movie scripts, do not have to be submitted for government inspection before they are published. This censorship would violate the First Amendment.” (Source: Scholastic.com)
Questions for conversation:
1. What do you think life would be like if we did not have freedom of speech?
2. What kind of news do you think we would see in the newspaper if there were no freedom of the press? Do you think there would be a newspaper?
3. What kind of speech or communication is not protected by freedom of speech? |
In these three tasks, you are going to use a brick being pulled along a surface covered in sandpaper to model the behaviour of an earthquake:
Turning the pulley to build up tension in the string is like the build-up of stresses at a fault, and the brick's movement over the sandpaper is like the slippage that happens in an earthquake.
You will compare your results with real earthquake data and evaluate the brick and sandpaper as an earthquake model.
Task A: Looking at the build-up of forces
Make sure the pulley is clamped onto the plank and attach the string from the pulley onto the force meter.
Use another string attached to the force meter to tie around the brick.
Stick or tape a ruler with a millimetre scale onto the plank and a pointer onto the brick: you should be able to measure the position of the brick against the scale to the nearest mm.
Start the brick at the end of the sandpaper furthest away from the pulley: make sure it is completely on the sandpaper.
Wear eye protection. Turn the pulley so it gradually increases the tension in the string (and the force on the brick) until the brick starts to move. Increase the tension slowly so that it takes several seconds before the brick slips.
Try this a few times. Watch what happens to the force meter reading: does the brick always begin to slip when the force reaches the same value?
Task B: Looking at the amount of slip
Place the brick back at the start position. This time, measure how far the brick moves each time it slips. Record your results in a table like the one shown here:

Pointer position (mm)
0 (Start position)
Continue until you have at least thirty readings.
Plot a histogram to show the frequency for each size of slippage.
What do you notice about the relative frequency of large slippages?
Compare your histogram with a histogram showing the frequency of different magnitudes of earthquake.
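Once the Task B readings are collected (or as a dry run beforehand), the histogram can be sketched in a few lines. The slip values below are made-up illustrative data, not real measurements; notice that, as with real earthquakes, small slips dominate and large ones are rare:

```python
# Hypothetical slip measurements (mm) from Task B -- illustrative values only.
from collections import Counter

slips_mm = [2, 3, 2, 5, 1, 2, 8, 3, 1, 2, 4, 1, 15, 2, 3,
            1, 2, 6, 2, 1, 3, 2, 1, 4, 2, 1, 25, 3, 2, 1]

# Bin the slips into 5 mm classes (0-4, 5-9, 10-14, ...)
bins = Counter(5 * (s // 5) for s in slips_mm)

# Print a simple text histogram: one '#' per slip in each class
for lower in sorted(bins):
    print(f"{lower:3d}-{lower + 4:3d} mm: {'#' * bins[lower]} ({bins[lower]})")
```

The long tail of the distribution (a single 25 mm slip against twenty-five slips under 5 mm) is the pattern you should compare against the earthquake magnitude-frequency histogram.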
Task C: Patterns, predictions and models
If you have access to a data logging kit with a force sensor, you could investigate some or all of the following questions:
Does the force return to a particular value after every slip?
Do bigger forces lead to bigger slippage?
Is there a critical level of force which triggers slippage?
Is there a relationship between the force drop and the size of slippage?
What patterns does your earthquake model show? Can we use it to predict when an 'earthquake' will happen or how big it will be?
Compare your results to standard earthquake models. |
From CSIRO Science Mail
Some ants use magnetic sensors to find their way.
Tiny magnetic sensors have been found in the antennae of ants. Ants may use these magnetic sensors to find their way from one place to another, similar to an in-built compass.
The ant being studied is a species called Pachycondyla marginata, which is found in the rainforests of South America. These ants migrate, moving from place to place depending on the season. This particular ant species migrates in a direction 13 degrees from the north-south axis of the Earth on average.
“Behavioral experiments suggest that ants can use the Earth’s magnetic field and the Pachycondyla marginata ants seem to take into account such information for migration,” says Jandira de Oliveira, a PhD student working on the study.
Jandira travelled from Brazil to Germany with the ants to work with researchers specialising in electron microscopy. They used beams of electrons on ultra-thin samples of the ants to observe the magnetic sensors.
The scientists found the magnetic sensors to be nano-sized iron oxide particles in the antennae, particularly next to an area called Johnston’s organ. Johnston’s organ is a bit of a mystery to scientists, but they have already discovered links between the organ and gravity and sound perception.
It seems that the magnetic particles are not produced by the ants in a biological process. Instead, it is likely that the magnetic particles come from dirt. “The ants we studied dwell in tropical soils that are full of very fine-grained iron minerals, so there is plenty of material available,” says Jandira.
The magnetic sensors in the antennae work by detecting the Earth’s magnetic field. Then, the sensors send the information via a signal from the nervous system to the brain.
It is important to note that not all ants navigate in the same way. For example, desert ants have evolved eyes that use sunlight patterns to navigate. “There are many different ant species, each one adapted to their habitat,” says Jandira. |
Question 1) Writing, and communicating through writing, has changed the way we think critically. Plato, for example, was only able to convey his critique because of writing.
Question 2) Writing before the advent of print forced early writers to consider how their words would sound if read out loud.
Question 3) Writing changes our relationship with time because it gives us the ability to keep accurate records, and we become forced to view our lives in relation to time.
Question 4) We become more aware of time, place, people, languages, structure of writing, composition.
Question 5) Writing relates to rhetoric and learned Latin in that the only people who wrote were educated males who were taught in Latin, a language whose sole base was academia.
Question 6) The novel and literature are not an effect of rhetoric but of vernacular languages, owing to the popularity of the literary style of female authors, who did not receive formal rhetorical training. (109)
Question 7) Poetry is a subset of rhetoric because
Question 8) Rhetoric is male due to the fact that only men were formally trained in rhetoric
The size and depth of the ozone hole over Antarctica was not remarkable in 2016. As expected, ozone levels have stabilized, but full recovery is still decades away. What is remarkable is that the same international agreement that successfully put the ozone layer on the road to recovery is now being used to address climate change.
The stratospheric ozone layer protects life on Earth by absorbing ultraviolet light, which damages DNA in plants and animals (including humans) and leads to health issues like skin cancer. Prior to 1979, scientists had never observed ozone concentrations below 220 Dobson Units. But in the early 1980s, through a combination of ground-based and satellite measurements, scientists began to realize that Earth’s natural sunscreen was thinning dramatically over the South Pole. This large, thin spot in the ozone layer each southern spring came to be known as the ozone hole.
The first image shows the Antarctic ozone hole on October 1, 2016, as observed by the Ozone Monitoring Instrument (OMI) on NASA’s Aura satellite. On that day, the ozone layer reached its annual minimum concentration, measuring 114 Dobson Units. For comparison, the ozone layer in 2015 reached a minimum of 101 Dobson Units. During the 1960s, long before the Antarctic ozone hole occurred, average ozone concentrations above the South Pole ranged from 260 to 320 Dobson Units.
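As background (this conversion is standard, not part of the NASA report itself): one Dobson Unit corresponds to a layer of pure ozone 0.01 mm thick at standard temperature and pressure, so the quoted values translate into physical thicknesses like this:

```python
# 1 Dobson Unit = a 0.01 mm layer of pure ozone at standard temperature
# and pressure. The DU values below are the ones quoted in the text.
DU_TO_MM = 0.01

for label, du in [("2016 minimum", 114), ("2015 minimum", 101),
                  ("1960s range (low)", 260), ("1960s range (high)", 320)]:
    print(f"{label}: {du} DU = {du * DU_TO_MM:.2f} mm of pure ozone at STP")
```

In other words, even the healthy 1960s ozone column would compress to a layer only about 3 mm thick, which is why modest losses in Dobson Units matter so much.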
The area of the ozone hole in 2016 peaked on September 28, 2016, at about 23 million square kilometers (8.9 million square miles).
“This year we saw an ozone hole that was just below average size,” said Paul Newman, ozone expert and chief scientist for Earth Science at NASA’s Goddard Space Flight Center. “What we’re seeing is consistent with our expectation and our understanding of ozone depletion chemistry and stratospheric weather.”
The second image presents an edge-on (limb) view of Earth’s ozone layer. These data were acquired on October 2 by the Ozone Mapping Profiler Suite (OMPS) during a single orbit of the Suomi-NPP satellite. It reveals the density of ozone at various altitudes, with dark orange areas having more ozone and light orange areas having less. Notice that the word hole isn’t literal; ozone is still present over Antarctica, but it is thinner and less dense in some areas.
In 2014, an assessment by 282 scientists from 36 countries found that the ozone layer is on track for recovery within the next few decades. Ozone-depleting chemicals such as chlorofluorocarbons (CFCs)—which were once used for refrigerants, aerosol spray cans, insulation foam, and fire suppression—were phased out years ago. The existing CFCs in the stratosphere will take many years to decay, but if nations continue to follow the guidelines of the Montreal Protocol, global ozone levels should recover to 1980 levels by 2050 and the ozone hole over Antarctica should recover by 2070.
The replacement of CFCs with hydrofluorocarbons (HFCs) during the past decade has saved the ozone layer but created a new problem for climate change. HFCs are potent greenhouse gases, and their use—particularly in refrigeration and air conditioning—has been quickly increasing around the world. The HFC problem was recently on the agenda at a United Nations meeting in Kigali, Rwanda. On October 15, 2016, a new amendment greatly expanded the Montreal Protocol by targeting HFCs, the so-called “grandchildren” of the Montreal Protocol.
“The Montreal Protocol is written so that we can control ozone-depleting substances and their replacements,” said Newman, who participated in the meeting in Kigali. “This agreement is a huge step forward because it is essentially the first real climate mitigation treaty that has bite to it. It has strict obligations for bringing down HFCs, and is forcing scientists and engineers to look for alternatives.”
NASA Earth Observatory images by Jesse Allen, using Suomi NPP OMPS data provided courtesy of Colin Seftor (SSAI) and Aura OMI data provided courtesy of the Aura OMI science team. Suomi NPP is the result of a partnership between NASA, NOAA and the Department of Defense. Caption by Kathryn Hansen, with contributions from Audrey Haar and Rebecca Lindsey. |
The term murre refers to 2 species, common murre (Uria aalge) and thick-billed murre (U. lomvia), the largest extant members of the auk family, though smaller than the recently extinct great auk. Murres weigh 800-1100 g and are up to 46 cm long. Plumage is dark brown to black above, mostly pure white below. Like all auks, murres come to land only to breed.
Murres occur in cooler waters of the North Pacific and North Atlantic oceans and adjacent parts of the Arctic Ocean. Common murres breed primarily in boreal and low arctic waters; most thick-billed murres breed farther north in low and high arctic waters.
Murres often breed in dense colonies on coastal cliffs and islands, laying a single, large egg on bare rock ledges on the cliff face or surface. They normally first breed when 5 years old. Incubation, shared by both parents, takes 32-34 days. A single chick is fed at a breeding site for 15-20 days and then taken to sea by one parent (usually the male), who remains to guard and feed the chick for about 4-8 weeks.
In Canada, both species are most abundant on the Atlantic coast. Small numbers of common murres breed in BC; thick-bills in the western Arctic. Almost 90% of eastern North American common murres breed in Newfoundland, with about 67% (400 000 to 450 000 pairs) at Funk Island.
Breeding distribution of thick-bills is also restricted; most breed at 11 sites in the eastern Arctic. The thick-billed murre population in eastern Canada totals 1.6 million pairs, representing the entire population in eastern North America and 75% of all thick-bills breeding in the western North Atlantic. Numbers of both species have been seriously reduced over the last century because of human disturbance, hunting, oil pollution and probably commercial fisheries development.
Murres are hunted by residents of Newfoundland and Labrador and by native people. Newfoundland residents were given their hunting right soon after they entered Confederation in 1949. However, until 1994, hunters could kill as many murres as they could access, with daily takes often exceeding 500 birds per hunter. Totals of about 600 000 to 900 000 were shot annually during the 1970s and 1980s, with current levels reduced to 200 000 to 400 000 birds each year. Although regulations now exist, enforcement is difficult. |
Fill in the blanks. Eight-tenths plus twenty-two hundredths equals ___ hundredths plus twenty-two hundredths, which equals ___ hundredths.
This problem is all about adding together two fractions, eight-tenths and twenty-two hundredths. And by working through the calculation from left to right, we can see how to work out the answer, what method we should use. And along the way, there are two missing numbers that we need to fill in. So let’s think about how to add eight-tenths and twenty-two hundredths together. And hopefully, along the way, we’ll consider how to fill in the blanks.
First of all, what do we notice about eight-tenths and twenty-two hundredths? What’s going to make it difficult to add them together at the moment? Well, we know that, in order to be able to add fractions together, they need to have the same denominator. And at the moment, they don’t. We have a fraction in tenths and another fraction in hundredths. So we’re going to have to change one of the fractions so that the denominator is the same as the other.
If we were to convert one hundredths into tenths, we’d need to divide by 10. But as the numerator or the top number in this fraction is 22, we can’t actually divide it by 10 and get a whole number. So we can’t write twenty-two hundredths as a number of tenths. What we can do, however, is to convert eight-tenths into hundredths. So let’s start off by doing that.
To convert a number of tenths into a number of hundredths, we need to multiply the denominator by 10. 10 times 10 equals 100. And for our new fraction to be equivalent to eight-tenths, we need to do the same to the numerator. Eight multiplied by 10 equals 80. So we can say that eight-tenths is the same as eighty hundredths. The fraction still represents the same amount. We’ve converted eight-tenths into eighty hundredths.
And now we can write our addition with two fractions that now have the same denominator: eighty hundredths plus twenty-two hundredths. So we can see that our first missing number, which was this numerator here, is 80.
Now that our fractions have the same denominator, we simply need to add them together. Remember, we don’t add the denominators together. We just add the numerators. Eighty hundredths plus twenty-two hundredths equals how many hundredths? The answer to this will be our final missing number. We just need to add 80 and 22 together.
Well, to do this quickly in our heads, we can partition 22 into 20 and two. Why would we do this? Well, we know straightaway that 80 plus 20 equals 100. So 80 plus two more than 20 equals two more than 100, or 102. Eighty hundredths plus twenty-two hundredths equals one hundred two hundredths. But is this the right answer? How can we have one hundred two hundredths? The numerator is bigger than the denominator.
Well, we can have a numerator larger than the denominator. It just means that the fraction represents a number that’s more than one. And these sorts of fractions are improper fractions. Here’s what one hundred two hundredths would look like. It’s the same as one and two hundredths.
So in this problem, we wanted to add together a number of tenths and a number of hundredths. We couldn’t do this straightaway. So we knew that we needed to convert the number of tenths into a fraction that was exactly the same but with a denominator of 100. So eight-tenths became eighty hundredths. 80 was our first missing number.
Because our two fractions then have the same denominator, we could add eighty hundredths and twenty-two hundredths quickly together. And the answer was one hundred two hundredths. 102 was our second missing number. The two blanks should be filled in with the numbers 80 and 102. |
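The whole calculation can be checked with Python's exact-arithmetic Fraction type, which keeps numerators and denominators as whole numbers:

```python
# Verify the worked example: 8/10 + 22/100 = 80/100 + 22/100 = 102/100.
from fractions import Fraction

eight_tenths = Fraction(8, 10)
twenty_two_hundredths = Fraction(22, 100)

# Scaling 8/10 to hundredths (multiply top and bottom by 10) gives 80/100.
assert eight_tenths == Fraction(80, 100)

total = eight_tenths + twenty_two_hundredths
assert total == Fraction(102, 100)  # one hundred two hundredths

print(total)         # Fraction reduces to lowest terms: 51/50
print(float(total))  # 1.02, i.e. one and two hundredths
```

Note that Fraction automatically reduces to lowest terms, so 102/100 is stored as 51/50; both represent the same improper fraction, one and two hundredths.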
Researchers from North Carolina State University and the University of Texas have revealed more about graphene’s mechanical properties and demonstrated a technique to improve the stretchability of graphene – developments that should help engineers and designers come up with new technologies that make use of the material.
Graphene is a promising material that is used in technologies such as transparent, flexible electrodes and nanocomposites. And while engineers think graphene holds promise for additional applications, they must first have a better understanding of its mechanical properties, including how it works with other materials.
“This research tells us how strong the interface is between graphene and a stretchable substrate,” says Dr. Yong Zhu, an associate professor of mechanical and aerospace engineering at NC State and co-author of a paper on the work. “Industry can use that to design new flexible or stretchable electronics and nanocomposites. For example, it tells us how much we can deform the material before the interface between graphene and other materials fails. Our research has also demonstrated a useful approach for making graphene-based, stretchable devices by ‘buckling’ the graphene.”
The researchers looked at how a graphene monolayer – a layer of graphene only one atom thick – interfaces with an elastic substrate. Specifically, they wanted to know how strong the bond is between the two materials because that tells engineers how much strain can be transferred from the substrate to the graphene, which determines how far the graphene can be stretched.
The researchers applied a monolayer of graphene to a polymer substrate, and then stretched the substrate. They used a spectroscopy technique to monitor the strain at various points in the graphene. Strain is a measure of how far a material has stretched.
Initially, the graphene stretched with the substrate. However, as the substrate continued to stretch, the graphene eventually began to stretch more slowly and slide on the surface instead. Typically, the edges of the monolayer began to slide first, with the center of the monolayer stretching further than the edges.
“This tells us a lot about the interface properties of the graphene and substrate,” Zhu says. “For the substrate used in this study, polyethylene terephthalate, the edges of the graphene monolayer began sliding after being stretched by 0.3 percent of their initial length. But the center continued stretching until the monolayer had been stretched by 1.2 to 1.6 percent.”
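Engineering strain is just a ratio of length change to original length. A minimal sketch using the percentages reported in the study, with a hypothetical 10 mm sample length chosen purely for illustration:

```python
# Engineering strain: (stretched length - original length) / original length.
# The 10 mm sample length is hypothetical; the percentages restate the
# article's reported values for graphene on a PET substrate.
def strain(original_mm: float, stretched_mm: float) -> float:
    return (stretched_mm - original_mm) / original_mm

L0 = 10.0  # hypothetical graphene-on-PET sample, 10 mm long
print(f"{strain(L0, 10.03):.3%}")  # 0.3 % strain: edges begin to slide
print(f"{strain(L0, 10.12):.3%}")  # 1.2 % strain: center still stretching
```

The gap between those two strain values is what the interface measurements quantify: strain transferred from the substrate into the graphene before sliding takes over.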
The researchers also found that the graphene monolayer buckled when the elastic substrate was returned to its original length. This created ridges in the graphene that made it more stretchable because the material could stretch out and back, like the bellows of an accordion. The technique for creating the buckled material is similar to one developed by Zhu’s lab for creating elastic conductors out of carbon nanotubes.
The paper, “Interfacial Sliding and Buckling of Monolayer Graphene on a Stretchable Substrate,” was published online Aug. 1 in Advanced Functional Materials. Lead author of the paper is Dr. Tao Jiang, a postdoctoral researcher at NC State. The paper was co-authored by Dr. Rui Huang of the University of Texas. The research was funded by the National Science Foundation (NSF) and the NSF’s ASSIST Engineering Research Center at NC State.
Source: North Carolina State University |
Sitting high up in the Andes at an elevation of 3,400 meters is the Inca capital, Cusco, designated the ‘Historical Capital of Peru.’ This UNESCO World Heritage Site is the oldest continuously inhabited city in the Americas. Although the Spaniards demolished most of the important buildings of the city, they used the foundations for their colonial architecture. Cusco was rocked by three devastating earthquakes, in 1650, 1950 and 1986, but despite the destruction of a number of colonial buildings, the Inca foundations have stood strong. Most people use Cusco as a stopover point on the way to Machu Picchu and the other Inca centers in the vicinity, but the city, with its rich heritage, is definitely worth exploring.
Santo Domingo and Coricancha
The Inca ruins of the temple Coricancha (Q’orikancha in Quechua) make up the base of the colonial church and convent of Santo Domingo. Constructed in the mid-15th century, the temple was once the richest in the Inca empire. During its heyday the temple and the courtyard were literally covered in gold, hence its name, which translates as ‘Golden Courtyard.’ About 700 solid-gold sheets, each weighing about 2 kg, lined the walls of the temple, and the octagonal font in the middle of the courtyard was also covered with 55 kg of solid gold. Besides this, there were also other solid-gold treasures, all of which were lost with the arrival of the conquistadors. The temple was looted and the gold melted down. All that remains of the temple today is the brilliant stonework which forms the base of the church. The 6-meter-high curved wall, visible from both the inside and the outside of the church, has withstood the violent earthquakes that Cusco has faced.
One of the most significant ruins in the immediate vicinity of Cusco is the massive Sacsayhuaman complex. This marvel of engineering is thought to be the remains of a much larger fortress which was destroyed by the Spaniards. Only 20% of the original structure remains, as many of the stones from the walls were used by the Spaniards to build their own houses in the city. Three different areas make up the site, but the most striking is the three-tiered zigzag fortifications. Sacsayhuaman was designed as the head of a puma, with the 22 zigzagged walls forming the teeth. The stones here are so massive that one weighs more than 300 tonnes. Engineers are amazed at the precision with which these massive blocks fit so tightly together without the use of mortar.
La Compania de Jesus
This baroque-style Jesuit church was built in the 16th century on the foundations of the palace of Huayna Capac, the last ruler of the undivided Inca Empire. It was reconstructed after the earthquake in 1650. When the church was being built it was surrounded by controversy, as the Jesuits wanted to make it the grandest church in Cusco. The archbishop of Cusco, however, took offense at this development, as he felt it would rival the cathedral. Eventually, on the intervention of Pope Paul III, the cathedral won, but by then it was too late, as the La Compania church was nearing completion with one of the biggest altars in Peru.
Plaza de Armas
The Plaza de Armas has always been at the forefront of Cusco’s life right from the time of the Incas. It was called Huacaypata or Aucaypata by the Incas and was the nerve center of the capital. Two flags are flown here – the Peruvian flag and the rainbow-colored Tahuantinsyon flag which represents the four quarters of the Inca empire. The plaza is flanked by some of the grandest historic buildings of the city including the cathedral and the La Compania de Jesus. The church of El Triunfo, the first Christian church in Cusco is situated at the right of the cathedral and the church of Jesus Maria is located on the left. Many people throng the plaza especially when it’s lit up in the evenings.
No trip to Cusco would be complete without visiting one of the most famous archeological sites in the world, the Inca city of Machu Picchu. Situated at an altitude of 2,438 meters in a jaw-dropping location, the 15th-century citadel is a marvel of Inca engineering. Massive crafted stones fit together in near perfection all over the site, which has sixteen stone waterfalls in sequence down one side. Although Machu Picchu was located only about 80 kilometers from Cusco, it remained undiscovered by the Spanish and thus escaped the plunder. The site can be visited by taking a bus from the town of Aguas Calientes or by trekking the Inca trails.
The grand cathedral is another of Cusco’s buildings that were built on the site of an Inca palace. Construction on the cathedral began in 1559 and took more than a century to complete. The cathedral has one of the greatest collections of colonial art from the Cusco school, which is noted for its combination of 17th-century European styles and the iconography of indigenous Andean artists. One of the most famous paintings of this school of art is located in the far east corner of the cathedral. The painting of the Last Supper by Quechua artist Marcos Zapata has been embellished with Andean features, specifically the food on the table. The oldest surviving painting in Cusco, which shows the entire city during the earthquake of 1650, is also located in the cathedral.
About 6.5km from Cusco at an altitude of 3,700 meters in the Sacred Valley is another interesting archaeological site of the Incas whose exact function still remains a mystery. Tambomachay is also known as El Bano del Inca or the Bath of the Inca possibly because of the constantly flowing water. A series of small aqueducts, waterfalls, and canals run through the terraced rocks of the region fuelling speculation that it could have served as a spa of sorts for the Inca ruler and the nobility. However, the precise water features of the site could also mean that it had a ceremonial function. Other archaeologists feel that the site could have been used for military purposes because of the terraced nature of the site.
To the northeast of the Plaza de Armas you’ll find the area of San Blas which sits higher up on a hill. The little streets from the north of the Plaza lead to this quaint neighborhood with its narrow cobblestone streets and small art galleries. The walk up the narrow streets is especially interesting as they are lined by ancient Inca walls. Along the east wall on the ancient Hatunrumiyoc is the famous 12 sided stone. San Blas Plaza is a popular spot for visitors, especially on Saturdays when colorful market stalls fill the area. At the end of the plaza is the simple San Blas church notable for its beautiful baroque, gold-leaf principal altar.
San Francisco Church
One of the few churches in Cusco that did not require a major overhaul after the earthquake of 1650 is the austere San Francisco Church. Located a few blocks southwest of the Plaza de Armas, the church although not as spectacular as the other churches in Cusco, has a few notable features. The large collection of colonial religious paintings and the carved cedar choir are some of the highlights of a visit here. There are also two crypts where the bones of their occupants have been arranged in patterns, a common feature of Franciscan churches. A museum attached to the church has a huge painting of the family tree of St Francis of Assisi, said to be the largest painting in South America, measuring 9m by 12m.
Puka Pukara is an Inca fortress which literally means ‘red fortress’ probably from the red color of the stones at dusk. This fortress which is located about 7kms from the city was part of the defense of Cusco. It is composed of large walls, staircases, and terraces. The fort overlooks the Cusco valley, providing one with spectacular views. There is not much information about Puka Pukara just like many of the Inca sites, but the popular theory is that it was partially a military base.
Basilica de la Merced
The third most important colonial church after the Cathedral and La Compania is the Basilica de la Merced which was also destroyed in the earthquake in 1650. The original church was built in 1536 but the present church was built between 1657 and 1680 after the earthquake. The tombs of Diego de Almagro and Gonzalo Pizarro, two of the most famous conquistadors are located on the far side of the cloister. A small religious museum containing vestments purportedly worn by conquistador and friar Vicente de Valverde is also located here. Visitors are especially drawn to the most famous possession of the museum, a priceless solid gold monstrance. This beautiful 1.2m high piece has 1500 diamonds, 600 pearls and is covered with rubies and emeralds.
About 50kms northwest of Cusco on a high plateau at a height of 3,500 meters is this gorgeous archaeological site containing unusual Inca ruins. Located in a remote area of the Sacred Valley, the terraced circular depressions with an irrigation system that make up this site are definitely a sight to behold. The largest depression is about 30 meters deep. The real purpose of the site is not known but it is believed to be an agricultural laboratory of sorts given the microclimate that exists between the terraces. Whatever the purpose of the site, there is no denying the mysticism and beauty of this agricultural amphitheater.
Just So You Know:
- Had Cusco remained Peru’s capital, it would be one of the highest capital cities in the world.
- The charred statue of the crucifixion of Jesus known as Senor de los Temblores (Lord of the Earthquakes) is popularly believed to have saved the city from any further damage during the 1650 earthquake. To honor the patron saint of Cusco, an annual procession takes place on Holy Monday through the Plaza de Armas.
- Seeing an alpaca walking the streets of Cusco is a perfectly normal sight.
- The cute little guinea pig that people keep as pets is a delicacy in Cusco called ‘cuy.’ The dish is even featured on the painting of the Last Supper at the Cathedral.
Get Some Culture:
- Museo Inka – One of the best places to learn more about the Incas is at this modest museum housed in one of the finest restored colonial houses, northeast of the Plaza de Armas. It has a beautiful collection of artifacts including the largest collection of ‘queros’ (ceremonial wood cups).
- Inti Raymi (Festival of the Sun) – This is, without doubt, the most important event in Cusco, when the whole city turns out to celebrate. It features a reenactment of the Inca winter solstice ceremony at Sacsayhuaman and takes place every year on June 24th.
Grab A Bite:
- Jack’s Cafe – One of the most popular cafes on the street leading to San Blas is this American style cafe with Australian roots. If you’re looking for a great place for breakfast this is the place. The cafe has a fairly extensive menu and serves all other meals too.
- Marcelo Batata – To indulge in some delectable Andean cuisine with a twist, Marcelo Batata is the place to head to. The specialty here is Criolle and Andean cuisine using traditional Peruvian ingredients. |
Of Canada’s approximately 7,600 annual fires, just over half are started by people, most by accident, and just under half by lightning. Because they generally start in less accessible areas, fires started by lightning are about 10 times as large as human-caused fires. The average area burned in Canada each year is about 2.3 million hectares, or 0.6 per cent of the country’s forest. Half of all recorded fires fail to reach 0.01 ha in size, but occasional fires exceed 100,000 ha. In fact, 2 per cent of fires account for 98 per cent of total burned area. The number of fires is fairly steady from year to year, but annual burned area varies greatly. For example, in 2013, over 4.2 million ha burned, whereas in 2009, the number was just over 781,000 ha. The boreal forest makes up the bulk of the burned area, most of which is not seen by the average citizen.
Forest fires are classed mainly according to whether they remain on the ground or rise into the crowns of trees. Combustible matter in surface fires includes leaf litter, dry moss and lichen, dead grass, decomposing humus, dead or rotting wood, and live brush and herbs. In terms of crown fires, combustibles include live foliage, dead branches, and small twigs. In Canada, only conifer forests will support crowning combustion.
A forest fire is physically described by its rate of advance downwind, by the weight of combustibles consumed, and by its frontal intensity. The latter is the rate of energy output per unit length of the fire's front, expressed in kilowatts per metre (kW/m). Frontal intensities range up to 150,000 kW/m in crown fires, with flames of 50 m or so, but a gentle surface fire may produce less than 100 kW/m, with flames no higher than 0.5 m. The maximum known downwind rate of advance is about 100 m/min (6 km/h), but few fires spread faster than 25 m/min and most spread at less than 10 m/min. This immense variation in behaviour depends on the moisture content of the combustible matter as affected by past and present weather, the current wind speed, and the type of forest. Similarly, the great swings in the annual burned area (nationally and from region to region) are primarily due to variation in the national pattern of spring and summer weather.
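Frontal intensity of the kind quoted above is conventionally estimated with Byram's fireline intensity relation, I = H × w × r. A minimal sketch, using the commonly cited heat-of-combustion figure of about 18,000 kJ/kg and illustrative fuel loads and spread rates (these inputs are assumptions, not values from the text):

```python
# Byram's fireline intensity: I = H * w * r  (kW/m), where
#   H = heat of combustion of the fuel (kJ/kg), ~18,000 kJ/kg is typical,
#   w = weight of fuel consumed per unit area (kg/m^2),
#   r = rate of spread (m/s).
H = 18_000.0  # kJ/kg (commonly used approximation)

def frontal_intensity_kw_per_m(fuel_kg_per_m2: float,
                               spread_m_per_min: float) -> float:
    r = spread_m_per_min / 60.0  # convert m/min to m/s
    return H * fuel_kg_per_m2 * r

# A gentle surface fire: light fuel load, slow spread -> ~90 kW/m
print(round(frontal_intensity_kw_per_m(0.2, 1.5)))
# An intense crown fire: heavy fuel load, near-maximum spread -> ~90,000 kW/m
print(round(frontal_intensity_kw_per_m(3.0, 100.0)))
```

The two cases bracket the range described in the text, showing how fuel load and spread rate multiply together to span three orders of magnitude in intensity.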
Canadian forest fire agencies prevent an immense amount of burning that would otherwise occur; nevertheless, after weeks without rain, on hot, dry, windy days, it is impossible to prevent a few fires from becoming very large. The Canadian Forest Fire Weather Index (FWI) System is used throughout Canada to measure daily the susceptibility of the forest to fire. Based on its output, the Canadian Forest Fire Behaviour Prediction (FBP) System provides numerical estimates of spread rate and frontal intensity in specific forest types. All this information is used by the fire-control agencies to plan their day-to-day preparedness levels and fire-control operations.
Fire management on most forest land in Canada is the responsibility of provincial or territorial forestry agencies. The Canadian Interagency Forest Fire Centre, established in Winnipeg in 1982, provides daily information during forest-fire season, keeps statistics, and coordinates inter-agency exchanges of fire-control forces and equipment. The federal government's Canadian Wildland Fire Information System collects and processes daily weather data and provides fire-danger ratings to the various control agencies. The Canadian Forest Service, in collaboration with provincial and territorial fire management agencies, universities and industry, carries out most forest fire research in Canada, including the development of systems for danger-rating and management.
Fire detection in Canada is usually performed by aerial patrols, along planned flight patterns. The patrols are backed up by systems of lightning detectors that pinpoint probable fire locations. Fire-control methods include aerial water bombing (sometimes with fire-retardant additives) and, on the ground, the use of portable water pumps with hose lines, tank trucks, bulldozers and hand tools. Burning out from a prepared line to stop an oncoming fire is sometimes feasible. All Canadian forest fire agencies utilize a range of computer programs based on the outputs of lightning-detection and remote-sensing equipment, the Forest Fire Weather Index, the Forest Fire Behaviour Prediction System, historical fire-data analyses, and topographical maps. These programs guide resource deployment, design aerial patrol layout, predict fire occurrence and model fire growth.
The practice of prescribed burning has a place in Canadian forest management. Prescribed fires, set deliberately by professionals to burn a specified area under chosen conditions, are used in site preparation following boreal forest clear-cuts, in connection with partial cutting in certain pine forests and for various other purposes in vegetation management.
Fire, along with climate and soil, is one of the three primary natural factors that have shaped the present Canadian forest. Much of this forest is, in its natural state, ecologically dependent on recycling by random periodic fire for its long-term stable existence on the landscape. Exceptions to this pattern include the southeastern hardwood forest, forests in the wetter areas of the East and West Coasts, and forested bogs and swamps in general. In the boreal forest, for example, the main tree species are black spruce, jack pine, lodgepole pine, trembling aspen and white birch, all of which are adapted to regenerate even after all individuals over a large area have been killed by fire. Aspen suckers grow directly from its root systems, while other hardwoods sprout from the base of dead trees. Jack and lodgepole pines and black spruce store live seed in their crowns for years, only shedding them after the cones are opened by heat from a fire.
Other prominent species, such as red and white pine, white spruce and Douglas fir, require ground that has been prepared and opened up by fire for optimum regeneration, but some individuals must survive to supply seed. In pre-contact times, ignition was mainly by lightning, and, without control, perhaps two to three times as much area burned annually as at present. Ecologically, then, fire is neither good nor bad, but simply an environmental necessity for the perpetuation of the forest in its natural state.
It is difficult to measure to what degree forest fire represents an economic loss to forest-dependent industries. However, the cost of fighting forest fire may be calculated, and the government estimates the figure at about $500 million to $1 billion per year. (See also Forest Economics.)
The ecological realities of fire create a dilemma in large natural parks and other unmanaged areas because certain kinds of forests cannot be maintained in perpetuity in the absence of fire. The administrators of Canada's national parks are well aware of this problem and are developing operational combinations of fire control and prescribed fire to cope with it. The interaction of ecological and economic factors complicates forest-fire management in general, and debate is continuous about the optimum level of fire-control effort. The Canadian Forestry Association and the provincial forestry departments carry out fire-prevention programs aimed at educating people about their responsibilities toward the forest.
See also Forestry.
Brain Stem Gliomas

Gliomas are tumors of nerve tissue; they develop from supporting glial tissue and are the most common brain tumors. In most cases, a glioma is composed not of a single cell type but of several, and it is named after the predominant tumor cell type. Gliomas grow by infiltration.
These gliomas are located in the lower part of the brain, where it joins the spinal cord. This part of the brain controls many vital functions; if these are impaired, the patient's life is in serious danger. This diagnosis means that there is a malignant tumor which can spread.
One specific feature of these gliomas is that they mostly occur in very young people (younger than 20 years of age), with the highest frequency between ages 5 and 10. When they occur in adults, the tumor has often already formed metastases.
The most common symptoms of this disease are:
- Visual impairment: double vision
- Swallowing impairment
- Muscle weakness
- Inability to walk
- Acute headaches
- Extreme tiredness
- Behavioral and personality changes.
Other symptoms may occur due to the proximity of the tumors to structures responsible for blood pressure control, breathing, motor functions, and others.
The symptoms are relatively aggressive, and the patient should see a doctor immediately. An MRI (magnetic resonance imaging) scan can easily confirm or rule out the presence of the tumor. In most cases, a biopsy is not needed.
As far as treatment is concerned, unlike with other tumors, surgery is not often performed. This is due to the location of the tumor, which makes it very difficult to remove without damaging nearby structures. In most cases, the common treatment options are radiotherapy and chemotherapy. However, advances in microsurgical techniques can be very effective in some cases, although unfortunately the chances of recovery are not high. The first step in treatment is to reduce pressure on the brain. As mentioned, these types of tumors, which arise from glial brain cells, mostly affect children and teenagers. Both radiation therapy and chemotherapy have side effects on young patients.
Using sea slugs as models, scientists someday may be able to design learning protocols that improve long-term memory formation in humans, a new study suggests.
The researchers used information about biochemical pathways in the brain of the sea slug Aplysia to design a computer model that identified the times when the mollusk’s brain is primed for learning. They tested the model by submitting the animals to a series of training sessions, involving electric shocks, and found that Aplysia experienced a significant increase in memory formation when the sessions were conducted during the peak periods predicted by the model.
The proof-of-principle study may someday help scientists discover ways to improve human memory, the researchers said.
"This is very impressive," David Glanzman, a neurobiologist at the University of California Los Angeles, said of the study, in which he was not involved. "If someone had asked me ahead of time, 'Are you going to be able to improve learning if you model these two pathways?' I would have predicted no."
A simple brain
Scientists have been studying the brain of Aplysia since the 1960s, and the animals have revealed many secrets of learning and memory in humans. The sea slug's central nervous system is relatively simple, with only 10,000 neurons, compared with the approximately 100 billion found in humans, explained the study lead author John Byrne, a neurobiologist at the University of Texas. Moreover, Aplysia's neurons are large and easily accessible.
"You can work out its neural circuitry and behavior, and then you can train the animal and look for changes that are associated with learning," Byrne told LiveScience.
Learning in Aplysia takes the form of what scientists call sensitization. When researchers poke the animal or give it an electric shock, the sea slug will pull in its siphons, which are funnel-like appendages. An untrained slug will retract its siphons for only a few seconds, but as the animal learns that its environment is dangerous, it will hold in its appendages for longer times.
Periodically poking the slug causes apparent changes in its neurons, allowing the animal to form a memory that lasts for more than a week (a considerable time for animals that live only a year).
In the 1980s, researchers discovered that training Aplysia with five pulses, one administered every 20 minutes, effectively helped the animals produce long-term sensitization memories. Since then, scientists have learned that the activation of two proteins is critical for the sea slug to develop these memories.
Creating a model
Byrne and his colleagues wondered if they could come up with a better learning protocol to stimulate memory formation by entering their information on the biochemical pathways that activate these two proteins into a computer simulation.
"We told the computer, 'Run simulations with these five training trials, but try every different permutation of the intervals between the trials to find ones that maximize the reactions,'" Byrne said.
The computer determined that trials (or electrical pulses) given at intervals of 10, 10, 5 and 30 minutes would optimize the biochemical reactions.
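The search described above — fix five trials, sweep the four inter-trial intervals through a simulation, and keep the schedule that maximizes the modeled response — can be sketched as follows. The two-exponential "pathway" model here is an invented stand-in for illustration only; the study used a detailed kinetic model of the real biochemical cascades, and with that model the search yielded the 10-10-5-30-minute schedule.

```python
import math
from itertools import product

def simulated_response(intervals, tau_fast=8.0, tau_slow=25.0):
    """Toy stand-in for the biochemical simulation: two pathway activities
    decay with different time constants after each pulse; their interaction
    at each trial is summed as a proxy for memory formation."""
    fast = slow = total = 0.0
    for gap in intervals:  # minutes between successive trials
        fast = fast * math.exp(-gap / tau_fast) + 1.0
        slow = slow * math.exp(-gap / tau_slow) + 1.0
        total += fast * slow
    return total

# Five trials -> four intervals; exhaustively try every combination,
# mirroring the brute-force structure of the study's search.
candidates = (5, 10, 15, 20, 25, 30)
best = max(product(candidates, repeat=4), key=simulated_response)
standard = (20, 20, 20, 20)
print("best schedule found by the toy model:", best)
print("gain over the standard 20-minute protocol:",
      simulated_response(best) / simulated_response(standard))
```

The point is the search procedure, not the numbers: a different internal model produces a different optimal schedule.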
When the researchers tested this enhanced protocol with live sea slugs, they found that the animals still remembered the shock after five days, whereas slugs trained with the standard 20-minute intervals did not.
They also tested their protocol in cultured cells. They removed the sensory neurons and motor neurons — which control reflexes — from slugs' brains and allowed the cells to re-establish connections in a cell culture. They replaced shock with serotonin, a neurotransmitter that facilitates connections between the two types of neurons during reflexes.
The researchers found that serotonin pulses given with both protocols produced long-term changes in the strength of the connections between neurons, but the enhanced protocol resulted in connections that were stronger and lasted longer.
Proof of principle
"I think it's a very exciting study," said Samuel Schacher, a neurobiologist at Columbia University, who was not involved in the new research. "But whether or not this can be taken advantage of in people, at least from a neurobiological point of view, is an open question." The Aplysia brain has been heavily studied, he said, but scientists have a much less complete understanding of how particular neural systems in human, and other mammalian, brains work.
Schacher said the study "will be something that will encourage lots of research and approaches down the road," and perhaps its principles can be applied to humans in 10 years.
Byrne stresses that the study is a proof of the principle that scientists can come up with a better learning protocol if they have sufficient information about the biochemical reactions in the brain.
"We currently use drugs to improve memory, but drugs have undesirable side effects," he said. "This shows that there may be an alternative way to enhance memory that can potentially be taken to the classroom situation."
The study was published online Dec. 25 in the journal Nature Neuroscience.
Copyright 2011 LiveScience, a TechMediaNetwork company. This story is republished here with permission.
The Atlantic slave trade involved an estimated 12.7 million enslaved Africans and lasted nearly four centuries, while the Indian Ocean trade included more than a million people, but began earlier and continued longer. Over one quarter of those victims boarded slave ships after 1807, when the British and US governments passed legislation curtailing (and ultimately banning) maritime human trafficking. As world powers negotiated anti-slave trade treaties thereafter, British, Portuguese, Spanish, Brazilian, French, and US authorities began seizing ships suspected of prohibited trade, raiding coastal slave barracks, and detaining newly landed slaves in the Americas, Africa, Atlantic and Indian Ocean islands, Arabia, and India. In this process, naval courts, international mixed commissions, and local authorities decided the fates of the survivors around the Atlantic and Indian Ocean littorals. Between 1808 and 1896, this judicial network emancipated roughly 6 percent of an estimated 4 million enslaved Africans. This website retraces the lives of over 250,000 people emancipated under global campaigns to abolish slavery, as well as thousands of officials, captains, crews, and guardians of a special class of people known as "Liberated Africans."
Key Locations of British Abolition Efforts After 1808
This network of international courts produced extensive documentation about tens of thousands of people victimized by the slave trade. These records are scattered in many archives and are written in multiple languages. Each case adjudicated before these courts usually contains information about the condition of enslavement along the coast of West Africa, the events leading up to the seizure of the slave ship, and the judicial process resulting in emancipation, which was usually followed by periods of indentured servitude lasting several years.
The most fascinating historical evidence these courts produced is the registers of Liberated Africans. These records are descriptive lists of people physically removed from slave ships or captured close to the African coast. The worldwide collection amounts to detailed records for over 100,000 individuals. The data includes their African names, aliases, age, sex, height and a brief physical description, among other details worthy of historical analysis. Beyond doubt, the scale of record-keeping in multiple languages enables an unprecedented analysis of: 1) a major branch of the African diaspora; 2) the socio-economic development of the Caribbean; 3) slavery as a crime against humanity; 4) a global human rights movement; and 5) complex meanings associated with "identity," "slavery," "indentured servitude," and "freedom."
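As a hypothetical sketch of how one register entry might be modeled for digitization — the field names follow the details listed above, while the `ship` and `court` linking fields and all sample values are assumptions, not drawn from the project's actual database:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RegisterEntry:
    """One person from a Liberated African register (hypothetical schema)."""
    african_name: str
    aliases: List[str] = field(default_factory=list)
    age: Optional[int] = None
    sex: Optional[str] = None
    height_cm: Optional[float] = None
    physical_description: str = ""
    ship: Optional[str] = None   # assumed field linking the entry to a court case
    court: Optional[str] = None  # assumed field for the adjudicating court

# Illustrative sample entry; the values are made up for the example
entry = RegisterEntry(african_name="Ajayi", aliases=["Samuel"], age=13, sex="M")
print(entry.african_name, entry.aliases)
```

Explicit optional fields matter here because, as the text notes, the records are scattered and multilingual, so many entries survive only partially.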
Collaborative research related to the global diaspora of Africans and their descendants is challenging because the documentation is extensive, multilingual, and scattered around the world in hundreds of archives, libraries, churches, courthouses, government offices, museums, ports and personal collections. The overall aim of this project is to bring together as much data as possible regarding the transnational links between these international courts and piece together the lives of over 250,000 Liberated Africans. Unfortunately, the exact number of courts, cases and people involved in the process of abolitionism, and indeed when, where and how many Liberated Africans resettled around the world, remains unclear. The long-term goal of this project is to resolve these issues through the reorganization of voluminous documentation generated during hundreds of trials, and by following individuals before, during, and after emancipation. The contributors of this project are constantly searching for source materials related to this theme from around the world, and are working hard to upload as much data as possible on an ongoing basis. For more information about the sources click here.
Oral communication is the process of verbally transmitting information and ideas from one individual or group to another. It is the process in which messages or information are exchanged between sender and receiver by word of mouth. It includes individuals conversing with each other, be it direct conversation or telephone conversation. Speeches, presentations and discussions are all forms of oral communication. Oral communication is generally recommended when the communication matter is of a temporary kind or where direct interaction is required. Face-to-face communication (meetings, lectures, conferences, interviews, etc.) is significant for building rapport and trust.
Formal and informal communication
Oral communication can be either formal or informal. Formal communication is the exchange of official information that flows along the different levels of the organizational hierarchy and conforms to the prescribed professional rules, policies, standards, processes and regulations of the organization. Formal communication follows a proper predefined channel of communication and is deliberately controlled. It is governed by the chain of command and complies with all the organization's conventional rules. (“Formal communication”, 2018)
Informal communication is the casual and unofficial form of communication wherein information is exchanged spontaneously between two or more persons without conforming to the prescribed official rules, processes, systems, formalities and chain of command. Informal communications are based on personal or informal relations, such as friends, peers, family and club members, and are thus free from the organization's conventional rules and other formalities. In the business context, informal communication is called a “grapevine”, as it is difficult to define the beginning and end of the communication. (“Informal communication”, 2018)
Effective Oral communication
The main rules of proper oral communication include:
- Preparation: Before communicating orally, the speaker should prepare physically and mentally
- Clear pronunciation: Clear pronunciation of message sender is the main factor of oral communication. If it is not clear, the aim of the message may not be achieved
- Unity and integration: The unity and integration of the speech is a must for successful oral communication.
- Precision: Precision is needed to make oral communication effective. The meaning of the words must be concrete.
- Natural voice: The speaker's voice must not fluctuate during oral communication, and an artificial voice must be avoided.
- Avoiding emotions: During oral discussion, excessive emotion can divert a speaker from the main subject, so the speaker should keep emotions in check
- Efficiency: The speaker's efficiency and skill are necessary for effective oral communication.
Oral Communication in business
Among desired skills and attributes, communication is often the dominant issue, both generally and in business specifically. Communication shapes academic, career and organisational success. Emerging research on graduate employability indicates that communication skills are equally important in less developed regions such as India and China, and within business, communication is critical for successful job performance and organisational achievement. In the UK, the recent drive for developing entrepreneurial effectiveness in new graduates acknowledges the important role of communication in ensuring graduates are able to network, negotiate, build trust and articulate ideas and information within industry. Precisely which elements of the oral communication skill set are most required by industry has been subject to considerable review, impeded by ambiguities in the exact meaning of the skill components, a problem common to many targeted employability skills. (D. Jackson, 2014)
Oral communication in the workplace. In the context of workplace communication, good communication skills are seen as fundamental to success and as an additional merit. From the perspective of employers, oral communication skills are very important for managers to possess in order to carry out tasks efficiently in the workplace. The significant role of oral communication skills in multinational companies was reported by between 71 and 80 percent of respondents, who ranked their usage as follows: telephone conversations, informal work-related discussions, meetings, giving oral presentations, and explaining and demonstrating to subordinates and other colleagues. Higher institutions in Malaysia are seeking to produce creative managers in different fields to meet the needs of thriving resources in various situations. In fact, at present, one of the key objectives of Malaysian higher institutions is to produce a large number of graduates with a high ability to communicate effectively in the workplace. To achieve this goal, higher institutions are designing programs with a focus on communicative skills. (M. A. Moslehifara, N. A. Ibrahim, 2012)
- Business Jargons, 2018, Formal Communication
- Gray E. F., 2010, Specific Oral Communication Skills Desired in New Accountancy Graduates, Business Communication Quarterly, 40
- Moslehifara M. A., Ibrahim N. A., 2012, Language Oral Communication Needs at the Workplace, Procedia - Social and Behavioral Sciences, Vol. 66
- Mojibur Rahman M., 2010, Teaching Oral Communication Skills: A Task-based Approach, ESP World, Vol. 1, No. 27
- Jackson D., 2014, Graduate performance in oral communication skills and strategies for improvement, The International Journal of Management Education, Vol. 12, No. 1
Author: Katarzyna Górna
Scientists in Australia announced a startling discovery this week when they revealed that a piece of rock brought back by the crew of the Apollo 14 moon landings was actually originally from Earth.
Writing in the journal Earth and Planetary Science Letters, the scientists suggested that the rock may have been part of debris catapulted to the moon from Earth after an asteroid collided with our planet billions of years ago.
According to The Guardian, the rocks were gathered on the Apollo 14 mission, which was launched in 1971 and was the third space mission to land successfully on the moon. Astronauts Alan Shepard, Stuart Roosa and Edgar Mitchell spent several days conducting scientific experiments and observations, and Shepard and Mitchell spent about 33 hours on the surface of the moon itself.
In addition, the astronauts brought back over 42kg of rocks with them. This haul of lunar material has provided us with a significant amount of important data about the composition and development of the moon itself.
However, recent analysis of some of these materials has revealed that at least one of the lunar rocks picked up by Shepard and Mitchell may actually have originated on Earth.
According to Professor Alexander Nemchin, from the School of Earth and Planetary Sciences at Curtin University, Western Australia, the composition of one of the rocks found on the moon is very similar to granite, with significant amounts of quartz inside. While quartz often occurs naturally here on Earth, it is extremely rare to find it on the moon.
In addition, the team also analyzed the zircon found in the rock, a mineral belonging to the nesosilicate group that is found both on the earth and the moon. They discovered that the type of zircon found in the rock is consistent with terrestrial forms, but not with anything previously found in lunar material. The scientists concluded that the rock had been formed in an oxidizing environment, conditions that would be extremely unusual on the moon.
Nemchin suggests that these findings present strong evidence that the rock was not actually formed on the moon, but rather that it originally came from Earth. Although he did not rule out the possibility that the rock could have formed under briefly occurring similar conditions on the moon, he argued that this would be extremely unlikely.
Instead, the team put forward another hypothesis. They suggested that it is possible for the rock to have been transported to the moon after its formation, possibly as a result of an asteroid collision with the Earth billions of years ago.
According to this theory, the asteroid crashed into the earth billions of years ago, catapulting debris and rocks into space, some of which would have landed on the moon.
This theory would explain why the rock seemed to have a chemical composition consistent with terrestrial rather than lunar planetary conditions. It is also consistent with theories about the nature of bombardment that modified the Earth billions of years ago.
According to The Guardian, scientists believe that in the early stages of the Earth’s formation it may have been hit by asteroids and meteorites, causing significant disturbance to its surface.
Furthermore, during this period it is thought that the moon was at least three times closer to the Earth than it is today, making it very likely that the moon would also have been hit by flying debris as a result of these collisions.
If this theory is correct, the rock brought back by the crew of Apollo 14 is one of the oldest terrestrial rocks ever discovered. Analysis of the zircon dated the rock to around 4 billion years of age, slightly younger than the oldest known Earth rock, a zircon crystal discovered in Western Australia.
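The article does not explain how the zircon was dated, but zircon ages are conventionally obtained by uranium-lead radiometric dating: as 238U decays to 206Pb, the age follows from the accumulated lead-to-uranium ratio, t = ln(1 + Pb/U) / λ. A minimal sketch, using the standard 238U decay constant and an illustrative ratio chosen to land near 4 billion years:

```python
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of 238U, per year (standard value)

def u_pb_age_years(pb206_u238_ratio):
    """Age from the radiogenic 206Pb/238U ratio: N_Pb/N_U = exp(lambda*t) - 1."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_U238

age = u_pb_age_years(0.86)  # illustrative measured ratio, not from the study
print(f"{age / 1e9:.2f} billion years")
```

The slow decay of 238U (half-life about 4.5 billion years) is what makes zircon suitable for dating rocks this old.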
These ancient pieces of stone may appear to be small, unassuming rocks, but they have the potential to transform our understanding of the Earth in its earliest stages of development.
Kurunjang Secondary College’s whole school approach to literacy allows all students at Years 7 and 8 to develop their literacy skills.
Literacy skills are all the skills needed for reading and writing. They include such things as awareness of the sounds of language, awareness of print and the relationship between letters and sounds. Other literacy skills include the development of vocabulary, spelling and reading comprehension.
Every student is tested to find out exactly what skills need to be developed. According to their test results, students are placed into literacy groups. Each group has a specially structured program of lessons to build students’ skills.
If students are identified as needing support with letters, sounds and word recognition they are provided with intensive assistance to boost their skill. Students also complete oral language activities, comprehension activities and choral reading to bolster their fluency.
Students who experience difficulty in making meaning of what they read are explicitly taught the skills involved in the reading process. Students engage with a number of comprehension strategies, such as Reciprocal Teaching. The oral language activities at this level are designed to give students more opportunities to use language formally and participate in discussion. Furthermore, students who are reading at or above the expected level for their age are catered for with critical thinking strategies. The aim is to challenge students to think about how they learn best and to adopt a variety of learning strategies to improve their ability to retain information or acquire knowledge.
Kurunjang Secondary College’s literacy program is research based and adheres to a robust structure of tailored lessons and activities, all of which ensure that students receive the right level of support and challenge to achieve success as learners.
Sakya, sometimes also Sakka, was both the name of a region and the clan of people who lived there. The Buddha was a Sakyan. Sakya was a small chiefdom situated between the much larger kingdom of Kosala and the confederacy of Vajji, corresponding to the north-east corner of the modern north Indian state of Uttar Pradesh. According to legend, the Sakyans took their name from the saka tree, Tectona grandis, the Indian teak (D.I,93). Sakyans were a people of the ancient Adicca lineage; they belonged to the warrior caste, were known for their pride and impulsiveness and were considered rustics by their neighbours (D.I,90; II,165; Sn.423). A group of Sakyan youths are reported as saying of themselves: `We Sakyans are proud' (Vin.II,183), and Upàli said of them that they are `a fierce people' (Vin.II,182). The Buddha described his kinsmen as `endowed with wealth and energy' (dhanaviriyena sampanno, Sn.422).
Although nominally independent, the Sakyans were under the influence of their eastern neighbour. In the Tipitaka it says: `The Sakyans are vassals of the king of Kosala, they offer him humble service and salutation, do his bidding and pay him homage'(D.III,83). Towards the end of the Buddha's life this de jure independence came to an end when the Sakyan lands were invaded by and absorbed into Kosala. Even before this the Buddha described his homeland as belonging to the king of Kosala (Sn.422).
Legend says that the Buddha's father Suddhodana was a king of the Sakyans although he was probably more like an elected chief. The only Sakyan ruler mentioned is Bhaddiya who is described as Sakyaràjà and when it was suggested that he join his friends in becoming a monk said `wait until I hand over the kingdom to my sons and brothers'(Vin.II,182).
The Buddha once said to his monks that when others asked them whose philosophy they adhered to or which teacher they followed they should reply that they were `Scions of the Sakyan' (D.III,84), i.e. of the Buddha.
There is a community of people in Nepal called Sakya who claim to be the direct descendants of the ancient people, although historians consider this claim to be unfounded.
This unit is designed to provide an introduction to pre-school and primary mathematics education by discussing and investigating current directions in mathematics education. The unit will introduce theories of learning mathematics and effective teaching and learning in the contexts of Number, Measurement and Space.
Emphasis will be placed on an approach to learning and teaching mathematics that is respectful of each child’s background and culture. The role of manipulatives, technology, language and mental processes in children’s developing concepts, understandings and skills will also be a focus, alongside the role of assessment interviews in identifying children’s current mathematical understanding. The use of this information to inform teaching, and develop positive attitudes to mathematics will be studied.
During the very earliest days of the American republic, political parties formed to debate national policies, especially regarding debt, civil rights and war. While a feud between Thomas Jefferson and Alexander Hamilton created this first party system, a new partisan dynamic began to form in the mid-1800s. This second party system, which originated in a political conflict between John Quincy Adams and Andrew Jackson, resulted in rivalry between the Democrats and the Whigs.
Election of 1824 and the Corrupt Bargain
The origins of the Democratic and Whig parties were in the presidential election of 1824. Unlike in previous elections, 1824 did not have true political parties. The Federalist Party had collapsed after the War of 1812, and so five Democratic-Republican candidates separately ran for president. The result was an election in which Andrew Jackson won a plurality of the electoral college and popular vote, but failed to secure the majority needed to win the presidency. Henry Clay thus rallied the necessary House of Representatives votes needed to make John Quincy Adams president, and in return Clay was awarded the office of Secretary of State. Jackson supporters called this a "Corrupt Bargain," and rallied around their candidate. These "Jacksonian" opponents of Adams were the early members of the soon-to-be Democratic Party.
Andrew Jackson's Democratic Party
The election of 1824 had featured candidates from only one political party: the Democratic-Republicans. Having represented just one faction of that party, President John Quincy Adams solidified his supporters into a National Republican Party over the course of his presidency. His party advocated large expenditures on public works, tariffs to protect national industry and a strong central bank. Jackson's supporters opposed all of these, calling Adams' tariff a "Tariff of Abominations." To resist Adams' proposals, they coalesced into the Democratic Party, so named because it followed Thomas Jefferson's calls for small, decentralized government.
"King Andrew" and the Whigs
John Quincy Adams lost his quest for re-election in 1828 to Democratic candidate Andrew Jackson. Jackson's Democratic Party championed itself as a party of the common people, even using popular campaign songs for the first time in history. For the first half of Jackson's presidency, his opponents were largely disorganized. By the mid-1830s, however, they had formed the Whig Party, named after its English anti-monarchy counterpart. They chose the name because they believed Jackson had become a "King Andrew" who was making the office of the president too powerful, and they also opposed his policies, which they said were anti-commerce and anti-business.
The Second Party System
By the 1840s, the Whigs were as established a political party as the Democrats. Until the mid-1850s, the Whig-Democrat party system dominated American government. Democrats tended to represent rural regions, while the Whigs were a more urban party. The Whigs were also the more attractive party to opponents of slavery, though the party as a whole was not exclusively anti-slavery. Nationally, the Democrats survived the outbreak of the Civil War -- and continue as a national party today -- while the Whigs were consumed by the growing Republican Party.
Anencephaly is a defect in the closure of the neural tube during fetal development. The neural tube is a narrow channel that folds and closes between the 3rd and 4th weeks of pregnancy to form the brain and spinal cord of the embryo. Anencephaly occurs when the “cephalic” or head end of the neural tube fails to close, resulting in the absence of a major portion of the brain, skull, and scalp. Infants with this disorder are born without a forebrain (the front part of the brain) and a cerebrum (the thinking and coordinating part of the brain).
The remaining brain tissue is often exposed, not covered by bone or skin. A baby born with anencephaly is usually blind, deaf, unconscious, and unable to feel pain. Although some individuals with anencephaly may be born with a rudimentary brain stem, the lack of a functioning cerebrum permanently rules out the possibility of ever gaining consciousness. Reflex actions such as breathing and responses to sound or touch may nonetheless occur.
The cause of anencephaly is unknown. Although it is thought that a mother’s diet and vitamin intake may play a role, scientists believe that many other factors are also involved.
Recent studies have shown that adding folic acid (vitamin B9) to the diet of women of childbearing age may significantly reduce the incidence of neural tube defects. It is therefore recommended that all women of childbearing age consume 0.4 mg of folic acid daily.
The prognosis for babies born with anencephaly is extremely poor. If the infant is not stillborn, then he or she will usually die within a few hours or days after birth.
There is no cure or standard treatment for anencephaly. Treatment is supportive. |