This week’s science project is pretty simple and straightforward, but it’s another good illustration of how heat affects molecular movement. It’s based on this NASA demonstration.
To do the demonstration with your own students, you’ll need three basins, an empty plastic water bottle, an empty glass soda bottle, and two balloons.
Put a quart of ice water in the first basin, a quart of room-temperature water in the second basin, and a quart of hot water in the third basin. (I used my coffeemaker to heat the water for the third basin, but you could just as easily use a microwave. You want it hot but not boiling.) I let the kids get the ice water and room-temperature water for me, but I handled the hot water myself to make sure nobody got scalded.
Ask two students to prepare the bottles by stretching balloons over the mouths. Once the bottles are ready, ask the class to predict what will happen when you place each bottle in each basin.
Have a student start the experiment by holding the glass bottle in the cold water for a minute or two. Repeat the process with the room-temperature water and the hot water. Observe what happens each time, and let the students discuss their theories about what is happening and why. Do the same experiment with the plastic bottle, and discuss the results and the students’ theories.
The balloons will do nothing in the cold and room-temperature water, but they should begin to inflate in the hot water. This is because air molecules move faster and spread apart when heated, causing the air to expand. Warmed by the water, the air will expand into the balloon, inflating it. (This is basically how hot-air balloons work — the heated air expands into the balloon, which rises because the hot air is less dense than the cooler air around it.)
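To put rough numbers on the effect, Charles's law (V1/T1 = V2/T2, with temperatures in kelvin) describes how a gas expands at constant pressure. Assuming, say, half a litre (500 mL) of air in the bottle warmed from about 293 K (20°C) to 323 K (50°C), the volume grows to roughly 500 × 323/293 ≈ 551 mL, so around 50 mL of air pushes out into the balloon.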
The balloon stretched over the plastic bottle will inflate more quickly than the balloon stretched over the glass bottle. My kids correctly surmised that this is because the glass is much thicker than the plastic and thus insulates the air better and slows down the transfer of heat from the water outside the bottle to the air inside it.
Like most hands-on science projects, this one is great for your tactile and visual learners.
MATHS AT BEWLEY
Mathematics involves the development of logical thinking and the ability to make sense of the world. Numeracy is a key life skill, a tool for everyday life, a means of communicating information and of solving practical problems. Therefore, from the earliest stages, we want children to become numerate, to be encouraged to apply their knowledge and skills to real-life situations and, consequently, to see the relevance of what they are being taught.
The National Curriculum for Maths emphasises mental agility and understanding the number system. Numeracy is a proficiency which involves confidence and competence with numbers and measures. We therefore want children to acquire a repertoire of computational skills and an ability to solve problems in a variety of contexts.
The strands of maths studied during primary school include the number system and calculation with all four operations, algebra, geometry, statistics, measures and data handling.
Investigations, problem-solving, games (including on PCs and tablets) and practical apparatus are additional resources and activities that will extend and enrich the children’s experiences in mathematics. In the new National Curriculum, efficient methods of calculation are introduced earlier than in previous years.
Mathematics is linked, where possible, to other areas of the curriculum (e.g. Science, Geography, History, PE), and children’s work can extend out of the classroom into the outdoor learning environment.
Children are required to learn key number facts by the end of given year groups, and your support with this through helping with homework is much appreciated. For example, by the end of Y2 children should have embedded knowledge and speed recall of number bonds to 20 and be able to apply these when solving more complex problems with larger numbers (e.g. 13 + 7 = 20, therefore 130 + 70 = 200). By the end of Y4, all children should have speed recall of the multiplication facts to 12 × 12 and their derived division facts. By the end of Y6, children should be able to apply their knowledge of mathematics to solve complex problems including fractions, decimals, percentages and ratio.
As well as the number system and calculations using all four operations (addition, subtraction, multiplication and division), algebra, geometry, statistics, measures and data handling are all key elements of the maths curriculum. The application of maths skills in problem solving is a key feature of lessons, and children are taught to try to solve problems from an early age. By the time they are in Y3, children should be able to solve simple problems, and before they leave us children need to be able to solve written problems involving two or more operations (multi-step problems), e.g. sandwiches cost £1.70 each, whilst drinks cost £0.99. A family buys four sandwiches and three drinks. How much change will they get from £10.00?
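Working the example through: 4 × £1.70 = £6.80 for the sandwiches and 3 × £0.99 = £2.97 for the drinks, giving £6.80 + £2.97 = £9.77 in total, so the change from £10.00 is 23p.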
Our aim is to foster a positive attitude towards mathematics and an enjoyment and appreciation of the subject. We want to give pupils a firm basis of knowledge and skills so that they become numerate and are confident enough to tackle real-life maths problems now and in the future.
Please see below for further information.
What You'll Find in This Section
When planning a lesson for a class that includes English language learners (ELLs), it may be difficult to know how to help students at different language levels participate fully in the activities. However, there are a number of steps you can take to help strengthen students' language development and engage them, no matter their level of language proficiency.
There are a number of ways to support, or scaffold, instruction for ELLs, even if they are at beginning levels of English proficiency. These ideas from veteran educators can help make content more accessible and provide students with an opportunity to participate in all classroom activities.
One of the most critical components of helping English language learners (ELLs) succeed academically is the role of background knowledge. Lessons, reading passages, and test questions that assume prior knowledge or familiarity with a certain experience, person, or object may not be an appropriate tool for ELLs who lack the required background knowledge to understand the content.
Each student comes to school not only with unique academic needs but also with unique background experiences, culture, language, personality, interests, and attitudes toward learning. Effective teachers recognize that all of these factors affect how students learn in the classroom, and they adjust, or differentiate, their instruction to meet students' needs.
Here are some strategies for differentiating instruction for your English language learners, as well as ideas for taking students' level of English language proficiency into account when planning instruction.
Assessment plays a variety of roles in the instruction of English language learners (ELLs). One of the most important uses of assessment is informal, ongoing assessment throughout the school year (also called formative assessment) to monitor student learning and target areas of instruction. This can be as simple as asking students to show "thumbs up or thumbs down" to show their understanding or asking students to share one thing they learned on an exit slip at the end of class.
Classmates are a valuable resource in helping English language learners succeed, whether by showing students around the school on their first day or serving as a buddy in the classroom. Peers can help build student confidence and also act as language models, giving ELLs a chance to practice their new language skills in a low-stress setting.
Need some help finding ways to help your English language learners? Read these inspirational stories about English language learners, teachers and paraprofessionals who have overcome obstacles to achieve success.
What's even better than a bright idea? A bright idea that works! Educators from across the country have discovered excellent ways to tackle some common classroom stumbling blocks. Below are their step-by-step suggestions on how to handle issues like the fourth-grade slump or the development of critical thinking skills.
Designing instruction based on student strengths (what the student can do) provides an important foundation for success and offers opportunities to build upon those strengths in order to address areas where the student is struggling. Learn more about this approach from the following resources and videos.
Move the point P to see how P' moves. Then use your insights to calculate a missing length.
A garrison of 600 men has just enough bread ... but, with the news that the enemy was planning an attack... How many ounces of bread a day must each man in the garrison be allowed, to hold out 45. . . .
An article for teachers which discusses the differences between ratio and proportion, and invites readers to contribute their own thoughts.
Two buses leave at the same time from two towns Shipton and Veston on the same long road, travelling towards each other. At each mile along the road are milestones. The buses' speeds are constant. . . .
Mo has left, but Meg is still experimenting. Use the interactivity to help you find out how she can alter her pouch of marbles and still keep the two pouches balanced.
If it takes four men one day to build a wall, how long does it take 60,000 men to build a similar wall?
Meg and Mo still need to hang their marbles so that they balance, but this time the constraints are different. Use the interactivity to experiment and find out what they need to do.
Meg and Mo need to hang their marbles so that they balance. Use the interactivity to experiment and find out what they need to do.
Mainly for teachers. More mathematics of yesteryear.
In the ancient city of Atlantis a solid rectangular object called a Zin was built in honour of the goddess Tina. Your task is to determine on which day of the week the obelisk was completed.
The triathlon is a physically gruelling challenge. Can you work out which athlete burnt the most calories?
Create some simple number pebbles to make a lovely, natural play resource to add to the counting and maths manipulatives at home or school. Perfect for counting, ordering, addition and subtraction activities, as well as being freely available and encouraging play with a range of natural objects.
To make these number rocks we simply used a variety of pebbles collected from the beach and a permanent marker pen. We counted out 20 rocks and I added the numerals to each, and they were ready to play with within seconds!
I set them out on a table top and Cakie started to play independently, making them into a little line and counting up from 1 as she did so, looking for the next pebbles to put in correct number order. She counted and recounted along the line as she got to the less familiar teen numbers, to check that she was looking for the right ones to go next. We talked about the fact that all teen numbers have a numeral 1 in front of them, and that we can often hear the next number, but not always. We also pointed out those tricky numbers 11 and 12 which don’t give any clue as to what they represent and tried to learn those by memory.
She checked and counted thoroughly and put all the numbers in order from 1 to 20. Then she wanted another game to play so we found some of the teen number pebbles and played a numeral matching game.
Can you make this number 17 using two other pebbles? Which ones do you need and which way around will you place them? What if you put the 7 in front? Can you read that number now? We repeated this for all of the teen numbers, one at a time, until she had a good idea of each one by sight. With an older child you could use this activity to introduce the concept of 10s and 1s making up double digit numbers.
Then we played a fun addition game using some of the single digit pebbles. If we have 6 and now add 3 more, how many will we have altogether? Using other small, plain pebbles or counters as manipulatives to count out the numbers is the best way for young children to work through addition at this stage, so that they can see it in practical form. Then select the rock that represents how many were found altogether! Older kids could then write these number problems down onto paper or try adding three or more numbers, or move onto some teen numbers too.
We have added these to a basket to go with our other natural resources and will soon be putting together a maths inspired activity table, where these would make a perfect addition for open-ended, investigative play.
In this video the instructor talks about the Pythagorean theorem and how to apply it to right triangles. The Pythagorean theorem states that in any right triangle, if a and b are the lengths of the legs and c is the length of the hypotenuse, the sum of the squares of the lengths of the legs is equal to the square of the length of the hypotenuse. That is, a*a + b*b = c*c. This theorem works only for right-angled triangles. So if you have the lengths of the two legs of a right triangle, substitute the values into the above theorem to obtain the length of the hypotenuse, which is greater than the length of either of the legs. This video states the Pythagorean theorem and shows how to apply it to a right triangle.
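As a quick illustration of the substitution the video describes, here is a minimal Python sketch (not part of the video itself):

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Return c, where a*a + b*b = c*c, for a right triangle with legs a and b."""
    return math.sqrt(a * a + b * b)

# The classic 3-4-5 right triangle:
print(hypotenuse(3, 4))  # 5.0, which is longer than either leg, as expected
```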
In 2015, the United Nations announced the adoption of 17 SDGs and 169 targets which aim to “Transform our world”. People are calling this the most important and largest global undertaking that will shape the world until 2030 and beyond. Under the banner of sustainable development, the SDGs merge the needs of humans and the environment under one global agenda that is designed to ensure the future of society and planet. Supporting sustainable development therefore requires societies to work together to meet these needs.
The world is currently facing a daunting double challenge: human populations and material demands are increasing while natural resources are declining at an alarming rate. The SDGs are therefore designed to provide transparency, information, and data on just about every possible aspect that reflects society’s attempts to live in a sustainable way. This is done through the SDG indicator framework which tracks and reports progress towards sustainable development.
Challenges facing water-related ecosystems
In pursuit of development, societies across the globe have negatively impacted water-related ecosystems. A shocking example of this is that the rate of loss of global wetlands between 1900 and 2000 is estimated to be around 69-75% (Davidson 2014, see Figure 1). 40% of the loss occurred between 1970 and 2008, according to the 2015 Ramsar Convention on Wetlands. Increased water abstraction for growing urban populations was partly to blame, as was the transformation of wetlands – lands saturated by water, such as marshes or river deltas – into arable agricultural land.
Wetlands are in fact among the most productive ecosystems, providing many of the services that society depends on, ranging from provision of food and water, regulation of water flows and recycling of nutrients and waste, to cultural benefits such as spiritual, recreational or educational uses.
Figure 1 illustrates the rate of decline of wetlands. Given the importance of these ecosystems, a massive global effort is required to restore and protect them, and for this the SDG 6.6.1 Indicator method seeks to collect the necessary information. But a significant question remains: what are the best ways to gather and assess this data?
Water-related ecosystems and the SDGs
Water-related ecosystems, including mountains, forests, wetlands, rivers, aquifers, and lakes, are vital to providing social and economic benefits for people. The declining condition of the ecosystems directly impacts water availability as well as other essential services such as biodiversity, food production and flood control. For that reason, Target 6.6 - By 2020, protect and restore water-related ecosystems, including mountains, forests, wetlands, rivers, aquifers and lakes - is set to contribute to the achievement of Goal 6. In service of that, indicator 6.6.1 - Change in the extent of water-related ecosystems over time - and its component sub indicators, will be used to monitor management performance in relation to the target.
How are water-related ecosystems monitored for the SDGs?
Monitoring of Target 6.6 represents what could become the world’s largest initiative to monitor and report on water-related ecosystems. This builds substantially on the monitoring of the spatial extent of wetlands that has been developed by the Ramsar Convention on Wetlands by including measures of quantity, quality and ecosystem health.
A report published by IWMI and WLE provides guidance to countries on the method used for reporting SDG Indicator 6.6.1, which monitors the “change in extent of water-related ecosystems over time.” This guideline supports the abbreviated but official UN Water indicator monitoring method.
The aim of the guideline is to support countries by:
- teaching them to collect and submit data on SDG 6.6.1 as part of their contribution to sustainable development
- providing guidance on the methods used to monitor water-related ecosystems
- supporting the management of water resources at a country level.
The guideline responds to each of the sub-indicators illustrated in Figure 3. Guidance is given for monitoring the spatial extent of water-related ecosystems such as vegetated wetlands, rivers, open water bodies such as lakes and artificial reservoirs, and groundwater aquifers. A large part of this guidance is devoted to techniques that use Earth Observation data to measure change in extent (see the iconic example in Figure 4, showing the loss in extent of the Aral Sea).
The amount of water stored in a given ecosystem is another key indicator of ecosystem health. Quantity is thus reported for rivers and other open water bodies, like lakes and reservoirs, as well as for groundwater. In addition, monitoring the quality of water over time is crucial for determining any potential negative impacts to the ecosystem and thus to the many services provided by the ecosystem. Lastly, the state or health of the ecosystems is monitored through a system that evaluates the biota (e.g. fish, invertebrates or vegetation) as indicators of overall ecosystem condition. This process of measuring changes in the biotic communities helps society to detect overall changes to the ecosystem (see figure 5 for illustration).
The inclusion of Indicator 6.6.1 as part of the SDG Agenda represents a wonderful opportunity for society to gather and make use of data reflecting the state of water-related ecosystems. This will enable us to better manage these ecosystems in order to secure a sustainable future. The SDG 6.6.1 step-by-step method is in the process of being implemented globally as part of Agenda 2030, and this guideline provides much-needed support to those who are involved in this submission or who wish to understand the measures necessary to achieve sustainable development.
December 1, 1955 - The birth of the modern American civil rights movement occurred as Rosa Parks was arrested in Montgomery, Alabama, for refusing to give up her seat to a white man and move to the back section of a municipal bus. Her arrest resulted in a year-long boycott of the city bus system by African Americans and led to legal actions ending racial segregation on municipal buses throughout the South. Her quiet, courageous act changed America and its view of black people, and redirected the course of history.
"I would like to be remembered as a person who wanted to be free... so other people would be also free. "
- Rosa Parks
After her arrest, Parks called local labor organizer E. D. Nixon to bail her out. The next day, the Rev. Martin Luther King Jr., the Rev. Ralph Abernathy, more than 40 other black ministers and a white minister named the Rev. Robert Graetz formed the Montgomery Improvement Association. They organized the bus boycott that began on Dec. 5 and lasted until December 20, 1956, when a U.S. Supreme Court ruling integrated the public transportation system.
More: Official website of Rosa Parks
What Is Phonics?
Phonics is the study of the relationship between sounds and letters. It is an essential component of reading and writing practice and instruction in the primary grades. Phonics knowledge leads to word knowledge. Along with plenty of experience reading, students begin to read words fluently with little effort.
Phonics teaching helps students to learn the written correspondences between letters, patterns of letters, and sounds. It should be noted that phonics is one part of a comprehensive literacy program that must also include practice in comprehension, fluency, vocabulary, writing, and thinking.
When Are Students Ready to Learn Phonics?
There are several things to consider before involving children in a formal phonics program. Language development comes first: children need the ability to recognise and produce speech sounds and to use language appropriately. Phonics works alongside all of these language systems.
Phonological awareness is a particularly important language skill to acquire before phonics instruction begins. Phonological awareness includes the ability to separate spoken language into syllables and individual phonemes, the distinctive sounds for the language the student is learning to read. Phonological awareness is learned through singing, tapping syllables, rhyming, and dividing words into individual sounds.
When children are ready, and with support from home and school, they develop important concepts of print, such as learning the purposes of writing and illustrations; understanding what an author is; and identifying text features including the front and back of a book, uppercase and lowercase letters, reading top to bottom, reading left to right, the return sweep at the end of a line, and the meaning of punctuation.
After students have heard stories read to them repeatedly, they try to point to the words as they say their favorite memorised parts out loud. Students develop a concept of word in text when they point accurately to the words as they recite the text. Concept of word in text develops in parallel with students’ phonics knowledge of letter–sound correspondences (e.g., learning that the letter b makes the /b/ sound by repeatedly seeing b words in a text).
What is Letters and Sounds?
The cell cycle
Cell division occurs in stages: interphase G1 – the cell enlarges, the nucleus migrates to the center and protein synthesis occurs; interphase S – DNA replication occurs; interphase G2 – the preprophase band and structures of mitosis form, and chromosomes condense. In M-phase mitosis, the replicated chromosomes separate to form two daughter nuclei. In M-phase cytokinesis, the cytoplasm divides and a cell plate and new cell walls form.
Cell cycle control
The cell cycle has two check-points: G2/M and G1/S. Progression through the cycle is controlled by cyclins that are synthesized and degraded through the cycle and activate cyclin-dependent protein kinases (CDPKs).
Meiosis occurs to produce haploid cells. It involves an extra round of cell division. The first phase (prophase I to anaphase I) results in exchange of DNA between the pairs of chromosomes, followed by their separation with both chromatids present. The second phase (metaphase II to telophase II) is a mitosis resulting in separation of the chromatids and the formation of four haploid cells.
Cell division in plants occurs in meristems (Topic C1) and involves two parts: mitosis, in which the replicated chromosomes are sorted into two nuclei, and cytokinesis, in which the cell wall, cytoplasm and organelles divide. In dormant meristems, the cells rest in G0 phase. When conditions are correct, the cell begins the processes leading to division. The entire cycle may be considered as four phases, G1, S, G2 and M (Fig. 1).
In G1 phase the cell doubles in size and new organelles and materials needed for two cells are formed. During this phase, the nucleus migrates to the center of the cell and is surrounded by a sheet of cytoplasmic strands called the phragmosome that bisects the center of the cell at the plane across which it will divide. The phase ends with the G1/S checkpoint. The process can stop at this point (see Cell cycle control, below), or proceed to S phase in which DNA and associated nuclear proteins are replicated. At the end of S phase the cell contains two full copies of its genetic information. It proceeds to G2 phase when the chromosomes begin to condense and structures required for division form. A distinct band of microtubules (Topic B1), the pre-prophase band, forms around the cytoplasm in a ring where the edge of the phragmosome lay, again predicting the plane of cell division. At the end of G2, the cell has to pass another checkpoint (G2/M) at which stage, if conditions are suitable, it enters M phase in which the cell divides.
Stages G1 to G2 are known as interphase. M phase, when division occurs, can be divided into a series of stages that can be recognized by microscopy (Fig. 2 and Table 1).
In meristems (Topic C1), a population of cells characterized by thin cell walls and the lack of large vacuoles are constantly dividing. The daughter cells may undergo a few further divisions, but then lose the capacity to divide and, after a phase of cell enlargement, generally develop large vacuoles. Plant hormones, auxin and cytokinin (Topic F2), are known to initiate the cell cycle. Auxin stimulates DNA replication, while cytokinin initiates the events of mitosis. The cell cycle is also controlled by the activity of cell proteins called cyclins and cyclin-dependent protein kinases (CDPKs; a kinase is an enzyme which will phosphorylate another protein). One group of cyclins, the G1 cyclins, are manufactured by the cell in G1 and activate CDPKs which stimulate DNA synthesis at the G1/S control point. If sufficient G1 cyclins are not formed, the cell will not progress to S. Having passed this point, the G1 cyclins are degraded and a new family of cyclins, the mitotic or M cyclins, are produced. These activate a second set of CDPKs which permit the cell to pass the G2/M control point into mitosis (Fig. 3).
Whereas animal cells which pass G1/S are committed to undergo division, plant cells are not. This means that many plant cells continue to replicate DNA without dividing. This is known as endoreduplication, which is shown by more than 80% of all plant cells and particularly cells with a high metabolic activity and requirement for protein synthesis.
Meiosis occurs in the reproductive tissues of the plant. To do so, it must result in a halving of the number of chromosomes, so that each cell has only one set (haploid, rather than the usual two sets, diploid) of chromosomes. The full complement of chromosomes is restored after fertilization, when the two sets (one from each gamete) combine. In interphase, DNA synthesis occurs and each chromosome exists as a pair of sister chromatids joined by a centromere. In prophase I, the homologous chromosomes (originally from the maternal and paternal generative cells) pair up to give a synaptonemal complex. Each chromosome can be seen to be composed of two chromatids. The chromatids join at points called chiasmata, at which genetic material can cross over from one chromatid to another. This can be between homologous chromatids or between sister chromatids. In metaphase I, the paired chromosomes move together to the metaphase plate. In anaphase I, homologous chromosomes, each with its two chromatids, separate to the spindle poles, drawn by microtubules (Topic B1). The daughter nuclei now have a haploid set of chromosomes. Each chromosome has two chromatids (compare mitosis, where at this stage the chromatids separate so each chromosome has only one). Instead of forming new nuclei and stopping division, the cells go on to another phase of division.
In metaphase II, a new metaphase plate forms in daughter cells and the chromosomes line up at the equator of the cell. In anaphase II, chromatids separate and move to the poles. By telophase II, the chromosomes have completed movement and four new nuclei, each having half the original number of single chromosomes, have been formed.
As only one period of chromosome duplication has occurred, the result is four haploid cells. In pollen formation, all four cells survive; in ovule formation, three normally abort, leaving one to form the ovule. The stages are shown diagrammatically in Fig. 4.
Charter colony is one of the three classes of colonial government established in the 17th century English colonies in North America, the other classes being proprietary colony and royal colony. The colonies of Rhode Island, Connecticut, and Massachusetts Bay were charter colonies. In a charter colony, the King granted a charter to the colonial government establishing the rules under which the colony was to be governed. The charters of Rhode Island and Connecticut granted the colonists significantly more political liberty than other colonies. Rhode Island and Connecticut continued to use their colonial charters as their State constitutions after the American Revolution.
Rhode Island's permanent settlement by European colonists began in 1636 when a group of refugees from the Massachusetts Bay Colony left the colony to seek freedom of worship. Roger Williams, the unofficial head of the group of refugees, acquired land from Native Americans and established the town of Providence. Other early towns settled in the Rhode Island area were Portsmouth (1638), Newport (1639), and Warwick (1642). The lands on which these original four towns were settled were held only through Indian deeds, so naturally they caught the attention of nearby colonies. In order to protect the small haven that the town had established, Roger Williams acquired a parliamentary patent from England between the years 1643 and 1644. In the early 1660s, John Clarke was given the task of getting from King Charles II a charter that would both protect the colony from surrounding larger colonies and preserve the religious ideals that had been present with the colony since its beginning. The charter that the colony received was the royal charter of 1663. This charter, said to be one of the most liberal of the colonial era, not only granted the religious freedom that the colony sought, but also allowed Rhode Island to have local autonomy and gave the colony a much tighter grip on its territory.
A royal charter was not granted for Connecticut until 1662. The charter was proposed by John Winthrop and granted by Charles II. Up to that date, the people of Connecticut had only negotiated titles of ownership with the Indians, otherwise having no real title to the Connecticut soil. The only restrictions limiting the newly granted charter's independent powers were, as with other royal charters, the boundaries set by English law. While Connecticut had the power to create new laws, those laws were not to exceed the limits of, or contradict, the rules set in place by the English government. Attempting to absorb the New Haven Colony created tensions due to that colony's resistance to any attempted control by Connecticut. Only after the perceived threat of absorption by New York was realized did New Haven give in and agree to merge with Connecticut (though not before losing many people who wished to maintain independence from Connecticut rule through migration to New Jersey). Connecticut was not free from the control of England through the royal charter until after the conflict with King James II. Even after the conflict with England, Connecticut was still able to retain a liberal charter from England.
Massachusetts Bay Colony
In 1628, a group of Puritan businessmen created the Governor and Company of Massachusetts Bay to be a profitable investment in the colonies. The Council of New England authorized a land grant, giving the company rights to the area between the Charles and Merrimack rivers, extending westward to the Pacific Ocean. Seeking further protection for their endeavor, the Puritans requested and were granted a charter from England. In 1629, the businessmen undertaking the New World endeavor signed the Cambridge Agreement, agreeing to make the Atlantic voyage in exchange for complete authority over the charter and the colony. The power transfer was an influential step toward creating a theocratic Massachusetts, where political power was held by staunch Puritan fellow believers. In 1684, the royal charter was revoked, splitting the Massachusetts Bay Company and the colony. In 1691, Plymouth Colony and Maine were absorbed under a new royal charter.
Benjamin Franklin, Postmaster General
It just makes sense that Franklin is the subject of one of the first US stamps, since he was the first Postmaster General back in 1775. The portrait is based on work done by James B. Longacre, who is known for designing the Indian Head penny.
I don’t own a copy of this stamp. It only appears here thanks to the Wikimedia Commons and is in the public domain.
How Was the Franklin Stamp Used?
When prepaid postal services in America began, the cost depended on weight and the distance the correspondence was to travel. 300 miles was the cutoff point. A letter weighing less than an ounce that you wanted to send less than 300 miles cost 5 cents. If your letter exceeded that weight or needed to travel beyond the cutoff, you had to pay 10 cents.
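The rate rule is easy to express as a conditional; this hypothetical sketch (the function name and the handling of letters exactly at the cutoffs are my assumptions) mirrors the description above:

```python
def postage_cents(weight_oz: float, distance_miles: float) -> int:
    """5 cents for a letter under one ounce travelling under 300 miles,
    otherwise 10 cents, per the early prepaid rates described above."""
    if weight_oz < 1 and distance_miles < 300:
        return 5
    return 10

print(postage_cents(0.5, 120))  # 5  (light letter, short distance)
print(postage_cents(0.5, 450))  # 10 (beyond the 300-mile cutoff)
print(postage_cents(1.5, 120))  # 10 (over an ounce)
```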
As the efficiency of the postal system increased, Congress lowered postal rates. The 5-cent Franklin and the 10-cent Washington were then no longer needed, so Congress declared them invalid for postal use in 1851.
Franklin’s Postal Career in Philadelphia
Thirty-eight years before Benjamin Franklin became the first (United States) Postmaster General, the British Crown Post appointed him postmaster of Philadelphia, at the age of 31.
Franklin was a printer by trade and had been publishing a newspaper, The Pennsylvania Gazette, in Philadelphia for 9 years (since 1728). Newspaper publishers were often postmasters. This dual role helped them gather and distribute news.
In 1753, Franklin was appointed joint Postmaster General for the Crown, serving alongside William Hunter. During his many years of service to the Crown, he surveyed many hundreds of miles of roads in search of the best routes for deliverymen. On some routes, he required carriers to travel both day and night to improve delivery times.
Franklin also implemented a simple accounting method for his postmasters and recommended that they establish a penny post; that is, charge a penny for delivery of mail that was not picked up at the post office.
Franklin served the Crown as Postmaster General until 1774, when he was dismissed for his sympathy with the American revolutionary cause.
Franklin Made Postmaster General by Congressional Appointment
Mr. Franklin was appointed Postmaster General of the colonies by the Second Continental Congress, a position he held from July 26, 1775 to November 7, 1776.
He was succeeded by Richard Bache who held the position for just over five years.
Franklin’s likeness is found on more U.S. stamps than that of anyone but George Washington.
Benjamin Franklin 5c Statistics

| Issue Date | Designer | Printer | Quantity Issued |
|---|---|---|---|
| 7/7/1847 | Asher B. Durand | RWH&E | 3,712,000 |
New technology has made it easier for bullies to reach their victims. If you think your child is being affected, use our action checklist for advice on how to support and protect your child from cyberbullying.
What is cyberbullying?
If your child has a smartphone or gaming console, uses social networking sites or instant messenger programs, or simply has their own email address, they could become the target of a cyberbully. This might mean they receive abusive emails, texts, or comments on Facebook—or that images or videos of them are circulated online without their consent.
Cyberbullying is on the rise. Since January 2009, the UK charity Family Lives has seen calls to its bullying helpline increase by 13%, while calls specifically about cyberbullying have soared by 77%. Appearance is a common catalyst for cyberbullying attacks—and girls experience it twice as much as boys, according to “The protection of children online: a brief scoping review to identify vulnerable groups,” published by the Child Wellbeing Research Center.
Cyberbullies often focus on looks
Many forms of cyberbullying focus on how young people’s clothes, hair, and body look in the pictures and videos they post online. Being the target of persistent teasing about their appearance can have a detrimental impact on a young person’s self-esteem. If it starts to impact your child’s life choices—from the clothes they wear to the pictures they’re willing to share—then take action.
Talk with your child about the situation, decide on actions to resolve the problem together, and help develop online behavior to protect them from cyberbullies. Much of their life will be conducted online or via their smartphone, so developing protective strategies to deal with online criticism or bullying is important for lifelong self-esteem.
Connectives and Truth Values
- In propositional logic we use symbols to stand for the relationships between statements—that is, to indicate the form of an argument. These relationships are made possible by logical connectives such as conjunction (and), disjunction (or), negation (not), and conditional (If . . . then . . .). Connectives are used in compound statements, each of which is composed of at least two simple statements. A statement is a sentence that can be either true or false.
- To indicate the possible truth values of statements and arguments, we can construct truth tables, a graphic way of displaying all the truth value possibilities.
- A conjunction is false if at least one of its statement components (conjuncts) is false. A disjunction is still true even if one of its component statements (disjuncts) is false. A negation is the denial of a statement. The negation of any statement changes the statement’s truth value to its contradictory (false to true and true to false). A conditional statement is false in only one situation—when the antecedent is true and the consequent is false.
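Those truth conditions can be tabulated mechanically. Here is a minimal Python sketch (the symbols and layout are my own) that prints the truth table for all four connectives:

```python
from itertools import product

# One row per combination of truth values for the simple statements p and q.
print("p     q     | p and q  p or q  not p  if p then q")
for p, q in product([True, False], repeat=2):
    conditional = (not p) or q  # false only when p is true and q is false
    print(f"{p!s:5} {q!s:5} | {(p and q)!s:8} {(p or q)!s:7} {(not p)!s:6} {conditional!s}")
```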
Checking for Validity
- The use of truth tables to determine the validity of an argument is based on the fact that it’s impossible for a valid argument to have true premises and a false conclusion. A basic truth table consists of two or more guide columns listing all the truth value possibilities, followed by a column for each premise and the conclusion. We can add other columns to help us determine the truth values of components of the argument.
- You can check the validity of arguments not only with truth tables but also with the short method. In this procedure we try to discover if there is a way to make the conclusion false and the premises true by assigning various truth values to the argument’s components.
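The same idea can be automated: enumerate every truth-value assignment and look for a row that makes all the premises true and the conclusion false. A hedged Python sketch for two-variable arguments (the helper names are my own):

```python
from itertools import product

def is_valid(premises, conclusion):
    """Valid iff no assignment makes every premise true and the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # counterexample row found
    return True

implies = lambda a, b: (not a) or b

# Modus ponens (If p then q; p; therefore q) is valid:
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q))  # True

# Affirming the consequent (If p then q; q; therefore p) is not:
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p))  # False
```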
Proof of Validity
- The method of proof is a way to confirm the validity of an argument by deducing its conclusion from its premises using simple, valid argument forms. Most valid complex arguments consist of several of these valid sub-arguments (most of which you may already know). Determining the validity of the larger argument then is a matter of moving step by step from premises to conclusion, identifying the valid, component arguments along the way.
- The method of proof uses nine rules of inference and nine rules of replacement. By properly applying them, you can confirm an argument’s validity.
Established in 1970, Earth Day has brought attention to the many issues impacting the environment for decades. The Earth Day Network estimates that over one billion people in 192 countries will take part in Earth Day celebrations today, participating in political action and civic engagement to protect the environment. This year, the theme of Earth Day is Protect Our Species.
The goal of this year’s Earth Day campaign is to emphasize the accelerating rates of extinction many species face around the globe while encouraging individuals to take action and governments to develop policies to address the issue. Human activity is directly linked to many causes of species extinction, such as habitat loss, pesticides, and pollution. Light pollution also has significant impacts on wildlife, as it exacerbates the stresses many species already experience. Learn more about how each of these species is impacted by light pollution:
Insects are very sensitive to artificial light at night (ALAN) which can affect their circadian rhythms, mating rituals, and ability to navigate. Studies have shown that lights at night impact nocturnal pollinators, including many species of moths. With severe declines in insect populations measured globally in the last few decades, this group is of particular concern, especially considering their place in the food web and their important role in pollinating many of the food crops integral to human diets.
Birds are another group facing the threat of global extinction. An article in the online magazine Yale Environment 360 reports that “40 percent of the world’s 11,000 bird species are in decline.” Some of the reasons for these declines are also related to light pollution. Studies have found that artificial light affects disease transmission in birds, birds’ ability to navigate during migration, and can also cause fatal collisions with buildings. Birds, like insects, play important roles in their ecosystems, vital in both agricultural production and nutrient cycling in natural systems.
Fish also play an important role in aquatic environments. They help make nutrients available to species lower in their food chains and make a significant contribution to human diets all over the world. Unfortunately, fish species are also being impacted by disappearing habitats and pollution, and some species of fish have been shown to be especially sensitive to light pollution. Studies on salmon have shown that ALAN affects both their susceptibility to predation and migratory behavior.
Of all the species threatened by shrinking habitats and light pollution, sea turtles are among the most at risk for extinction. Several turtle species are listed as either threatened or endangered in the U.S. and by international organizations, with populations rapidly declining over the past 100 years. Sea turtles are particularly impacted by light pollution, as their hatchlings often become confused by the bright lights of cities as they attempt to navigate from their nests on land to their home in the ocean.
YOU Can Make a Difference
According to research scientist Christopher Kyba, for nocturnal animals, “the introduction of artificial light probably represents the most drastic change human beings have made to their environment.” While this may be a challenging reality to digest, there is a silver lining. Unlike other forms of pollution, light pollution is reversible and we can all be a part of the solution. Here’s how you can help:
- Educate others about light pollution by sharing this article or requesting free IDA outreach brochures.
- Protect the natural nighttime environment for insects, birds, fish, and turtles by addressing the lighting around your home or business. Make sure you are only using lighting when and where you need it, and replace necessary lighting with IDA dark sky approved fixtures.
- Help the International Dark-Sky Association continue protecting wildlife, the environment, human health, and our cultural heritage by becoming an IDA Member.
What better day than Earth Day to join the movement to protect our species from light pollution?
We discussed diffraction in PY105 when we talked about sound waves; diffraction is the bending of waves that occurs when a wave passes through a single narrow opening. The analysis of the resulting diffraction pattern from a single slit is similar to what we did for the double slit. With the double slit, each slit acted as an emitter of waves, and these waves interfered with each other. For the single slit, each part of the slit can be thought of as an emitter of waves, and all these waves interfere to produce the interference pattern we call the diffraction pattern.
After we do the analysis, we'll find that the equation that gives the angles at which fringes appear for a single slit is very similar to the one for the double slit, one obvious difference being that the slit width (W) is used in place of d, the distance between slits. A big difference between the single and double slits, however, is that the equation that gives the bright fringes for the double slit gives dark fringes for the single slit.
To see why this is, consider the diagram below, showing light going away from the slit in one particular direction.
In the diagram above, let's say that the light leaving the edge of the slit (ray 1) arrives at the screen half a wavelength out of phase with the light leaving the middle of the slit (ray 5). These two rays would interfere destructively, as would rays 2 and 6, 3 and 7, and 4 and 8. In other words, the light from one half of the opening cancels out the light from the other half. The rays are half a wavelength out of phase because of the extra path length traveled by one ray; in this case that extra distance is (W/2) sin θ, so the first dark fringe occurs where:

(W/2) sin θ = λ/2

The factors of 2 cancel, leaving:

W sin θ = λ

The argument can be extended to show that dark fringes occur where:

W sin θ = mλ, with m = 1, 2, 3, ...
The bright fringes fall between the dark ones, with the central bright fringe being twice as wide, and considerably brighter, than the rest.
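Given that equation, the dark-fringe angles follow directly from the slit width and wavelength. Here is a minimal Python sketch (the 5-micron slit and 650 nm wavelength are illustrative values, not from the text):

```python
import math

def single_slit_dark_angles(slit_width_m, wavelength_m, max_m=3):
    """Angles (degrees) of the dark fringes of a single slit,
    from W sin(theta) = m * lambda with m = 1, 2, 3, ..."""
    angles = []
    for m in range(1, max_m + 1):
        s = m * wavelength_m / slit_width_m
        if s <= 1:  # no solution once sin(theta) would exceed 1
            angles.append(math.degrees(math.asin(s)))
    return angles

# 650 nm red light through a 5-micron slit:
print(single_slit_dark_angles(5e-6, 650e-9))  # roughly [7.5, 15.1, 22.9] degrees
```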
Note that diffraction can be observed in a double-slit interference pattern. Essentially, this is because each slit emits a diffraction pattern, and the diffraction patterns interfere with each other. The shape of the diffraction pattern is determined by the width (W) of the slits, while the shape of the interference pattern is determined by d, the distance between the slits. If W is much larger than d, the pattern will be dominated by interference effects; if W and d are about the same size the two effects will contribute equally to the fringe pattern. Generally what you see is a fringe pattern that has missing interference fringes; these fall at places where dark fringes occur in the diffraction pattern.
We've talked about what happens when light encounters a single slit (diffraction) and what happens when light hits a double slit (interference); what happens when light encounters an entire array of identical, equally-spaced slits? Such an array is known as a diffraction grating. The name is a bit misleading, because the structure in the pattern observed is dominated by interference effects.
With a double slit, the interference pattern is made up of wide peaks where constructive interference takes place. As more slits are added, the peaks in the pattern become sharper and narrower. With a large number of slits, the peaks are very sharp. The positions of the peaks, which come from the constructive interference between light coming from each slit, are found at the same angles as the peaks for the double slit; only the sharpness is affected.
Why is the pattern much sharper? In the double slit, between each peak of constructive interference is a single location where destructive interference takes place. Between the central peak (m = 0) and the next one (m = 1), there is a place where one wave travels 1/2 a wavelength further than the other, and that's where destructive interference takes place. For three slits, however, there are two places where destructive interference takes place. One is located at the point where the path lengths differ by 1/3 of a wavelength, while the other is at the place where the path lengths differ by 2/3 of a wavelength. For 4 slits, there are three places, for 5 slits there are four places, etc. Completely constructive interference, however, takes place only when the path lengths differ by an integral number of wavelengths. For a diffraction grating, then, with a large number of slits, the pattern is sharp because of all the destructive interference taking place between the bright peaks where constructive interference takes place.
Diffraction gratings, like prisms, disperse white light into individual colors. If the grating spacing (d, the distance between slits) is known and careful measurements are made of the angles at which light of a particular color occurs in the interference pattern, the wavelength of the light can be calculated.
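Since the bright-peak positions satisfy d sin θ = mλ, that calculation is a one-liner. This sketch uses an assumed 600 lines/mm grating and a first-order peak measured at 20°, purely for illustration:

```python
import math

def wavelength_from_grating(d_m, angle_deg, m):
    """Solve d sin(theta) = m * lambda for the wavelength (metres)."""
    return d_m * math.sin(math.radians(angle_deg)) / m

d = 1e-3 / 600                   # 600 lines/mm -> slit spacing in metres
lam = wavelength_from_grating(d, 20.0, m=1)
print(f"{lam * 1e9:.0f} nm")     # about 570 nm
```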
Interference between light waves is the reason that thin films, such as soap bubbles, show colorful patterns. This is known as thin-film interference, because it is the interference of light waves reflecting off the top surface of a film with the waves reflecting from the bottom surface. To obtain a nice colored pattern, the thickness of the film has to be similar to the wavelength of light.
An important consideration in determining whether these waves interfere constructively or destructively is the fact that whenever light reflects off a surface of higher index of refraction, the wave is inverted. Peaks become troughs, and troughs become peaks. This is referred to as a 180° phase shift in the wave, but the easiest way to think of it is as an effective shift in the wave by half a wavelength.
Summarizing this, reflected waves experience a 180° phase shift (half a wavelength) when reflecting from a higher-n medium (n2 > n1), and no phase shift when reflecting from a medium of lower index of refraction (n2 < n1).
For completely constructive interference to occur, the two reflected waves must be shifted by an integer multiple of wavelengths relative to one another. This relative shift includes any phase shifts introduced by reflections off a higher-n medium, as well as the extra distance traveled by the wave that goes down and back through the film.
Note that one has to be very careful in dealing with the wavelength, because the wavelength depends on the index of refraction. Generally, in dealing with thin-film interference the key wavelength is the wavelength in the film itself. If the film has an index of refraction n, this wavelength is related to the wavelength in vacuum by:

λ_film = λ_vacuum / n
Many people have trouble with thin-film interference problems. As usual, applying a systematic, step-by-step approach is best. The overall goal is to figure out the shift of the wave reflecting from one surface of the film relative to the wave that reflects off the other surface. Depending on the situation, this shift is set equal to the condition for constructive interference, or the condition for destructive interference.
Note that typical thin-film interference problems involve "normally-incident" light. The light rays are not drawn perpendicular to the interfaces on the diagram to make it easy to distinguish between the incident and reflected rays. In the discussion below it is assumed that the incident and reflected rays are perpendicular to the interfaces.
A good method for analyzing a thin-film problem involves these steps:
Step 1. Write down Δ1, the shift for the wave reflecting off the top surface of the film.
Step 2. Write down Δ2, the shift for the wave reflecting off the film's bottom surface.
One contribution to this shift comes from the extra distance traveled. If the film thickness is t, this wave goes down and back through the film, so its path length is longer by 2t. The other contribution to this shift can be either 0 or λ_film/2, depending on what happens when it reflects (this reflection occurs at point b on the diagram).
Step 3. Calculate the relative shift Δ by subtracting the individual shifts: Δ = Δ2 − Δ1.
Step 4. Set the relative shift equal to the condition for constructive interference, or the condition for destructive interference, depending on the situation. If a certain film looks red in reflected light, for instance, that means we have constructive interference for red light. If the film is dark, the light must be interfering destructively.
Step 5. Rearrange the equation (if necessary) to get all factors of λ_film on one side.
Step 6. Remember that the wavelength in your equation is the wavelength in the film itself. Since the film is medium 2 in the diagram above, we can label it λ_film. The wavelength in the film is related to the wavelength in vacuum by:

λ_film = λ_vacuum / n2
Step 7. Solve. Your equation should give you a relationship between t, the film thickness, and either the wavelength in vacuum or the wavelength in the film.
Working through an example is a good way to see how the step-by-step approach is applied. In this case, white light in air shines on an oil film that floats on water. When looking straight down at the film, the reflected light is red, with a wavelength of 636 nm. What is the minimum possible thickness of the film?
Step 1. Because oil has a higher index of refraction than air, the wave reflecting off the top surface of the film is shifted by half a wavelength: Δ1 = λ_film/2.
Step 2. Because water has a lower index of refraction than oil, the wave reflecting off the bottom surface of the film does not have a half-wavelength shift, but it does travel the extra distance of 2t: Δ2 = 2t.
Step 3. The relative shift is thus:

Δ = Δ2 − Δ1 = 2t − λ_film/2
Step 4. Now, is this constructive interference or destructive interference? Because the film looks red, there is constructive interference taking place for the red light, so the relative shift must equal an integer number of wavelengths:

2t − λ_film/2 = m λ_film
Step 5. Moving all factors of the wavelength to the right side of the equation gives:

2t = (m + 1/2) λ_film
Note that this looks like an equation for destructive interference! It isn't, because we used the condition for constructive interference in step 4. It looks like a destructive interference equation only because one reflected wave experienced a shift.
Step 6. The wavelength in the equation above is the wavelength in the thin film. Writing the equation so this is obvious can be done in a couple of different ways:

2t = (m + 1/2) λ_film    or    2t = (m + 1/2) λ_vacuum / n_oil
Step 7. The equation can now be solved. In this situation, we are asked to find the minimum thickness of the film. This means choosing the minimum value of m, which in this case is m = 0. The question specified the wavelength of red light in vacuum, so, taking the oil's index of refraction to be n₂ = 1.50: t = λ/(4n₂) = (636 nm)/(4 × 1.50) = 106 nm.
This is not the only thickness that gives completely constructive interference for this wavelength. Others can be found by using m = 1, m = 2, etc. in the equation in step 6.
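To make step 7 concrete, here is a minimal Python sketch of the calculation; the oil's index of refraction, n = 1.50, is an assumed value consistent with the 106 nm answer above.

```python
# Film thicknesses giving constructive reflection for 636 nm red light:
# 2 * n * t = (m + 1/2) * wavelength_in_vacuum.
wavelength_vac = 636e-9  # wavelength of red light in vacuum (m)
n_film = 1.50            # index of refraction of the oil film (assumed)

for m in range(3):
    t = (m + 0.5) * wavelength_vac / (2 * n_film)
    print(f"m = {m}: t = {t * 1e9:.0f} nm")
# Prints 106 nm, 318 nm and 530 nm for m = 0, 1, 2.
```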
If 106 nm gives constructive interference for red light, what about the other colors? They are not completely cancelled out, because 106 nm is not the right thickness to give completely destructive interference for any wavelength in the visible spectrum. The other colors do not reflect as intensely as red light, so the film looks red.
The light reflecting off the top surface of the film does not pass through the film at all, so how can it be the wavelength in the film that is important in thin-film interference? A diagram can help clarify this. The diagram looks a little complicated at first glance, but it really is straightforward once you understand what it shows.
Figure A shows a wave incident on a thin film. Each half wavelength has been numbered, so we can keep track of it. Note that the thickness of the film is exactly half the wavelength of the wave when it is in the film.
Figure B shows the situation two periods later, after two complete wavelengths have encountered the film. Part of the wave is reflected off the top surface of the film; note that this reflected wave is flipped by 180°, so peaks are now troughs and troughs are now peaks. This is because the wave is reflecting off a higher-n medium.
Another part of the wave reflects off the bottom surface of the film. This does not flip the wave, because the reflection is from a lower-n medium. When this wave re-emerges into the first medium, it destructively interferes with the wave that reflects off the top surface. This occurs because the film thickness is exactly half the wavelength of the wave in the film. Because a half wavelength fits in the film, the peaks of one reflected wave line up precisely with the troughs of the other (and vice versa), so the waves cancel. Destructive interference would also occur with the film thickness being equal to 1 wavelength of the wave in the film, or 1.5 wavelengths, 2 wavelengths, etc.
If the thickness is 1/4, 3/4, 5/4, etc. of the wavelength in the film, constructive interference occurs. This is only true when one of the reflected waves experiences a half-wavelength shift (because of the relative sizes of the refractive indices). If neither wave, or both waves, experiences a shift of λ/2, there would be constructive interference whenever the film thickness is 0.5, 1, 1.5, 2, etc. wavelengths, and destructive interference if the film is 1/4, 3/4, 5/4, etc. of the wavelength in the film.
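The case analysis in the last two paragraphs can be captured in a short sketch. Assuming normal incidence and the reflection rule described above (a λ/2 shift only when light reflects off a higher-n medium), the hypothetical function below classifies the reflected-light interference; thicknesses between the exact constructive and destructive values give partial interference.

```python
def reflection_interference(n1, n2, n3, t_in_film_wavelengths):
    """Classify thin-film interference of the two reflected waves.

    n1, n2, n3: indices of the top medium, the film, and the bottom medium.
    t_in_film_wavelengths: film thickness as a multiple of the wavelength
    in the film.
    """
    # A reflection is shifted by half a wavelength only when the wave
    # reflects off a higher-n medium.
    shift_top = 0.5 if n2 > n1 else 0.0
    shift_bottom = 0.5 if n3 > n2 else 0.0
    # Relative shift, in wavelengths: the 2t round trip through the film
    # plus the difference between the two reflection shifts.
    delta = 2 * t_in_film_wavelengths + abs(shift_bottom - shift_top)
    frac = delta % 1.0
    if min(frac, 1.0 - frac) < 1e-9:
        return "constructive"
    if abs(frac - 0.5) < 1e-9:
        return "destructive"
    return "partial"

# Oil (n = 1.50, assumed) on water, as in the example above:
print(reflection_interference(1.00, 1.50, 1.33, 0.5))   # destructive
print(reflection_interference(1.00, 1.50, 1.33, 0.25))  # constructive
```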
One final philosophical note, to really make your head spin if it isn't already. In the diagram above we drew the two reflected waves and saw how they cancelled out. This means none of the wave energy is reflected back into the first medium. Where does it go? It must all be transmitted into the third medium (that's the whole point of a non-reflective coating, to transmit as much light as possible through a lens). So, even though we did the analysis by drawing the waves reflecting back, in some sense they really don't reflect back at all, because all the light ends up in medium 3.
Destructive interference is exploited in making non-reflective coatings for lenses. The coating material generally has an index of refraction less than that of glass, so both reflected waves have a λ/2 shift. A film thickness of 1/4 the wavelength in the film results in destructive interference (this is derived below).
For non-reflective coatings in a case like this, where the index of refraction of the coating is between the other two indices of refraction, applying the step-by-step approach shows that the minimum film thickness is a quarter of the wavelength in the film: t = λ₂/4 = λ/(4n₂).
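As a numerical illustration of the quarter-wave result, here is a brief sketch; the coating index (1.38, roughly that of magnesium fluoride) and the 550 nm design wavelength are assumptions chosen for illustration, not values from these notes.

```python
# Minimum thickness of a quarter-wave non-reflective coating:
# t = wavelength_in_film / 4 = wavelength_in_vacuum / (4 * n_coating).
wavelength_vac = 550e-9  # middle of the visible spectrum (assumed)
n_coating = 1.38         # e.g. magnesium fluoride (assumed)

t_min = wavelength_vac / (4 * n_coating)
print(f"minimum coating thickness: {t_min * 1e9:.1f} nm")  # about 99.6 nm
```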
|
In this farming community, farmers are working the land and keeping animals. Others are building boats and making pots from clay. Women are sewing, weaving baskets, and preparing food. Europe's first inhabitants were hunter-gatherers, who hunted for wild animals and picked plants to eat. By around 7000–6500 BC, some people living in southeastern Europe had settled down to grow crops. It took a few thousand years for farming to spread across Europe. Although people still hunted wild animals and fished in the rivers, they usually lived in simple farming communities, often close to water. They cut down trees to make room for fields. They could grow barley, wheat and other grains, as well as keep livestock. From around 4000 BC, some Europeans could make goods from metal.
Stonehenge as it may have looked soon after completion in 2200 BC
By about 2000 BC, people in Europe had begun to build huge stone monuments for religious worship. To build Stonehenge, which stands on Salisbury Plain in southern England, massive stones had to be dragged across the plain on rollers, placed in deep pits and then hauled upright.
The largest stone circle in Europe is at Avebury, in Wiltshire, England. It was built around 2600 BC and has a diameter of 331.6 m (1088 ft).
|
In 1937, Roosevelt began to scale back deficit spending, because he believed that the worst of the Great Depression had passed and because he was receiving pressure from conservatives in Congress (and even from ardent New Dealers in his own cabinet). The size of the Works Progress Administration, for example, was severely reduced, as were agricultural subsidies.
This decision to cut back spending turned out to be premature, however, as the economy buckled again, resulting in what became known as the Roosevelt Recession. The stock market crashed for a second time in 1937, and the price of consumer goods dropped significantly. Contrary to conservative beliefs, the economy simply had not pulled far enough out of the depression to survive on its own. The embattled Roosevelt only made himself look worse by trying to place the blame on spendthrift business leaders. The American people were not convinced, and as a result, Democrats lost a significant number of seats in the House and Senate in the 1938 congressional elections. This return of Republican power effectively killed the New Deal.
Republicans in Congress further weakened Roosevelt’s executive powers with the Hatch Act of 1939. The act forbade most civil servants from participating in political campaigns and public office holders (i.e., Roosevelt and New Dealers) from using federal dollars to fund their reelection campaigns. The bill also made it illegal for Americans who received federal assistance to donate money to politicians. Conservatives hoped that these measures would divorce the functions of government from the campaign frenzy and ultimately dislodge entrenched New Dealers who preyed on a desperate public for votes.
Despite the numerous positive effects that the New Deal had, it failed to end the Great Depression. Millions of Americans were still hungry, homeless, and without jobs as late as December 1941, when the United States entered World War II. Many historians and economists have suggested that the New Deal would have been more successful if Roosevelt had put a greater amount of money into the economy, but this conclusion is debatable. Only after the surge in demand for war-related goods such as munitions, ships, tanks, and airplanes did the economy finally right itself and begin to grow.
The New Deal was a crucial turning point in the history of the U.S. government. Politics had never before been so involved in—or exerted more control over—the daily lives of regular Americans as it was during Roosevelt’s terms in office in the 1930s. Critics lamented that the United States had transformed itself into a welfare state. Indeed, the budget deficit increased dramatically every year, and the national debt more than doubled in just ten years.
However, the New Deal did in fact help millions of Americans survive the Great Depression. Unlike his predecessor, Herbert Hoover, Roosevelt tried to directly help as many people as the conservatives in Congress and the Supreme Court would allow him to. His New Deal legislation helped create new jobs, build houses and shelters for the homeless, and distribute food to the hungry. New Deal policy also raised agricultural commodity prices, put banks back on solid footing, and greatly improved the national infrastructure. Moreover, the New Deal created a number of long-standing government institutions, such as Social Security, that we still have today. |
The largest canal in the world, the Grand Canal of China wends its way through four provinces, beginning at Beijing and ending at Hangzhou. It ties together two of the greatest rivers in the world – the Yangtze River and the Yellow River – as well as smaller waterways such as the Hai River, the Qiantang River, and the Huai River.
Just as impressive as its incredible size, however, is the Grand Canal’s remarkable age.
The first section of the canal likely dates back to the 6th century BCE, although Chinese historian Sima Qian claimed that it went back 1,500 years earlier than that to the time of the legendary Yu the Great of the Xia Dynasty. In any case, the earliest section links the Yellow River to the Si and Bian Rivers in Henan Province. It is known poetically as the “Canal of the Flying Geese,” or more prosaically as “Far-Flung Canal.”
Another early section of the Grand Canal was created under the direction of King Fuchai of Wu, who ruled from 495 to 473 BCE. This early portion is known as the Han Gou, or “Han Conduit,” and connects the Yangtze River with the Huai River.
Fuchai’s reign coincides with the end of the Spring and Autumn Period, and the beginning of the Warring States period, which would seem to be an inauspicious time to take on such a huge project. However, despite the political turmoil, that era saw the creation of several major irrigation and waterworks projects, including the Dujiangyan Irrigation System in Sichuan, the Zhengguo Canal in Shaanxi Province, and the Lingqu Canal in Guangxi Province.
The Grand Canal itself was combined into one great waterway during the reign of the Sui Dynasty, 581 – 618 CE. In its finished state, the Grand Canal stretches 1,104 miles (1,776 kilometers) and runs north to south roughly parallel to the east coast of China. The Sui used the labor of 5 million of their subjects, both men and women, to dig the canal, finishing work in 605 CE.
The Sui rulers sought to connect northern and southern China directly so that they could ship grain between the two regions. This helped them to overcome local crop failures and famine, as well as to supply their armies stationed far from their southern bases. The path along the canal also served as an imperial highway, and post offices set up all along the way served the imperial courier system.
By the Tang Dynasty era (618 – 907 CE), more than 150,000 tons of grain traveled the Grand Canal annually, most of it tax payments from southern peasants moving to the capital cities of the north. However, the Grand Canal could pose a danger as well as a benefit to the people who lived beside it. In the year 858, a terrible flood spilled into the canal, and drowned thousands of acres across the North China Plain, killing tens of thousands. This catastrophe represented a huge blow to the Tang, already weakened by the An Shi Rebellion. The flooding canal seemed to suggest that the Tang Dynasty had lost the Mandate of Heaven, and needed to be replaced.
To prevent the grain barges from running aground (and then being robbed of their tax grain by local bandits), the Song Dynasty assistant commissioner of transport Qiao Weiyue invented the world’s first system of pound locks.
These devices would raise the level of the water in a section of the canal, to safely float barges past obstacles in the canal.
During the Jin-Song Wars, the Song dynasty in 1128 destroyed part of the Grand Canal to block the Jin military’s advance. The canal was only repaired in the 1280s by the Mongol Yuan Dynasty, which moved the capital to Beijing and shortened the total length of the canal by about 450 miles (700 km).
Both the Ming (1368 – 1644) and the Qing (1644 – 1911) Dynasties maintained the Grand Canal in working order. It took literally tens of thousands of laborers to keep the whole system dredged and functional each year; operating the grain barges required more than 120,000 additional soldiers.
In 1855, disaster struck the Grand Canal. The Yellow River flooded and jumped its banks, changing its course and cutting itself off from the canal.
The waning Qing Dynasty decided not to repair the damage, and the canal has still not entirely recovered. However, the People’s Republic of China, founded in 1949, has invested heavily in repairing and reconstructing damaged and neglected sections of the canal.
In 2014, UNESCO listed the Grand Canal of China as a World Heritage Site. Although much of the historic canal is visible, and many sections are popular tourist destinations, currently only the portion between Hangzhou, Zhejiang Province and Jining, Shandong Province is navigable. That is a distance of about 500 miles (800 kilometers). |
Which Words or Phrases Are Ambiguous?
- Course Objectives:
The course will enable students to:
- Analyze the structure of an argument.
- Understand the way language is used to influence thinking, feelings and behavior.
- Identify ambiguities, assumptions, values, and fallacies in reasoning.
When something is ambiguous, it can be understood in more than one way; its meaning is uncertain or indefinite.
At this point, you should be able to identify the issue, the conclusion, and the reasons, which are the structural elements of an argument. Once you have determined that an argument is structurally complete, you can begin to evaluate the quality of its content. In order to begin to evaluate the quality of an argument you must make certain that you do understand the content. You must be sure that you understand the meaning of the elements of the reasoning structure.
Content, context, or meaning of the key terms and phrases associated with the issue, conclusion, and reasons of an argument is crucial in understanding the argument itself. The acceptability or value of the communicator’s reasoning is completely dependent upon the interpretation of key words and phrases.
Did you ever miss the point that someone was communicating to you? Did you ever have to ask for clarification? If this has happened to you, and you know it has, the problem was probably because of ambiguity.
Read the following passage once, without stopping, and let us see what happens.
The boy’s arrows were nearly gone so they sat down on the grass and stopped hunting. Over at the edge of the woods they saw Henry making a small bow to a girl who was coming down the road. She had tears in her dress and tears in her eyes. She gave Henry a note, which he brought over to the group of young hunters. Read to the boys, it caused great excitement. After a minute, but rapid examination of their weapons, they ran down to the valley. Does were standing at the edge of the lake, making an excellent target. (Source unknown.)
Confusing Flexibility of Words
No pun intended but the above passage should give you pause to think and that is the point. We have a tendency to both speed-read and to assume that the meanings of the words that we encounter are obvious.
In some instances, words are spelled the same way but have different pronunciations and different meanings. (Tears in eyes, tears in dress). In some instances, words are spelled the same, pronounced the same but have a different meaning dependent upon context. (Jam the door, jam on bread).
Abstractness can also lead to ambiguity. Consider the word obscenity. American society spent a good portion of the second half of the twentieth century attempting to define this word. With the changing of societal values came a challenge to what was deemed to be obscene. United States Supreme Court Justice Potter Stewart noted that one man’s art is another man’s pornography. When asked if he could define obscene material, Justice Stewart replied, “I know it when I see it.”
Loaded language, and its intentional use, is another contributing factor that leads to ambiguity. Loaded language is a favorite tool of politicians and advertisers. No politician will tell you that they are going to raise your taxes, but if they want to increase “revenue enhancements” then that is what they’re going to do. A car dealer will tell you that there is no money down and sixty “easy” monthly payments of $350.00. When was the last time you “easily” spent $350.00 on a regular basis?
You should now be aware that not all words have a single meaning. If that were the case, communication would be highly effective and there would be no misunderstandings. Take your time when you listen or when you read and make sure that you do understand the content and context.
TIPS FOR FINDING AMBIGUITIES
*Check the Issue for Key Terms
Once you have identified the issue you should check it for key terms or phrases. Ask yourself if you truly understand the meaning. Could another meaning be substituted which would change the nature of the argument?
*Check the Reasons and the Conclusion for Key Terms
Do you understand the meanings of the key terms and phrases in the reasoning structure? Could another meaning be substituted?
*Check for Abstract Words and Phrases
The more abstract a word, the more meanings you can discover. Be wary of encompassing words, words that can have multiple meanings.
*Use Reverse Role Play
Play devil’s advocate. If you were opposed to the author, how might you define key terms and phrases?
Ambiguity and the Sources of Meaning
Meanings of words usually come in one of three forms: synonyms, examples, and definition by specific criteria. Searching for these sources of meaning while looking for ambiguities can make the process easier. You want to find the most likely meaning of a word when you identify a key term or phrase and knowing the sources of meaning will benefit you.
Watch television, listen to the radio or read a newspaper and you are very likely to encounter a statistic. Most statistics seem very impressive, such as that the average American household is comprised of 4.6 people or that the use of seat-belts has reduced the number of motor vehicle accident fatalities by 32%. While statistics appear incredibly precise, they are usually based on information too inadequate to support such precision.
Professor Hans L. Trefousse, Professor of American History at the City University of New York, always used an interesting metaphor whenever someone presented him with statistical information. He likened statistics to a skimpy bathing suit, because they can show something that is quite revealing, yet hide the essentials. His metaphor is amusing and true. Whenever we encounter numbers, or see a percentage sign, we have a tendency to immediately accept these statistics as fact. When we do this we make the giant assumption that the person who created the statistics, or the person using them, is not mistaken or misleading.
Let us pretend that St. Joseph’s College wants to come up with a new advertising campaign to increase the enrollment of the School of Adult and Professional Education. The college contracts an independent research firm to poll recent St. Joseph’s graduates and recent graduates from the other private colleges in Brooklyn. The result of the poll becomes the new advertising campaign.
The major discovery of the poll was that graduates of the School of Adult and Professional Education at St. Joseph’s College tended to earn more money than the graduates from other colleges. The average household income for a St. Joseph’s graduate was $72,000 a year as opposed to only $31,000 for graduates from the other private colleges. Using this data the School of Adult and Professional Education created a new recruitment campaign whose slogan was “If you want to succeed come to SJC.”
The statistics seem to be impressive. It would appear that if you enroll at St. Joseph’s College you are destined to increase your salary. However, there are quite a number of questions that need to be asked before you can accept the validity of the statistics. These questions focus on the methodology behind the statistics.
How was the survey done? How many people were surveyed? How many people responded? How certain can you be that the respondents were truthful or accurate? The answers to these questions are required in order for you to judge the value of the statistics.
Assume that 180 St. Joseph’s graduates were surveyed and 18 responded. That is only a 10% response rate. Such a low response rate can have a significant impact on the value of the statistics, and there is no way to independently verify the responses of the 10% who did reply.
Now you can begin to appreciate the importance of understanding the methodology of a statistic. If you do not know the source of a statistic, nor how it was gathered, it should be rejected.
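A small simulation can show why such a low, self-selected response rate matters. The sketch below is purely hypothetical: the income range, the response model, and every number in it are invented for illustration and are not from the St. Joseph’s example.

```python
# Hypothetical sketch of non-response bias in a mail survey.
import random

random.seed(42)

# Invented population: 180 graduates with household incomes
# spread between $25,000 and $110,000.
population = [random.uniform(25_000, 110_000) for _ in range(180)]

def responds(income):
    # Assumed response model: higher earners are more likely to reply,
    # with probabilities from 2% to 18% (roughly 10% overall).
    return random.random() < 0.02 + 0.16 * (income - 25_000) / 85_000

respondents = [x for x in population if responds(x)]

true_mean = sum(population) / len(population)
survey_mean = sum(respondents) / len(respondents)

print(f"response rate:    {len(respondents)/len(population):.0%}")
print(f"true average:     ${true_mean:,.0f}")
print(f"surveyed average: ${survey_mean:,.0f}")  # skewed toward high earners
```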
1. Ambiguity is a word or phrase that has two or more possible different meanings. I searched the text to find words and/or phrases that could sound like one thing but mean another. In this article, the author had an agenda that he pushed. He wants his audience to believe that the death penalty isn’t a bad thing when used for heinous crimes. So I had to carefully read through the text, as I knew he was using words and phrases to ensure that he got his point across. One of the phrases, “We must fight fire with fire,” stood out to me. This phrase could mean several things. While he wants people to think of the death penalty in a good light, one could say that he wants to kill those who kill others. Personally I do not believe that this would be something that morally would be appropriate. As someone who is a Christian, I believe that “vengeance is the Lord’s,” and not my own. So to kill someone who killed someone else wouldn’t be a narrative that I would subscribe to. The second phrase that stood out to me, “Life is precious and the death penalty just reaffirms that fact,” sounds contradictory. If life is precious, then why would one want to implement something that intends to kill others? Wouldn’t the lives of everyone be precious, not just those of people who have been killed by heinous crimes?
EXAMPLE OF REPLY
When I think of “ambiguity”, politicians always come to mind. They have such an important role of leading the country and communicating to the public on important topics. But they often use general blanket statements, or cite problems without providing the data to back them up. I guess someone has to keep political fact checkers employed!! But it makes me worried that we’re rewarding those public figures for playing on our trends and emotions – pushing for what “sells”, rather than what is factually accurate. The death penalty also boggles my mind a bit. How can anyone decide that someone should be put to death? I understand “an eye for an eye”, but we can rarely give someone a punishment that is equal to the crime they committed.
2. Can you refute them with quality evidence?
“It is fallacy to argue that the death penalty should be abolished because an innocent person might die” is an ambiguous phrase because the author states both sides but does not have concrete evidence for either side. His response, “I’m not the one who personally killed your son, your father, your brother. I am innocent,” plays both sides with no concrete evidence of his being innocent at all. Saying “I am not the one who personally killed” is an ambiguous statement in itself: he’s saying he is innocent, but he’s not saying he’s totally innocent. It almost sounds like he was there, but no evidence was stated. You cannot refute the quality because there is not enough evidence to support these phrases. The article does not go into enough detail to show evidence for which side of the death penalty debate to agree with, and the fine details of the case were not discussed.
Not a lot to say here. I think we all agree that there were many ambiguous statements that could be easily refuted with good evidence. While working on this assignment, I found myself thinking about how often I use ambiguous statements and why. When I really thought about it – I realized that I usually “felt” that my statement was true and could be proven – I just didn’t have the time or desire to find the data to really back it up. At work, I’ve been trying to do a better job of “managing up”…delivering information to my boss about what’s happening in the org, issues, risks, etc. I often complain about issues, but I don’t collect a lot of data to really show evidence that the problem exists. I need to do a better job at that. |
The New Student's Reference Work/Pigeon
Pigeon (pĭ′jŭn), a name for members of the dove family. There are about 300 species all over the world, being most abundant in the East Indies. Only two are found in the Eastern United States — the wild pigeon and the turtledove. The wild or passenger-pigeon is exceedingly rare. It formerly was very abundant, perching in the forests in such numbers as to break limbs of trees and covering a large territory in their daily flight in search of food. During migration they flew in such large flocks that it would sometimes require days for them to pass a particular point. They were nearly exterminated by wholesale slaughter. The bird is about 17 inches long, with large wings and a long, pointed tail. The male is bluish above, purple brownish-red below, more violet behind, with a black bill and yellow feet. It depended largely upon acorns and beechnuts for food, with occasional feasts on grain and berries. The turtle or mourning dove still is quite common. The long, soft, mournful note of the male during the nesting season is known to nearly everyone in the United States. It is a smaller bird than the passenger pigeon, being about 11¾ inches long. The upper parts are olive grayish brown, the neck iridescent, the breast pinkish and the belly buff; the outer tail-feathers are tipped with white. They nest in isolated pairs, and two broods are produced a year. There is a large number of domestic pigeons, all descended from a wild form generally believed to be the blue rock-pigeon; but there is reason to doubt this. Pigeon-breeding has been engaged in throughout Europe and eastern countries for centuries. It is a favorite pastime in the United States. A great range of variation has been produced by breeding. Some of the more conspicuous varieties are the fan-tail with large spreading tail; the pouter, with inflated breast; the tumbler; carrier; trumpeter; barb; and jacobin. Darwin made use of pigeons in observing the changes produced in animals under domestication, through the influence of artificial selection. The breed which is called the carrier pigeon, or the homing pigeon, is employed for carrying messages. There is no real distinction between doves and pigeons. |
NCERT TEXTBOOK QUESTIONS SOLVED
Q1.Read the passage given and answer the questions:
The harsh working conditions suffered by labourers in Aghanbigha were an outcome of the combined effect of the economic power of the maliks as a class and their overwhelming power as members of a dominant caste. A significant aspect of the social power of the maliks was their ability to secure the intervention of various arms of the state to advance their interests. Thus, political factors decisively contributed to widening the gulf between the dominant class and the underclass.
(i) Why do you think the maliks were able to use the power of the state to advance their own interests?
(ii) Why did labourers have harsh working conditions?
(i) (a) The maliks, being a dominant caste, were very powerful politically, economically and socially.
(b) Because of this power they were able to use the power of the state for their vested interests.
(c) They were successfully able to secure the intervention of various arms of the state for their own benefit.
(ii) The labourers have been working under harsh conditions because, being Dalits, they were not allowed to own land and were compelled to work as labourers on the lands of the dominant caste people.
Q2. What measures do you think the government has taken, or should take, to protect the rights of landless agricultural labourers and migrant workers?
Ans. Measures to protect the rights of the landless:
• Legal abolition of bonded labour:
The bandhua mazdoor (bonded labour) practice in U.P. and Bihar, the Halpati system in Gujarat and the Jeeta system in Karnataka have been legally abolished by the Government of India.
• Abolition of the Zamindari System: The intermediaries between the peasants and the state were the Zamindars. The state passed legislation very effectively, and this system was abolished.
• Tenancy abolition and regulation acts:
These laws discouraged tenancy, or the ‘batai’ system. In West Bengal and Kerala, where CPI-led governments were in power, the tenants got land rights.
• Imposition of the Land Ceiling Act:
According to this act, an upper limit on landholding per owner is fixed. Because of this act, identifying surplus land and redistributing it among the landless became a programme of the state. Vinoba Bhave’s Bhoodan Yojana inspired this legislation, but there are many shortcomings in the act that should be taken care of.
• To improve the condition of landless people living in villages, the state should take appropriate measures, and this whole sector should be organised.
• The economic conditions of villages should be improved by the state. Villages should be well connected to the cities, and job opportunities should be created in the villages. Education and health facilities as well as entertainment facilities should be developed in the villages to discourage migration. MGNREGA is an effective measure in this direction.
• Consolidation of Land: Landowner farmers are given one or two bigger pieces of land in lieu of their several scattered small fields. It may be done as voluntary or compulsory consolidation. This can bring about a lot of efficiency in the agricultural process for the farmer.
Q3. There are direct linkages between the situation of agricultural workers and their lack of upward socio-economic mobility. Name some of them.
• Indian rural society is totally dependent on agriculture. It is the only source of livelihood. Unfortunately land is unevenly distributed, the sector is not organised, and many people in rural society are landless.
• Indian rural society has a patrilineal kinship system. According to the legal system, women are supposed to have an equal right to family property, but in practice this exists only on paper. Because of male dominance, they are deprived of their rights.
•Most of the people in villages are landless and for their livelihood they become agriculture workers. They are paid below the statutory minimum wages. Their job is not regular and employment is insecure. Mostly these agriculture workers work on daily wages.
• The tenants also have lower incomes because they have to pay a large share of the produce to the landowner.
• The ownership of land and its total area determine a farmer’s position and his upward or downward mobility in the socio-economic system. Therefore agrarian society can be understood in terms of its class structure, which is shaped by the caste system.
Although this is not always true: in rural society Brahmins are a dominant caste, but they are not the main landowners, so they are part of rural society but fall outside the agrarian structure. These questions are based on self-study. Students should solve them on their own.
To understand the legal research process, one must first understand how the legal system works. The United States has three systems of government, federal, state, and tribal, each of which has its own set of laws. Elizabeth Reese, The Other American Law, 73 Stan. L. Rev. 555 (2021). The four sources of federal and state law are (1) constitutions, (2) statutes and ordinances, (3) rules and regulations, and (4) case law. Beyond the information in this unit, we will not be discussing or studying any tribal law this year, but you should be aware that tribal laws come from these same four (4) sources, plus a fifth (5) source: customs and traditions.
Constitutions are foundational to our legal systems. Federal and state governmental authority flows from the United States Constitution and the state constitutions. Tribal powers are inherent rather than derived from the federal government, and Tribal nations possess all powers of a sovereign government except as limited by lawful federal authority. The federal government has limited inherent tribal authority by treaty, by statute, and in some instances, by judge-made federal common law. Cohen's Handbook of Federal Indian Law § 4.02 (Nell Jessup Newton ed., 2019). The United States Constitution is the supreme law of the land, and no laws, state or federal, may violate it. Each state has its own constitution. The laws of a particular state may not conflict with the constitution of that state or with the U.S. Constitution.
The Separation of Powers doctrine divides governmental authority among three branches: Legislative, Executive, and Judicial. Some tribes have adopted this three-branches of government system, but not all.
These three branches in federal and state governments are responsible for creating the remaining sources of law. Legislative branches enact laws called statutes and ordinances. Executive branches enforce and implement these laws through rules and regulations. Judicial branches interpret these laws, rules, and regulations by deciding cases and issuing written opinions.
Federal, state, and local legislative branches are each responsible for enacting their own laws.
The United States Congress enacts federal laws—also called federal statutes—that are compiled by subject in the United States Code. For example, in 1993, Congress enacted a statute called the Family and Medical Leave Act, which entitles an employee to twelve weeks of unpaid leave if she has a serious health condition.
The South Carolina General Assembly enacts state statutes that are arranged by subject in the South Carolina Code of Laws. For example, the South Carolina General Assembly passed a law giving family courts the authority to order absent parents to pay child support. That statute is part of the South Carolina's Children's Code.
Federal and state executive branches are responsible for enforcing and implementing federal or state laws.
This is accomplished when Congress or state legislative bodies give federal or state administrative agencies the power to write rules and regulations to implement and enforce the laws they pass.
For example, when the U.S. Congress passed the Family and Medical Leave Act, it gave the U.S. Department of Labor the responsibility and power to write regulations to enforce its provisions.
And, when the South Carolina General Assembly passed the South Carolina Children's Code, it gave the South Carolina Department of Social Services the task of writing regulations to serve as guidelines for family courts to use in calculating the amount of child support an absent parent is required to pay.
Federal and state court systems interpret federal and state statutes and regulations and determine their constitutionality and the applicability of the law to the facts.
For example, the federal court in Miller v. AT&T Corporation, 250 F.3d 820 (4th Cir. 2001), interpreted (1) a federal statute enacted by Congress, known as the Family and Medical Leave Act, and (2) related federal regulations created by the Department of Labor.
The South Carolina Supreme Court in Arnal v. Arnal, 371 S.C. 10, 636 S.E.2d 864 (2006), interpreted (1) a collection of state statutes known as the South Carolina Children's Code, and (2) related state regulations issued by the South Carolina Department of Social Services.
Federal and state courts create case law in their respective jurisdictions in three ways. First, court decisions can create law by interpreting ambiguous language in constitutions, statutes, and regulations.
Second, court decisions can announce new principles of law where no statute or regulation applies, creating a body of judge-made law called common law, a subset of case law. For example, the elements required to recover for claims such as negligence or fraud in South Carolina are determined by case law, not statutes. This means negligence and fraud are common-law claims.
Third, court decisions create case law by applying the law (whether common law, constitutional law, statutory law, or regulatory law) to new situations. Even when a court is not clarifying legal language or adding a new principle of common law, each new judicial opinion adds to the body of case law by providing an example of how the law applies to an individual case.
The principle of stare decisis, which is Latin for “let the decision stand,” means that judges follow prior court decisions when deciding cases that come before them with the same legal issue, meaning (1) governed by the same law and (2) having similar facts. A binding prior court decision is known as precedent.
Stare decisis allows lawyers to advise their clients by predicting the likely outcome of a legal dispute or a proposed action. If the lawyer has researched prior court decisions on the same legal issue, meaning (1) governed by the same law and (2) having similar facts, the lawyer can predict that the outcome may be the same if a court addresses the client's legal dispute or proposed action. |
Adeno-associated viruses (AAVs) contain their own DNA and can inject it inside human cells. In fact all viruses need to do this in order to “survive”. A virus is just made of genetic material packaged in a capsule. It needs to infect a host in order to reproduce.
It is this feature of viruses which led scientists to think that they can be used as vectors, or carriers to deliver a new gene into cells. This approach is called gene therapy. In this approach, some of the viruses’ own DNA is replaced by the gene of interest. The virus carries the gene into the cell and injects it inside the cell. The cell then produces the protein encoded by the gene that the virus was carrying. AAVs can infect both dividing and non-dividing cells.
However, there are a few disadvantages to using AAVs for gene therapy. One of these is the small size of their genome, which is only 4,700 letters long; they can therefore only carry a small piece of DNA. Another disadvantage is the difficulty of producing them in large amounts in the laboratory.
Even though AAVs are harmless and do not cause disease, they are still a foreign material for the body and may cause a mild immune response especially if they are administered more than once. However, multiple administrations may not be necessary as the effect may be long lasting (many years) because the virus may persist in the body. The immune system may also attack the protein encoded by the gene that is being transferred because it has never come across it in the past. These problems may be overcome by drugs that suppress the immune response.
To date, 12 naturally-occurring human AAV strains and more than 100 strains from non-human primates have been discovered. Different AAV strains have shown preferences for targeting different tissues. The availability of many strains increases the potential of AAVs as delivery vehicles for gene therapy.
Find out more about the stages of drug development here. |
Using movement, active decision-making, and group collaboration, students hypothesize and learn the differences between native species (From Here) and introduced species (From Away), and those that cause environmental harm (Invasive) and those that do not (Non-Invasive).
By participating in this activity students will know and understand:
- The differences between a native, non-native, and invasive species.
- Some native and invasive species that are common to their area.
This activity is based upon one of the same name by Kim Fulton in Teaching About Invasive Species, a Green Teacher publication, edited by Tim Grant (2014).
- What are some differences between native, introduced, and invasive species?
- How do some organisms cause harm to species, habitats, and ecosystems?
- What invasive species are established in my area?
BC Curriculum Links
Science Big Ideas
- Plants and animals have observable features. (Grade K)
- Living things have features and behaviours that help them survive in their environment. (Grade 1)
- Living things have life cycles that are adapted to their environment. (Grade 2)
- Living things are diverse, can be grouped, and interact in their ecosystems. (Grade 3)
- All living things sense and respond to their environment. (Grade 4)
- Evolution by natural selection provides an explanation for the diversity and survival of living things. (Grade 7)
Science Curricular Competencies
Numerous Science Curricular Competencies are addressed for all grades, such as:
- Ask simple questions about familiar objects and events. (Grade K)
- Sort and classify data and information using drawings or provided tables. (Grades 1-4)
- Make observations aimed at identifying their own questions about the natural world. (Grades 7-8)
- Experience and interpret the local environment. (Grades K-12)
- Express and reflect personal/shared experiences of place. (Grades K-12)
- Pictures of native, non-native, and invasive plants and animals. Recommended minimum is 15-20 pictures (more if you will do a sorting activity with teams; for older students). Select species that are age-appropriate for your group.
- Start with choosing some of the 48 species in the From Here-From Away Species Cards in the Documents to Download section.
- Gather your own images from old calendars, magazines, photographs from your region, or field ID cards.
- Wondering what invasive and native species are common in your area? Help is an email away. Email Invasive-Wise Education at [email protected] and we can provide you with resources and suggestions.
- 4 signs: “From Here”, “From Away”, “Harms” and “Doesn’t Harm”. For younger students only use “From Here”, “From Away”.
- Outdoor field or large indoor area/gym
Documents to Download
For a general background on invasive species and their impacts, read Background on Invasive Species for Educators.
Understanding the differences between native and invasive species is an important first step in getting to know the natural world around us. By becoming more aware of what is ‘from here’, what is ‘from away’, and what species help to maintain a natural balance—or upset the balance—in an ecosystem, students develop a deeper understanding of the role that each species plays in its ecosystem. By developing a greater awareness of the impacts of invasive species, students can take meaningful action to protect local areas from harmful invasive species and make a positive difference in their communities.
There are many terms used to describe and categorize organisms that are ‘from here’ and ‘from away’. Terms can help us to know and clearly communicate to others the characteristics of a species. Below are some of the most important terms and their definitions relevant to this activity. Remember that there are many subtleties and sometimes species don’t fit neatly into one particular category. But that can make for interesting discussions!
Invasive species are non-native organisms that are ‘from away’, that is, they have been introduced, either intentionally or accidentally, into the environment from other areas. Invasive species have high reproductive output and can spread easily and effectively into new areas. Without their natural pathogens and predators, they are capable of moving aggressively into an area, and monopolizing resources such as light, nutrients, water, and space to the detriment of other species. By definition, an invasive species is a non-native organism that causes harm to the environment, economy, and/or society.
All invasive species are non-native (‘from away’), but not all non-native species are invasive. Non-native species, sometimes called exotic , alien, or introduced species are not invasive if they don’t spread and cause harm to native species and ecosystems. For example, many of our agricultural crops, like wheat and tomatoes, are non-native but only survive with care from people. Similarly, some animals, if introduced or let loose in BC, would not survive our climate, or would not spread and cause negative impacts to the environment.
Native species are organisms that are ‘from here’, that is, they have evolved in a location over many (i.e., thousands of) years. They are adapted to the particular habitat and climate of the region and are a natural part of the food web, including having predators and pathogens. In North America, native species are considered those that were found in a region before the time of European colonization.
Just as some non-native species are not invasive, some native species may cause harm to the environment, economy or society. These may be termed weeds (for plants) or pests (for animals). For example, the mountain pine beetle that has impacted or killed more than 18 million hectares of pine forest in BC, is a wood-boring insect that is native to BC. There was a massive outbreak of the beetle for ~15 years beginning in the 1990s due to numerous factors, including several years of warm winter temperatures. Normally, cold winters would keep populations in check by killing off many beetle eggs and larvae. Although these native insects have invasive qualities, they are not considered an invasive species because, by definition, an invasive species is non-native and is ‘from away’.
Sometimes people use the term weed interchangeably with invasive species. But there are slight differences. A weed is a plant whose presence is undesirable to people in a particular time and place, such as in a garden or lawn. The term ‘weed’ includes a statement about values, not necessarily environmental impacts. A weed could be either native or non-native and isn’t always invasive. For example, a native willow seedling growing in your garden is a weed if you don’t want it there. In contrast, a Noxious Weed is an invasive plant that is regulated by the BC Weed Control Act. This Act imposes a duty upon land occupiers to control these provincially designated, aggressive and destructive species.
Part 1: Set-up and Preparation
- Gather your pictures cards of native, non-native, and invasive plants and animals. Include some easy ones and some that may be more challenging for the level of your group. For younger students, focus on just the two categories of From Here/From Away and some familiar animals and plants, such as From Here: salmon, black bear, Douglas fir, huckleberries and From Away: lions, giraffes, palm trees, mangos. Some invasive species that even younger students may be familiar with include Eastern grey squirrel, European rabbit, English Ivy, Burdock, Himalayan blackberry, and Oxeye daisy.
- Prior to playing the game, do a group brainstorm to gauge the level of knowledge of the students. List some native plants and animals on the board and some non-native species and discuss the concept of invasive species (non-native organisms that cause harm to the environment or society). Identify several invasive species common to your area. Display the pictures and discuss if the group has seen / knows about them.
- Find an appropriate site to play the game where the ground is level, without tripping hazards, and at least 10 meters wide.
- Put up signs in two areas about 10 meters apart: From Here and From Away. Tell the group that you are going to hold up a picture of a plant or animal, and they are to run safely to the From Here sign if it is a native plant or animal, or to the From Away sign if it is an introduced plant or animal. Review definitions and give examples of each. With older students, explain some related terms as well: “native”, “indigenous”, and “from within its normal range” all mean “from here”; “exotic”, “introduced”, “from outside its normal range”, “alien”, and “invasive” all mean “from away”.
Part 2: Play
- Hold up a picture of a plant or animal and say its name. Give the students time to think then say “go”. The students should run to the sign which indicates From Here or From Away. Tell them to make up their own mind and not be swayed by the group. The group is not always right! If everyone is stumped, you could give a clue that provides more information on that species. (If using the From Here-From Away Species Cards you could read off a fun fact about the organism that is printed on the back of the card.)
- Note: This ‘running’ aspect of the game can be modified to any movement to suit student mobility and the size of your space. Mix up the movement as the game goes to include hopping, crawling, crab walk, yoga positions, etc.
- After the dust has settled, discuss whether the plant or animal was ‘from here’ or ‘from away’. Repeat the process for approximately fifteen to twenty plants and animals. You will probably find that the students are more aware and knowledgeable of animals than of plants. It is always worth reminding them of the importance of plants in the ecosystem!
Part 3: Sort and Discuss
- After the running game, have the students work as a group to categorize the pictures of plants and animals into the two groups: From Here (native) and From Away (introduced). Lay the pictures out on the ground under the two signs.
- Discuss the terms “invasive” and “non-invasive” and how some organisms can cause harm to the environment. For younger students, see if they can identify any of the species that are invasive.
- For older students, gather all four signs (From Here, From Away, Harms, Doesn’t Harm) and arrange them into the configuration shown below. Have students work together to sort the species cards into the four categories.
- Remember that only species that are ‘From Away’ are correctly termed an invasive species. Organisms that are native (‘From Here’) and harm the environment may have invasive properties, but they are more precisely termed pests or weeds. (See Background Information for more details.)
| | FROM HERE | FROM AWAY |
| --- | --- | --- |
| DOES NOT HARM the environment or upset the balance of species, habitats, ecosystems | native, indigenous, from within its normal range | exotic, introduced, non-native, alien, from outside its normal range |
| HARMS the environment or upsets the balance of species, habitats, ecosystems | pest, weed | invasive |
- Encourage discussion and ask the students if they have seen any of the plants and animals in the pictures.
Adaptations for Younger and Older Students
- Younger students: Instead of using photographs to categorize, gather an assortment of stuffed animals (black bear, monkey, whale, giraffe, elephant, squirrel, tiger, etc.) to play the game. Make sure to have enough stuffies for each student so that they can ‘adopt’ one for the day and learn about its habitat and if it is ‘from here’ or ‘from away’.
- Older Students: Instead of playing a running game, divide the class up into 2-4 teams. Create a labelled 2×2 sorting grid, as shown above, for each team (e.g. on a posterboard or across 4 desks pushed together where each desk represents one of the quadrants). Give each team a set of species cards that includes representation of at least one species in each of the quadrants. Give the teams 10-15 minutes to work together to sort the pictures. The winning team is the one that got the most right (and who worked well together/cooperated the most!). Discuss the tricky ones that were hard to categorize and any that were surprising.
Share with us!
We’d love to have your feedback and see photos of your students’ learning and participation in this activity. Send to [email protected] for the opportunity to win resources and have your class have a virtual visit with an invasive species expert!
- Have each student choose an invasive species from your region to research. Where are they native to and how did they get here? What are people doing to manage them and prevent their spread?
- Go outdoors to look for examples of native, non-native, and invasive species. Use identification field guides or apps, such as SEEK or iNaturalist, to help identify species.
- Create your own field guide to common species in your region using sketches, photographs, or pressed plant specimens. Include a variety of species that are from here (native), from away (non-native), harm to the environment (invasive or pests and weeds), or don’t harm the environment (introduced/exotic). |
The comet is made of ice and dust and emits a greenish aura, and is estimated to have a diameter of about a kilometre.
A comet that will shoot past Earth and the Sun in the coming weeks for the first time in 50,000 years could be visible to the naked eye.
The comet is expected to pass nearest to Earth on February 1 and will be easy to spot with a good pair of binoculars or even with the naked eye, provided the sky is not too illuminated by city lights or the Moon.
As a fuller Moon could make spotting it difficult, the new Moon during the weekend of January 21-22 offers a good chance for stargazers, Nicolas Biver, an astrophysicist at the Paris Observatory, told the AFP news agency.
The comet was called C/2022 E3 (ZTF) after the California-based Zwicky Transient Facility, which first spotted it passing Jupiter in March last year.
Biver said the comet is made of ice and dust, and emits a greenish aura.
It is estimated to have a diameter of about a kilometre. That makes it significantly smaller than NEOWISE, the last comet visible with an unaided eye, which passed Earth in July 2020, and Hale-Bopp, which swept by in 1997 with a diameter of about 60 kilometres (37 miles).
But the latest visit will come closer to Earth, which “may make up for the fact that it is not very big”, Biver said.
“We could also get a nice surprise and the object could be twice as bright as expected,” he added.
The comet is believed to have come from the Oort Cloud, a theorised vast sphere surrounding the solar system that is home to mysterious icy objects.
The last time the comet passed Earth was during the Upper Paleolithic period, when Neanderthals still roamed Earth.
Biver said there was a possibility that after this visit the comet will be “permanently ejected from the solar system”.
Among those closely watching will be the James Webb Space Telescope. However, it will not take images, instead studying the comet’s composition, the astrophysicist said.
Thomas Prince, a physics professor at the California Institute of Technology who works at the Zwicky Transient Facility, told AFP that the comet’s proximity to the Earth made it easier for telescopes to measure its composition “as the Sun boils off its outer layers”.
This “rare visitor” will give “us information about the inhabitants of our solar system well beyond the most distant planets”, he added.
Prince said another opportunity to locate the comet in the sky will come on February 10, when it passes close to Mars.
|
I'm now at the third major part of my lecture. I want to talk about the federal legislative power. Article I of the Constitution defines the powers of Congress. Specifically, it says that the legislative power will be vested in a Congress of the United States, which is to have all of the powers herein granted. It then proceeds to create Congress in two bodies: the Senate, where every state has equal representation. Every state regardless of size has two senators. And the House of Representatives, where representation is allocated on the basis of population. Article I, Section 8 is quite crucial in listing the powers of Congress. Article I, Section 9 imposes some limits on the powers of Congress.

In discussing the federal legislative power, I want to focus on three major questions. First, I want to talk about how McCulloch versus Maryland defines the powers of Congress. Second, I want to ask what the major powers of Congress are under the Constitution. And third, I want to talk about to what extent the existence of state governments is a limit on Congress' power.

So let me begin with the first of these questions. How does McCulloch versus Maryland define the power of Congress? No Supreme Court case in history is more important than McCulloch versus Maryland when it comes to defining the powers of the United States Congress. I am fond of saying to my students that McCulloch versus Maryland is as to Congress' power what Marbury versus Madison is as to the federal judicial power. McCulloch versus Maryland is a decision from 1819. It lays out a framework with regard to congressional authority, and the relationship between that and the states, that exists to this day.

It's a controversy that involved the creation of the Bank of the United States. Early in the presidency of George Washington, a controversy arose as to whether to create a Bank of the United States, a place to hold the assets of the United States government. George Washington asked various people in his cabinet. Thomas Jefferson, as Secretary of State, said the United States government doesn't have the authority to create a Bank of the United States: Congress can only do the things that are herein granted, and there's nothing in Article I, Section 8 that says Congress can create a bank. He went to his attorney general, Edmund Randolph, and Randolph said Congress doesn't have the authority to create a Bank of the United States; it's not in the Constitution. But his secretary of treasury, Alexander Hamilton, said yes, Congress has the authority to create the Bank of the United States. So isn't it interesting, when we think about things like originalism and framers' intent, that you have these individuals who played key roles in the drafting of the Constitution, and they disagreed among themselves as to whether Congress had the power to create a Bank of the United States? But Washington followed Hamilton's advice. Congress went along, and Congress passed the authority for a Bank of the United States. Interestingly, one of the foremost opponents of Congress having the authority to do this was a congressman then from Virginia, James Madison, who said Congress can only do the things that are mentioned in the Constitution, and there's no mention of a power to create a Bank of the United States.

The Bank of the United States existed through the early part of American history without controversy. The authority for it expired. It went out of existence. Then there was the War of 1812.
And we tend to forget what a difficult war it was for the United States, as England attacked the United States and tried to reclaim its former colony. The United States won the War of 1812, but the United States government experienced a severe liquidity crisis. It didn't have the assets to fight the war and to keep the government going. And so the then president of the United States, James Madison, proposed that a Bank of the United States be created. Remember, of course, that Madison had been a foremost opponent of there being authority for a Bank of the United States. A new Bank of the United States was created. The United States government owned only a 20% share of this bank. And for the first few years, again, it operated without incident. But then the United States experienced a terrible recession. And the Bank of the United States, as it had the authority to do, asked that some of the loans it had made be repaid. Well, some of those loans were to state governments, and it asked that these loans be repaid. The state governments didn't have the money; they too were experiencing financial problems because of the recession. And there was great tension between the United States government, the Bank of the United States, and the states. In one instance, the Bank of the United States went to collect its debts from the state of Maryland, and the state of Maryland objected to having to pay the Bank of the United States. And so what happened was that the state of Maryland filed a lawsuit against the cashier of the Bank of the United States, a man by the name of McCulloch. What we know from history is that McCulloch was a crook. He was the cashier of the Bank of the United States and was regularly embezzling money from it, which is an aside to this story, but it's an interesting background. And the case comes to the United States Supreme Court, where John Marshall is still the Chief Justice. The Supreme Court has to face the question, and here's how it comes up. Maryland has imposed a tax on the Bank of the United States. Maryland has gone to collect the tax on the Bank of the United States. It sues the cashier of the Bank of the United States to collect the tax. Does the Bank of the United States have to pay that tax? That's what McCulloch versus Maryland was all about. The Supreme Court says it's going to address two questions. First, does Congress have the authority to create the Bank of the United States? And second, if so, does the state have the authority to tax the Bank of the United States? The Supreme Court, with John Marshall writing, rules in favor of the United States and against the state of Maryland on both issues. As to the first question, does Congress have the authority to create the Bank of the United States? John Marshall begins by saying that we don't write on a blank slate here. This isn't the first time that Congress has done this. He said Congress, early in American history, created a Bank of the United States when George Washington was president, and that should influence our decision. Now, it's interesting that Congress had created a Bank of the United States then, but its constitutionality had never been challenged in court. Was that practice a sufficient basis to justify creating this Bank of the United States? To what extent do practices influence the interpretation of the Constitution, or should the court just focus on the original understanding? That goes back to what I talked about earlier in terms of the debate over interpretation.
But John Marshall says, writing for the court, we've had a Bank of the United States, and that shows that this is constitutional. Now, the state argued to the Supreme Court that it was the state governments that had created the Constitution, and that the states therefore were sovereign. They said the United States government is just a compact among the states, that sovereignty ultimately rests in the states, and that it doesn't rest in Congress. And John Marshall, in explaining why Congress had the authority to create the Bank of the United States, expressly rejects this. He says it wasn't the states that ratified the Constitution; it was the people that ratified the Constitution. So it's not state governments that are sovereign; it's the people of the United States who are sovereign. Now, I think this is a wonderful, romantic notion, but I'm not sure it's true. Article VII of the Constitution says that the Constitution would take effect when nine states ratified it. Was it really the people who ratified it? There was no national plebiscite, no national referendum on the Constitution. But here John Marshall unequivocally says, the Court unequivocally declares, that it's the United States government that has the sovereignty vested in it by the people; it's not the states that are sovereign. This has been important at times in American history. There have been times when states have wanted to resist what the United States government is doing, and they wanted to claim that the Constitution is just a compact among the states and the states are sovereign. John Calhoun wanted to oppose the abolition of slavery on this basis. Southerners in the 1950s and 60s wanted to oppose desegregation by proclaiming the states are sovereign. McCulloch versus Maryland rejects that. Still talking about the authority of Congress to create the Bank of the United States, John Marshall then says Congress has all powers that are not prohibited to carry out its authority under the Constitution. It's here that John Marshall says we must never forget that it is a constitution we are expounding. He says the Constitution doesn't have, and I'll quote his word, the prolixity, the detail, of a statute. He says the Constitution is just meant to be the outlines of what the government can do. And he says that so long as Congress has a power, it can choose any means not prohibited to carry out that power. And having said that, he finally, in discussing Congress's power to create the Bank of the United States, talks about a clause in Article I, Section 8 called the Necessary and Proper Clause. This provision in Article I, Section 8 of the Constitution says that Congress can take all actions that are necessary and proper to carry out its authority. Now, what the state of Maryland argued here was that the Necessary and Proper Clause means that Congress can do just those things that are necessary in the sense of essential to carrying out its power. Usually when the word necessary comes up in constitutional law, it means essential. But John Marshall says no. John Marshall says the Necessary and Proper Clause just means that Congress has to show that its action is beneficial, is helpful, to carrying out its power. He says the Necessary and Proper Clause is found in Article I, Section 8, which is the powers of Congress, not Article I, Section 9, which is the limits on Congress's power. It's meant to enlarge Congress's power, not to restrict it.
So notice that Marshall says, even before we get to the Necessary and Proper Clause, Congress can choose any means not prohibited by the Constitution to carry out its powers, and he says the Necessary and Proper Clause confirms that; it does not limit it. Why does this matter so much? Well, take an example: a power Congress has under Article I, Section 8, the authority to raise the army and the navy. Think to yourself for a moment of all of the things that Congress might do as a way to raise an army and a navy. It might create a military draft. It might create military bases. It might provide pay for a voluntary army. Gosh, Congress could create a national bake sale to fund the army and the navy. None of those things, or all of an infinite list of others, are mentioned in the Constitution. Never does the Constitution say that Congress can have a draft. Never does it say it can have a national bake sale. But Congress can do all of these things as means not prohibited by the Constitution. And Congress doesn't have to prove that these are essential, that they're the only way to do it. Congress just has to show that they're a helpful, beneficial way of doing it. This is a tremendous expansion of Congress's power. Congress can do literally anything that's not prohibited by the Constitution to carry out its authority. And so the Supreme Court concludes that Congress has the authority to create the Bank of the United States. Having done that, John Marshall then goes on to the second question: is the state's tax on the Bank of the United States constitutional? And having so persuasively established that Congress had the power to create the bank, it's relatively easy for the Supreme Court to explain why the tax is unconstitutional. John Marshall, writing for the court, says the power to tax is the power to destroy. If states could tax the Bank of the United States, they could tax it out of existence. He says the power to create has to imply the power to preserve. So the state, in taxing the Bank of the United States, is acting in a way that's incompatible with Congress's authority. Now, the state could argue that this is a small tax by the state of Maryland on the Bank of the United States; it wasn't going to tax it out of existence. But the Supreme Court says it's not appropriate to let one state tax what in essence is the money from other states. When Maryland is taxing the Bank of the United States, it's taxing money that's coming from Massachusetts and New York, and those aren't people who have representation in the Maryland political process. It's wrong for Maryland to be able to impose a tax on people from other states who don't have representation in the Maryland state legislature. John Marshall says we can't just put confidence in the state governments and trust in them; we can't give them a power that's inconsistent with the existence of the national government. So notice what McCulloch versus Maryland does. It very broadly defines the scope of Congress's power. Congress can now do anything that it finds to be useful in carrying out its authority, so long as it's not an action prohibited by the Constitution. And McCulloch versus Maryland says there's a limit on what the states can do. States cannot act in a manner that's inconsistent with the existence of the national government. And to this day, states cannot tax the federal government. States cannot regulate the federal government in a way that puts a substantial burden on federal activity.
And these are principles that come from McCulloch versus Maryland.
Chapter 19 Celestial Distances
19.1 Fundamental Units of Distance
By the end of this section, you will be able to:
- State the importance of defining a standard distance unit
- Explain how the meter was originally defined and how it has changed over time
- Discuss how radar is used to measure distances to the other members of the solar system
The first measures of distances were based on human dimensions—the inch as the distance between knuckles on the finger, or the yard as the span from the extended index finger to the nose of the British king. Later, the requirements of commerce led to some standardization of such units, but each nation tended to set up its own definitions. It was not until the middle of the eighteenth century that any real efforts were made to establish a uniform, international set of standards.
The Metric System
One of the enduring legacies of the era of the French emperor Napoleon is the establishment of the metric system of units, officially adopted in France in 1799 and now used in most countries around the world. The fundamental metric unit of length is the meter, originally defined as one ten-millionth of the distance along Earth’s surface from the equator to the pole. French astronomers of the seventeenth and eighteenth centuries were pioneers in determining the dimensions of Earth, so it was logical to use their information as the foundation of the new system.
Practical problems exist with a definition expressed in terms of the size of Earth, since anyone wishing to determine the distance from one place to another can hardly be expected to go out and re-measure the planet. Therefore, an intermediate standard meter consisting of a bar of platinum-iridium metal was set up in Paris. In 1889, by international agreement, this bar was defined to be exactly one meter in length, and precise copies of the original meter bar were made to serve as standards for other nations.
Other units of length are derived from the meter. Thus, 1 kilometer (km) equals 1000 meters, 1 centimeter (cm) equals 1/100 meter, and so on. Even the old British and American units, such as the inch and the mile, are now defined in terms of the metric system.
Modern Redefinitions of the Meter
In 1960, the official definition of the meter was changed again. As a result of improved technology for generating spectral lines of precisely known wavelengths (see the chapter on Radiation and Spectra), the meter was redefined to equal 1,650,763.73 wavelengths of a particular atomic transition in the element krypton-86. The advantage of this redefinition is that anyone with a suitably equipped laboratory can reproduce a standard meter, without reference to any particular metal bar.
In 1983, the meter was defined once more, this time in terms of the velocity of light. Light in a vacuum can travel a distance of one meter in 1/299,792,458 second. Today, therefore, light travel time provides our basic unit of length. Put another way, a distance of one light-second (the amount of space light covers in one second) is defined to be 299,792,458 meters. That's almost 300 million meters that light covers in just one second; light really is very fast! We could just as well use the light-second as the fundamental unit of length, but for practical reasons (and to respect tradition), we have defined the meter as a small fraction of the light-second.
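To make the 1983 definition concrete, here is a minimal sketch in Python (an illustration, not part of the original chapter): fix the speed of light by definition, and length becomes a quantity derived from time.

```python
# The 1983 definition fixes the speed of light, so any length can be
# expressed as a light travel time.

C = 299_792_458  # speed of light in m/s, exact by definition

def light_travel_distance_m(seconds: float) -> float:
    """Distance (in meters) light travels in a vacuum in the given time."""
    return C * seconds

print(light_travel_distance_m(1 / 299_792_458))  # 1.0 meter (up to floating-point rounding)
print(light_travel_distance_m(1.0))              # one light-second: 299,792,458 m
```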
Distance within the Solar System
The work of Copernicus and Kepler established the relative distances of the planets—that is, how far from the Sun one planet is compared to another (see Observing the Sky: The Birth of Astronomy and Orbits and Gravity). But their work could not establish the absolute distances (in light-seconds or meters or other standard units of length). This is like knowing the height of all the students in your class only as compared to the height of your astronomy instructor, but not in inches or centimetres. Somebody’s height has to be measured directly.
Similarly, to establish absolute distances, astronomers had to measure one distance in the solar system directly. Generally, the closer to us the object is, the easier such a measurement would be. Estimates of the distance to Venus were made as Venus crossed the face of the Sun in 1761 and 1769, and an international campaign was organized to estimate the distance to the asteroid Eros in the early 1930s, when its orbit brought it close to Earth. More recently, Venus crossed (or transited) the surface of the Sun in 2004 and 2012, and allowed us to make a modern distance estimate, although, as we will see below, by then it wasn’t needed. This transit is pictured in Figure 1.
The key to our modern determination of solar system dimensions is radar, a technique that bounces radio waves off solid objects. A radar dish is pictured in Figure 2. As discussed in several earlier chapters, by timing how long a radar beam (traveling at the speed of light) takes to reach another world and return, we can measure the distance involved very accurately. In 1961, radar signals were bounced off Venus for the first time, providing a direct measurement of the distance from Earth to Venus in terms of light-seconds (from the round-trip travel time of the radar signal).
Subsequently, radar has been used to determine the distances to Mercury, Mars, the satellites of Jupiter, the rings of Saturn, and several asteroids. Note, by the way, that it is not possible to use radar to measure the distance to the Sun directly because the Sun does not reflect radar very efficiently. But we can measure the distance to many other solar system objects and use Kepler’s laws to give us the distance to the Sun.
From the various (related) solar system distances, astronomers selected the average distance from Earth to the Sun as our standard “measuring stick” within the solar system. When Earth and the Sun are closest, they are about 147.1 million kilometres apart; when Earth and the Sun are farthest, they are about 152.1 million kilometres apart. The average of these two distances is called the astronomical unit (AU). We then express all the other distances in the solar system in terms of the AU. Years of painstaking analyses of radar measurements have led to a determination of the length of the AU to a precision of about one part in a billion. The length of 1 AU can be expressed in light travel time as 499.004854 light-seconds, or about 8.3 light-minutes. If we use the definition of the meter given previously, this is equivalent to 1 AU = 149,597,870,700 meters.
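To make the technique concrete, here is a minimal sketch in Python. The 276-second round-trip echo time is a hypothetical value chosen to land near Venus's closest approach, not a real observation; the AU length in light-seconds is the value quoted above.

```python
# Radar ranging: one-way distance = c * (round-trip time) / 2.

C = 299_792_458                    # speed of light, m/s (exact by definition)
AU_IN_LIGHT_SECONDS = 499.004854   # length of 1 AU quoted in the text
AU_IN_METERS = C * AU_IN_LIGHT_SECONDS

def radar_distance_m(round_trip_seconds: float) -> float:
    """One-way distance to a target from a radar echo's round-trip travel time."""
    return C * round_trip_seconds / 2.0

# Hypothetical echo from Venus near closest approach (~0.28 AU away).
d = radar_distance_m(276.0)
print(f"{d:.4e} m = {d / AU_IN_METERS:.3f} AU")
```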
These distances are, of course, given here to a much higher level of precision than is normally needed. In this text, we are usually content to express numbers to a couple of significant places and leave it at that. For our purposes, it will be sufficient to round off these numbers: 1 AU is about 150 million kilometers (1.5 × 10^11 meters), or roughly 500 light-seconds, and light covers about 300 million (3 × 10^8) meters in one second.
We now know the absolute distance scale within our own solar system with fantastic accuracy. This is the first link in the chain of cosmic distances.
Key Concepts and Summary
Early measurements of length were based on human dimensions, but today, we use worldwide standards that specify lengths in units such as the meter. Distances within the solar system are now determined by timing how long it takes radar signals to travel from Earth to the surface of a planet or other body and then return.
The Importance of Caring and Sharing
Caring and sharing are two essential human behaviors that can improve our social, emotional, and mental well-being. They involve empathizing with others, showing compassion, and being willing to help those in need. When we care and share, we create a positive impact on our communities, our environment, and society in general. In this article, we will explore the different ways caring and sharing can make a significant difference in our lives.
Building Empathy through Caring and Sharing
One of the most significant benefits of caring and sharing is that it helps us build empathy. Empathy is the ability to understand and share the feelings of others. When we care and share, we become more aware of the struggles and hardships that others face. It allows us to connect with people at a deeper level and develop a sense of compassion towards them. It also helps us become less judgmental towards others and more open-minded.
Caring and Sharing for Mental Health
Caring and sharing can have a positive impact on our mental health. When we help others, it releases endorphins in our brain, which makes us feel good. It can also reduce stress and anxiety levels and improve our overall well-being. Additionally, when we care and share, it gives us a sense of purpose and fulfillment, which can positively impact our mental health.
Sharing Resources to Enhance Quality of Life
Sharing resources can help enhance the quality of life for those who have limited access to them. It can help bridge the gap between the rich and the poor, and create a more equitable society. When we share resources such as food, shelter, and clothing, we help meet the basic needs of others and improve their living conditions. It can also help reduce waste and promote sustainable living.
Caring for the Environment through Sharing
Caring for the environment is crucial for our survival, and sharing resources can help us achieve that. When we share items such as bikes, cars, or public transportation, we reduce the number of vehicles on the road, which can reduce air pollution. Additionally, sharing tools and equipment can reduce waste and promote sustainable living. It can also help build a sense of community and encourage people to work together towards a common goal.
Empowering Communities through Caring and Sharing
Caring and sharing can help empower communities and promote social change. When we care for our neighbors and share resources, we create a sense of community and empower people to take action towards common goals. It can also help build trust and encourage people to work together to overcome social issues such as poverty, discrimination, and inequality.
Caring and Sharing in the Workplace
Caring and sharing can also have positive effects in the workplace. When we care for our colleagues and share knowledge and resources, it can improve collaboration and teamwork. It can also create a positive work environment, increase job satisfaction, and improve overall productivity. Additionally, when we care for our employees, it can reduce absenteeism and increase employee retention rates.
Encouraging Positive Relationships through Caring and Sharing
Caring and sharing can help encourage positive relationships with others. When we care for our friends and family, it strengthens our relationships and creates a sense of trust and loyalty. It can also help us connect with others on a deeper level, which can improve our emotional well-being. Additionally, when we share our experiences and knowledge with others, it can create a sense of community and promote personal growth.
Caring and Sharing for the Greater Good
Caring and sharing can have a significant impact on society as a whole. It can help promote social change and create a more equitable and sustainable world. When we care for our planet and share resources with others, it can create a more harmonious and peaceful world. Additionally, when we care for our neighbors, it promotes kindness and empathy, which can positively impact our society.
Caring and Sharing in Education
Caring and sharing can also be integrated into education to promote positive values and behaviors. It can help students develop empathy, compassion, and a sense of community. Additionally, when teachers care for their students, it can create a positive learning environment and improve academic performance. It can also help students develop essential life skills such as problem-solving, collaboration, and communication.
Supporting Diversity through Caring and Sharing
Caring and sharing can help support diversity and promote inclusivity. When we care for people from different backgrounds and share resources and experiences, it promotes understanding and acceptance. It can also help reduce discrimination and promote social justice. Additionally, when we care for each other, it creates a sense of belonging and encourages people to celebrate their differences.
Cultivating Gratitude through Caring and Sharing
Caring and sharing can help cultivate gratitude and appreciation for what we have. When we care for others and share our resources, it creates a sense of abundance and generosity. It can also help us recognize the privileges we have and become more grateful for them. Additionally, when we care for ourselves and others, it promotes self-love and encourages us to be kind to ourselves.
In conclusion, caring and sharing are essential human behaviors that can improve our social, emotional, and mental well-being. They can help build empathy, promote social change, and create a more harmonious and peaceful world. When we care for ourselves, our communities, and our planet, we create a more equitable and sustainable world for generations to come. Let’s continue to practice caring and sharing and make a positive impact on our world.
Peace in Ancient Egypt
By Vanessa Davies
The existence of an ancient treaty is exciting enough. But when that 3,200-year-old treaty was concluded 16 years after the cessation of battle, it demands even greater attention. What did peace mean in that context for the Egyptians? As the result of proper action, it was a value frequently represented in the world of monumental depiction, one that united all beings.
The hieroglyphic copy of the treaty settled by Ramesses II and Hattušili III, Karnak, Egypt. (Wikimedia Commons)
The treaty was concluded by Ramesses II of Egypt and the Hittite king Hattušili III in the 21st year of Ramesses' reign, which was the 10th year of Hattušili's reign, corresponding to the mid-thirteenth century BCE. Prior to the settling of the treaty, a major confrontation between Egyptian and Hittite forces had occurred at the infamous battle at Qadesh. There, in the northern Levant, the forces of a young Ramesses II clashed with those of Muwatalli, the brother of Hattušili.
Ramesses in battle at Qadesh, Ramesseum, Egypt. (Wikimedia Commons)
In the intervening 16 years between the battle at Qadesh and the treaty, many changes occurred on the Hittite throne. Yet no record exists of any further major military engagement between the two sides. So it sounds incongruous that, as the beginning of the treaty tells us, a messenger arrived in the Egyptian capital with a silver tablet bearing a request from Hattušili for peace. Why then, and what was meant by "peace" in the Egyptian version of the text?
Hattusili III (right) pours a libation before a deity (left), Firaktin relief, Turkey. (Wikimedia Commons)
Copies of the treaty were found written on tablets in cuneiform at the Hittite capital, Hattusa. In Egypt, copies are found carved on the walls of the Temple of Karnak and the Ramesseum at Thebes. From these monumental contexts, we must deduce the meaning of the Egyptian word for "peace."
There are more than a few ancient Egyptian words that we translate as "peace" or a similar synonym. In order to understand the meaning of peace in the treaty, one must focus on the word used in that text: hetep. In hieroglyphs, hetep is written as a loaf of bread placed in the center of a reed mat.
The word hetep written in hieroglyphs. On top is the bread loaf on reed mat (phonetic H-T-P), and underneath are the repeated phonetics, the square (P) and the half-circle (T). On the right is the nefer hieroglyph. (Wikimedia Commons)
Besides meaning "peace" in the Ramesses-Hattušili treaty, the word hetep has other senses, including "rest," "offering," and "contentment." Hetep is the "rest" of a deity in its shrine, the dead in the tomb, and the Egyptian king on the throne. As "offering," hetep is the food, drink, ointment, incense, and other goods presented by the king to deities and by the living to the dead. It is the "offering table," which often takes the shape of the hetep hieroglyph. Hetep is also the "contentment" that a deity experiences because of particular actions of the king, such as building a temple or increasing the number of festivals for the deity. It is also the "contentment" of two disputing parties who have come to an agreement and the "contentment" that the recipient of offerings (also hetep) experiences.
Stone offering table, Walters Art Museum 2291. Note that the table takes the shape of the hetep hieroglyph. Carved on the table top are a variety of types of food and another representation of the hetep hieroglyph (at the top of the table and upside down to the viewer). (Wikimedia Commons)
The underlying meaning that unites these different uses of the word hetep connects to the Egyptian idea of maat. The abstract concept of maat, embodied by the goddess Maat, refers to the proper order of life on earth and the proper order of the universe. The natural order of life encapsulated as maat is, for instance, the progression of stars across the night sky and the annual rising and falling of the Nile floodwaters. When individuals act towards another in accord with maat, the recipient experiences hetep. Hetep is the result of action in accord with maat.
The Egyptian goddess Maat. The feather in her headband is the hieroglyph for the word maat. (Wikimedia Commons)
The concrete meaning of hetep, the items that are "offerings" to deities and to the dead, is a physical representation of the abstract concept of "peace, contentment, and rest," which is what the recipient experiences when an actor, such as the king building a temple or a child remembering dead parents, acts in a just or proper (maat) fashion. The offering scene is a way to represent an individual's maat-action and production of hetep for another.
The Meroitic king Aqramani presenting a burnt offering to deities, Dakka. In the center of the offering is the hetep hieroglyph. (Wikimedia Commons)
Thus, the "peace" described by the word hetep must be understood from a particularly Egyptian perspective. In his battle reliefs at Karnak, the Egyptian king Seti I is also hetep when he sees blood after chopping off the heads of troublesome Bedouin. The Bedouin, we can safely presume, were not so content or at peace with Seti's action.
Seti returning to Egypt with captured bedouin prisoners, Karnak. (Wikimedia Commons)
But hetep is not simply a way to indicate approval for any action of the Egyptian king. Maat-actions entail particular codes of behavior. So hetep is quite unlike the concept of peace described in the words of the Celtic tribal leader in Tacitus's Agricola: "Now we are between the ocean and the Romans, who in their greed of money and conquest spare no one; who call massacre and plunder, empire; and the desert they have made, peace."
Bronze offering table with model vessels, Abydos, British Museum EA 5315. On both the short and the long sides, the table takes the form of the hetep hieroglyph. (Wikimedia Commons)
An Egyptian account of events on the Qadesh battlefield gives us an additional glimpse of the complexity of hetep. With the Egyptian and Hittite forces failing to make headway against one another, Muwatalli sent Ramesses a message, asking for hetep. The Egyptian generals, when advising Ramesses to agree to the request, say, "There is no blame in hetep when you do it. Who will respect him (i.e., the Hittite king) when you are angry?" (A different account reads, "Who will resist you when you are angry?")
Statue group and offering table, Abydos, Louvre 228. The offering table is in the shape of the hetep hieroglyph, and it has a hetep hieroglyph depicted in its center (both upside down to the viewer). (Wikimedia Commons)
The Egyptian generals try to mitigate Ramesses's misgivings by reassuring him that there is no blame in agreeing to hetep rather than orchestrating a defeat on the battlefield. The generals' speech signals to us that someone, if not Ramesses then perhaps other Egyptian officials, military personnel, or even the divine, might assign him some blame in that situation.
Taharqa presents offerings to deities, Gebel Barkal. The hetep hieroglyph appears twice in the offering table, one on top of the other.
With hetep, the type of rhetoric and the creation of community are central. Whether in the offering of goods or in battle, the giving or causing of hetep involves the donor establishing a relationship with the recipient. Through their interaction, the donor recognizes the recipient and behaves towards the recipient in a manner appropriate to their respective social roles.
Stela depicting the lector priest Siamun and his mother, the singer Amenhotep, receiving offerings from Siamun's wife, the chantress Iretnofret. The hetep hieroglyph appears in the center of the offering table between the figures. (The Metropolitan Museum of Art)
From the Egyptian perspective, the Ramesses-Hattušili treaty produces hetep in just that way: the two kings establish a relationship with one another, they each recognize the other, and they affirm that they will behave in ways appropriate to their social roles. The two kings recognize one another when they list their genealogies going back two generations. In the treaty's terms, they detail the specific actions they will take with regard to one another, actions that are deemed appropriate (maat) from an Egyptian perspective. The two kings thus produce hetep for one another.
Nome deities presenting goods, temple of Ramesses II at Abydos. The hetep hieroglyph appears in the center of the goods. (Wikimedia Commons)
The cycle of maat-action and production of hetep played out in the monumental texts and art of Egypt: the stone temples and tombs of the elite and the statues and stelae of the elite and non-elite. But the actions, depictions, and statements of individuals in monumental depictions do not necessarily bear resemblance to a lived reality. In the same way that we do not imagine that Ramesses always looked as physically fit as his temple depictions suggest, neither should we presume that Seti was necessarily hetep in an emotional sense at seeing blood when he decapitated the Bedouin.
The world of monumental depiction had certain purposes and was not intended to be a "factual," eyewitness, or unbiased account, but a testament to a set of values. In that depicted world, all members could cause or give hetep to any other member, and so a community existed there that joined the living and the dead, Egyptian and foreigner, deity, king, and non-royal. Like offering scenes to deities, the Egyptian-Hittite treaty functions as a record of the Egyptian king's maat-action and production of hetep.
Artist's practice carving of a king holding up goods in the traditional pose of offering in front of a deity, Walters Art Museum 22266. (Wikimedia Commons)
Vanessa Davies has a Ph.D. from the University of Chicago.
We have lots of opportunities to encourage writing throughout the indoor and outdoor environments and in our handwriting lessons.
Please encourage your children's writing (even if you cannot understand it!); they are proud of their efforts and feel confident with their writing tools.
We will have lots of practice writing the letters in our phonics and handwriting lessons. The children are encouraged to use their sound mats to form the letters correctly.
Later we will work on writing a good sentence which includes a capital letter at the start, finger spaces between each of the words and a full stop at the end.
An extra challenge is to include some WOW words (adjectives) in our sentences to make them sound more exciting.
Challenge - Can you write a super sentence using all of the above things?
Looking at Traditional Tales will give us lots of different ways to start our own stories, such as:
We will also have a look at ways to end our story, such as:
Challenge - Can you write a short story using one of these beginnings and endings?
What were some of the earliest known reptiles?
Two of the earliest known reptiles, Hylonomus and Paleothyris, descended from amphibians during the Middle Carboniferous period of the Paleozoic era.
Just like this paddle tail newt, ancient amphibians survived on both land and sea; they were the first animals to survive for extended periods outside the water (iStock).
The best evidence of the change from amphibian to reptile was the early reptiles' high skulls (evidence of additional jaw muscles) and thicker egg shells. The Hylonomus still claims the prize (so far) as the oldest-known reptile; it lived about 315 million years ago. The Paleothyris evolved about 300 million years ago. The fossils of both these reptiles were found near Nova Scotia, Canada, in ancient tree stumps. Apparently, the animals fell into the tree stumps in pursuit of insects or worms. There they were trapped and eventually died.
Why did the reptiles dominate during the Mesozoic but not the amphibians?
Besides depending less on water than amphibians did, there are probably two main reasons why reptiles became dominant in the Mesozoic. First, reptiles developed adaptations in their skeletal structure, allowing them to move much more quickly than amphibians. Second, during the Permian period the climate became hotter and drier, and many water sources disappeared. The reptiles' new adaptations, from the development of scales to retain water to eggs that could survive out of water, allowed them to thrive where amphibians could not.
Did some reptiles return to the oceans?
Yes, as the reptiles spread out over the land, some of them returned to the water. Over a period of time, they evolved and adapted to the water again. Their legs gradually evolved back into fins and flippers; their eyes adapted to seeing underwater; and their bodies became streamlined for better speed in the water. In addition, they could no longer lay their eggs on land. Thus, they evolved a way of producing living young within their bodies, a trait called ovoviviparity. The Ichthyosaurs, or fish lizards, were the most fishlike true reptiles.
Though it looked much like modern fish do today, the Ichthyosaur was still a reptile, and one of the first true reptiles to live exclusively in the water (iStock).
How are reptiles grouped?
During the 100 million years after the first reptiles appeared, various reptile lines continued to evolve. Today, it is difficult to find agreement about reptile classification. In most cases, they are divided into four living orders (the others have died out over time):
Crocodilia: Crocodiles, alligators, gharials, and caimans, comprising 23 known species.
Squamata: Lizards, snakes, and the worm lizards, or amphisbaenids, which make up about 7,900 species.
Testudines: Turtles and tortoises, which include about 300 species.
Sphenodontia: The endangered tuatara, which can only be found in New Zealand and consists of two species.
There is also another, older method of grouping reptiles: subclasses according to the positioning of the temporal fenestrae, the openings in the sides of the skull behind the eyes. These are the anapsids, synapsids, diapsids, and euryapsids. The anapsids had no openings in the skull and eventually evolved into today's turtles and tortoises. The synapsids, or "same hole," had a low skull opening; they were the ancestors of modern mammals (and are now not considered to be true reptiles). It was the animals of the diapsid line, or "two skull openings," that eventually gave rise to the dinosaurs. Another debatable line is the euryapsids, characterized by a single opening on the side of the skull; they are now usually included with the diapsids.
On March 28, Florida Governor Ron DeSantis signed a controversial bill into law known as the “Don’t Say Gay or Trans” bill. The law, which is broad and vague, prohibits instruction about sexual orientation or gender identity in grades K-3 and bans instruction about sexual orientation or gender identity in grades 4-12 if it is deemed “not age-appropriate or developmentally appropriate for students.” Because primary school instruction is generally discussion-based, some schools have interpreted the law to restrict anything that might elicit discussion about LGBTQ+ people or issues. The law has also imposed new notification requirements on teachers and schools, causing fear that schools will be forced to “out” students and subject them to harm. The result is that the law has silenced LGBTQ+ people and families and left students, teachers, and families guessing about what it means to comply with the law.
Given the law’s ambiguity, there may be confusion about what it does and means for LGBTQ+ youth and their families. Here are some frequently asked questions:
Yes, we all are.
When discussing the spread of infectious diseases, it is useful to understand the concept of “herd immunity.” Infectious diseases spread from individual to individual within the population (or herd). The more individuals a person in the infectious stage of a disease contacts, the more quickly the disease spreads throughout the population. Isolation, quarantine, and physical distancing strategies are all ways to curtail this spread.
So is immunization. We tend to think of immunization as protection for the inoculated individual, and it does indeed do that. But it also protects others in the population. Individuals who have become immune to a disease, whether by vaccination or by contracting the disease and recovering from it, can no longer infect others. By reducing the number of persons who can spread a disease, the progression of it through the population is slowed or halted. Individuals who are not immune are less likely to contact a person who can infect them. This is the basis of herd immunity.
CDC graphic explaining herd immunity
Herd immunity does not mean that every individual in the population is immune to the pathogen. In fact, some individuals may not be able to become immune, due to immunodeficiency or immunosuppression. The best way to protect these individuals is through herd immunity, by making it less likely that they would come in contact with someone who is infectious.
Some important things to remember about herd immunity include that it is specific to a particular pathogen, and that it takes time to develop. The length of time required will vary by location and by characteristics of the pathogen. A higher level of immunity among the population is required to establish herd immunity from highly contagious diseases. Those individuals who are immune can come out of isolation and begin to go about their normal daily activities without fear of either contracting the disease or passing it along to others.
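The point that more contagious diseases demand a higher level of population immunity can be quantified with the standard threshold from the basic SIR model: 1 - 1/R0, where R0 is the average number of people an infectious person infects in a fully susceptible population. The formula and the sample R0 values below are textbook approximations, not taken from this piece; a minimal sketch in Python:

```python
# Herd-immunity threshold from the basic reproduction number R0,
# using the standard simple model: threshold = 1 - 1/R0.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt sustained spread."""
    if r0 <= 1.0:
        return 0.0  # with R0 <= 1, an outbreak dies out without herd immunity
    return 1.0 - 1.0 / r0

# Illustrative, commonly cited approximate R0 values:
for disease, r0 in [("seasonal influenza", 1.3), ("polio", 6.0), ("measles", 15.0)]:
    print(f"{disease}: R0 ~ {r0}, immunity needed ~ {herd_immunity_threshold(r0):.0%}")
```

Measles, with its very high R0, requires roughly 93% immunity, which is why even small drops in vaccination coverage can allow it to spread again.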
On October 29, 1922, as Benito Mussolini’s Black Shirt paramilitaries threatened to seize power in Rome, King Victor Emmanuel III appointed Mussolini as Prime Minister. It was a coup d’état that ushered in two decades of dictatorship and military disaster.
Mussolini’s was the first fascist party to come to power in Europe. The Partito Nazionale Fascista was a model for Adolf Hitler and the Nationalsozialistische Deutsche Arbeiterpartei: Hitler’s first attempt at seizing power came a year later in Munich. Although German Faschismus would outdistance Italian fascismo in ruthlessness and mass murder, the Italians were nevertheless the Germans’ first teachers, particularly when it came to colonialism. As historian Patrick Bernhard explains,
For Nazi Germany, Italian practices and experiences in colonial population management served as a model and “best practice” example, crucially informing German plans for the settling and ethnic remaking of Eastern Europe.
Bernhard notes that histories of the Axis have tended to concentrate on the failures of the alliance, riven as it was by “deep-seated nationalist and racial resentments.” But Italian colonialism was something else. Before Mussolini, Italy had established colonies in Eritrea (1882), Somalia (1889), and two provinces of Libya (1911). The fascist regime, however, envisioned a new form of “settler colonialism that was totally directed from above and which foresaw the transfer of millions of colonists.”
From the fascist take-over to 1931, Italy waged “an extremely brutal colonial war” in North Africa, killing an estimated 10 percent of Libya’s population as it reasserted Italian control there. Abyssinia was similarly reduced by a brutal war in 1936.
Millions of Italians were supposed to colonize these African possessions. Geologists, agronomists, and ethnologists preceded them to find ideal locations for towns and farms. Some 20,000 colonists were sent to Libya in 1938 to great fanfare, accompanied by the German Labor Attaché. Forty towns on strict grid patterns, with social and party centers, were laid out. Each family had everything it needed to start a new life, provided with “a modern and completely standardized house with running water, furniture, workhorses, and food supplies for the first weeks.”
By 1940, there were 40,000 colonists. “Life and work in these settler towns had more features of totalitarianism than in the Italian mainland,” writes Bernhard. In 1943, when Britain and France took control of Italy’s Libyan colonies, there were 150,000 Italians in North Africa.
“Colonial scientists” drew up plans for a “demographic colonization.” Settlement policy was to increase Italy’s birth rate and put a stop to the rural exodus to cities in Italy itself. The plan was nakedly eugenic: it was supposed to “improve” the Italian population, creating “a new breed of Italian,” the New Man of Fascism: an army of “soldier-peasants” who would expand and defend the Empire through Egypt, Sudan, and the Horn of Africa. Italians from various parts of the peninsula would mix, breaking the traditional identifications (and enmities) of Italy’s regions.
Fascist propaganda about the Libyan project hooked some British and American commentators, but it was the Nazis who were really enraptured. Africa italiana and the permanent mobilization of the Italian population came to serve “as a prism through which the Germans entertained their own visions of empire” in eastern Europe. Heinrich Himmler, Rudolf Hess, Robert Ley, and Hermann Göring all visited North Africa before the formal beginning of the Second World War in 1939.
Based on the Italian model, the Nazi Generalplan Ost called for the settlement of 16 million in a racially purged eastern Europe. Plans for German towns-to-be even included central squares on the Italian model; such squares were not characteristic of German town-planning.
Of course, the Nazi settlement of “Aryans” in the Ost never happened. The invasion of the Soviet Union—which cost Italy some 84,000 dead—was defeated at Stalingrad in early 1943. Only the preliminary act of the Lebensraum fantasy to reshape an entire continent’s population, the extermination of Jews and others considered undesirable, had gotten underway before the Nazis were defeated in 1945.
Although Nazi colonization plans came to nothing, Germans whose ancestors had migrated east beginning in the reign of the Holy Roman Empire, centuries before the Nazi madness, paid a great price for it. At least twelve million ethnic Germans were expelled from, or fled, eastern and central Europe between 1944–1950.
UFCW members working in packing and processing know that we have the ability to influence wages, benefits and working conditions when we stick together and make our voices heard at work. There is real power in numbers and in solidarity. And, when union workers raise the working and living standards in their community, other businesses follow suit. That’s why the more engaged union members there are in this country, the better off everyone is. Throughout our history, when unions are strong and workers are united, wages go up, health care coverage improves and pensions are strengthened.
History shows that the more unionized meatpacking workers there are, and the more we stick together in solidarity—the more likely it is we can raise wages and conditions for ourselves—and across the packing, poultry, and food processing industries.
A Proud History
It’s been more than 100 years since Upton Sinclair’s 1906 novel The Jungle exposed the terrible working conditions and unsanitary practices going on inside America’s meatpacking plants. The book shocked the nation and that year the Pure Food and Drug Act and the Federal Meat Inspection Act were passed to protect consumers from unsafe meat. Packinghouse workers, however, would have to protect themselves. Workers on the kill and processing floors of plants around the country began uniting in unions to raise their pay and working conditions.
During the 1920s, black workers began entering packinghouses and earning skilled positions as butchers on the killing floors. During the early 1930s—and thanks in part to the New Deal’s pro-labor policies—black, white, and immigrant workers of all backgrounds took the lead organizing packinghouse workers in Chicago. These workers overcame ethnic and racial tensions in meatpacking plants that had kept workers divided and unable to unite at the bargaining table.
United Packinghouse Workers of America (UPWA) was formed in 1943. Because of their large, active, and committed membership, UPWA was able to wield real power at the bargaining table. Through their solidarity, the workers of the UPWA were able to successfully bargain for increased wages and better working conditions. And they were able to use their tremendous power to benefit our entire society.
Not many people at that time believed that equal pay for black workers was possible—but unionized packinghouse workers had equality written into their contracts. And talking about pay equity for women did not become politically acceptable until the 1970s, but packinghouse workers had it written into their contracts in the 1950s. It was a union ahead of its time: regardless of color, sex, or immigrant status, union meatpackers got equal pay for equal work. These meatpackers built a strong, powerful union that would defend their interests as workers and defend their civil rights as well—a tradition that the UFCW is proud to carry on today.
A Changing Industry
In the 1980s the meatpacking industry began to experience major change. Business in the railroad stockyards and city packinghouses declined rapidly. Instead, packing plants arose in rural areas near livestock feedlots. These new plants were equipped with power saws and mechanical knives for a more efficient “disassembly line”. New companies like Iowa Beef Processors (IBP) used financial, technical and engineering power to change the face of the industry. They competed with other companies by increasing worker speed and productivity while cutting labor costs. Other companies either followed suit, or lost out. Small, local and regional companies closed or were bought out by giants like Tyson and Smithfield—and these companies grew into industry leaders. Now, five mega-corporations control more than 80% of the market.
These big, powerful companies continued to increase production speed, increasing the hazards for workers. Companies closed union plants and moved operations to states with right-to-work laws that made it difficult for workers to organize themselves into unions and fight for safer line speeds or wage increases. Workers who did seek to organize were met with employer resistance in the form of intimidation.
Today, workers have lost power at the bargaining table. Giant meatpacking and food companies are more determined than ever to keep labor costs as low as possible and production as high as possible. This means hiring cheap labor, maintaining intolerably high line speeds, and demanding cuts in wages and benefits from unionized facilities. Many companies actively discourage workers from forming unions. In fact, a recent study by American Rights at Work revealed that 25% of employers fire at least one pro-union worker during worker organizing campaigns.
Other companies actively exploit our broken immigration system, purposely recruiting and hiring undocumented immigrants to create a disposable workforce. These immigrants often don’t speak English and aren’t aware of labor laws or their rights on the job. It’s a vulnerable, easily-intimidated workforce too afraid to speak out when their paychecks aren’t right, when working conditions are not safe or even when there’s a potential problem with the food they’re producing.
This has resulted in an industry where workers have less bargaining power, where it’s becoming harder and harder to earn enough to support families, and where it’s becoming less safe to work. In early 2005, Human Rights Watch released a report entitled “Blood, Sweat, and Fear: Workers’ Rights in U.S. Meat and Poultry Plants,” which concluded that the working conditions in many of America’s meatpacking plants violated basic human and worker rights. This was the first time the human rights organization had criticized a single U.S. industry.
There’s still power in numbers
What the UPWA knew back in the 1930s still holds true today. Workers have the ability to influence their wages, benefits and working conditions when they’re unionized. There is real power in numbers and in solidarity. In fact, union meatpackers earn 15% higher wages than non-union meatpackers. 81% of union workers have job-related health coverage, while only 50% of non-union workers do—and union families pay 43% less for family coverage than non-union families. It’s called the “union premium.” And when union workers raise the working and living standards in their community, other businesses follow suit. That’s why the more union members there are in this country, the better off everyone is. Throughout our history, when unions are strong, wages go up, health care coverage improves and pensions are strengthened. When unions are under attack, as they are today, we are all in danger – our jobs, our communities and our families.
We can build our union for power.
History shows that the more unionized meatpacking workers there are, and the more we stick together in solidarity—the more likely it is we can raise wages and conditions for ourselves—and across the packing, poultry, and food processing industries. Think about this—for those of us who work in a “Right to Work” state, our power at the bargaining table is measured by the number of union members in our plant. The fact is that workers in plants with more union members earn more money.
For those of us who work in a union shop state, our power at the bargaining table is determined by how many plants in our area are union vs. non-union. Each time we go to negotiate our contract, our company points to the non-union plant down the road—or across the state line—as competition.
If those plants were union, it would be a completely different story.
For example, Tyson workers have a union in 25 plants—but workers in another 45 plants don’t have union representation. Workers in those non-union plants don’t want to make less money, earn fewer benefits, or work in unsafe working conditions. But they do, because they don’t have a union. Sometimes, workers don’t have a union because their employer actively tries to keep the union out. Other workers haven’t tried to organize their plant because they simply don’t know the benefits.
If unionized meatpacking workers came together to organize those plants, we could really build power at the bargaining table. We could raise wages and working conditions for food workers across the whole industry. Together, we can do what our predecessors once did.
By uniting in unions we could:
- Advocate for a reasonable pace of production and line speed, and ensure proper staffing to reduce on-the-job injuries.
- Improve food safety by providing support for whistleblowers
- Negotiate wages and benefits that pay the bills, let us raise our families, put our kids through school, and live the American Dream.
- Stop abusive, exploitative, and disrespectful treatment of workers by management.
- Take pride in our jobs, knowing we have a voice and a say in the decisions that impact us.
Milestones in Black History
African Americans and their contributions to American society and culture are honored each February with Black History Month. Since arriving in America in 1619 as slaves, African Americans have fought for their independence and to be seen as equals. These struggles have produced many historical icons and events that make all Americans proud.
Events such as the Emancipation Proclamation, which gave slaves their freedom, and the Civil Rights Movement often divided the country at first. But these struggles, along with individuals such as Booker T. Washington, a former slave who became a great educator, and Sojourner Truth, who was not only an abolitionist but also a women’s rights activist, eventually brought all Americans together to a place of greater peace, understanding, and acceptance of each other. One of the most recent historical events in black history is the election of the first African American president, Barack Obama.
Below is a chronological list of many events that shaped black history and some information about the brave men and women who led the way for today’s generation:
Nat Turner Slave Revolt, 1831
- The Confessions of Nat Turner – A publication written by Thomas R. Gray, who spoke with Nat Turner after he was arrested for the 1831 revolt against slavery.
- Frederick Douglass – A biography of the great abolitionist, editor, orator, author, reformer, statesman, minister, and women’s suffragist.
The Emancipation Proclamation, 1862 & 1863
Underground Railroad, 1850-1860
- Underground Railroad – Learn more about the network of routes and abolitionist safe houses that helped slaves escape captivity.
Booker T. Washington
NAACP founded, 1909
- NAACP Official Site – The NAACP is one of the oldest U.S. organizations to fight for civil rights.
The Scottsboro Boys, 1931
- Part 1 and Part 2 – View videos about the Scottsboro Boys case.
The Tuskegee Airmen, 1941
- Tuskegee Airmen, Inc. – The official site of the brave men who became the first African American military aviators.
- Tuskegee Airmen – Information about the famous fighters at the National Museum of the U.S. Air Force.
This Is Our War
- The Crisis – Read an article about W.E.B. Du Bois' July 1918 call that blacks be allowed to fight in World War I, saying "If this is our country, then this is our war."
- Jackie Robinson – A biography of Jackie Robinson, the first African American player in Major League Baseball.
Rosa Parks and the Montgomery Bus Boycott, December 1955
- Rosa Parks – An NPR article about the death of Parks in 2005 and excerpts of an interview with her in 1992 about the bus incident.
Integration of Central High School, 1957
- The Legacy of Little Rock – Little Rock's Central High School was the site of the first forced school desegregation in 1957, carried out by the Little Rock Nine, a group of nine African American students.
Martin Luther King, Jr.
- “I Have a Dream” – Audio from the August 28, 1963 speech Martin Luther King, Jr. gave on the steps of the Lincoln Memorial.
- The King Center – A center dedicated to advancing the philosophies and legacy of King.
The Black Panther Party, 1966
Thurgood Marshall, 1966
Million Man March, 1995
Obama Election, 2008
- Election Reaction Photos – Americans react to the election of African American Barack Obama as the 44th president of the U.S.
- Barack Obama – The official website of the President of the U.S. |
An ear examination is a thorough evaluation of the ears that is done to screen for ear problems, such as hearing loss, ear pain, discharge, lumps, or objects in the ear. An ear examination can detect problems in the ear canal, eardrum, and the middle ear, such as infection, excessive earwax, or an object like a bean or a bead.
During an ear examination, an instrument called an otoscope is used to look at the outer ear canal and eardrum. An otoscope is a handheld instrument with a light, a magnifying lens, and a funnel-shaped viewing piece with a narrow, pointed end called a speculum. A pneumatic otoscope has a rubber bulb that your health professional can squeeze to give a puff of air into the ear canal. This allows your health professional to see how the eardrum responds.
Why It Is Done
An ear examination may be done:
- As part of a routine physical examination.
- To screen babies and children for hearing loss.
- To determine the cause of symptoms such as earache, a feeling of pressure or fullness in the ear, or hearing loss.
- To check for excess wax buildup or an object in the ear canal.
- To detect the location of an ear infection. The infection may involve only the external ear canal (otitis externa) or the middle ear behind the eardrum (otitis media).
- To monitor the effectiveness of treatment for an ear problem.
How To Prepare
It is important to sit very still during an ear examination. A young child should be lying down with his or her head turned to the side or sitting on the lap of an adult with the child's head resting securely on the adult's chest. Older children and adults can sit with the head tilted slightly toward the opposite shoulder.
Your health professional may need to remove earwax in order to see the eardrum.
How It Is Done
An ear examination can be done in a health professional's office, a school, or the workplace.
For an ear examination, the health professional uses a special instrument called an otoscope to look into the ear canal and see the eardrum.
Your health professional will gently pull your ear back and slightly up to straighten the ear canal. If a baby under 12 months is being examined, the ear will be pulled downward and out to straighten the ear canal. The health professional will then insert the pointed end (speculum) of the otoscope into your ear and gently move the speculum through the middle of your ear canal to avoid irritating the canal lining. The health professional will look at each eardrum (tympanic membrane).
Using a pneumatic otoscope lets your health professional see what the eardrum looks like and how well it moves when the pressure inside the ear canal is changed. It helps your health professional determine if there is a problem with the eustachian tube or fluid behind the eardrum (otitis media with effusion). A normal eardrum will flex inward and outward in response to the changes in pressure.
How It Feels
The physical examination of the ear using an otoscope is usually painless. If you have an ear infection, inserting the otoscope into the ear canal may cause some pain or mild discomfort.
The pointed end of the otoscope can irritate the lining of the ear canal, but this can usually be avoided by inserting the otoscope slowly and carefully. If the otoscope does scrape the lining of the ear canal, it rarely causes bleeding or infection.
Results

Normal:
- Ear canals vary in size, shape, and color.
- The ear canal is skin-colored and lined with small hairs and usually some yellowish brown earwax.
- The eardrum is normally pearly white or light gray, and you can see through it. One of the tiny bones in the middle ear can also be seen.
- The eardrum moves slightly when a puff of air is blown into the ear.

Abnormal:
- Touching, wiggling, or pulling on your outer ear causes pain.
- The ear canal is red, tender, swollen, or filled with yellowish green pus.
- The eardrum is red and bulging, or looks dull and slightly pulled inward (retracted).
- Yellow, gray, or amber liquid or bubbles are seen behind the eardrum.
- There is a hole in the eardrum (perforation) or whitish scars on the surface of the drum.
- The eardrum does not move as it should when a puff of air is blown into the ear.
What Affects the Test
Reasons you may not be able to have the test or why the results may not be helpful include:
- Earwax, dirt, or an object such as a bean or a bead hiding or blocking the eardrum (tympanic membrane) in the ear canal. Your health professional may need to clean the ear canal before examining the ear.
- Crying. A small child who is upset or crying may have red eardrums. This redness may be confused with an ear infection.
- The inability of some children to sit still during the examination.
What To Think About
- Other types of tests may be used to examine the ear and evaluate hearing. These tests include:
- Acoustic immittance testing (tympanometry and acoustic reflex testing). This 2-minute to 3-minute test measures how well the middle ear relays sound. The soft tip of a small instrument is inserted into the ear canal and adjusted to achieve a tight seal. Sound and air pressure are then directed toward the eardrum. The test is not painful, but slight changes in pressure may be felt or the tone may be heard.
- Vestibular tests (falling and past-pointing tests). These tests can detect problems with areas of the inner ear that help control balance and coordination. During these tests, you will try to maintain your balance and coordination while moving your arms and legs in certain ways, standing on one foot, standing heel-to-toe, and performing other maneuvers with your eyes open and closed. Your health professional will make sure that you do not fall.
- If your child has repeat ear infections, your health professional may suggest that you buy a simple otoscope that is available for home use. For more information, see the topic Home Ear Examination.
Other Works Consulted
- American Academy of Pediatrics (2008). Recommendations for preventive pediatric health care. In Bright Futures: Guidelines for Health Supervision of Infants, Children, and Adolescents, 3rd ed., p. 591. Elk Grove Village, IL: American Academy of Pediatrics. Also available online: http://practice.aap.org/content.aspx?aid=1599&nodeID=4003.
Editor: Kathleen M. Ariss, MS
Associate Editor: Tracy Landauer
Primary Medical Reviewer: Kathleen Romito, MD - Family Medicine
Specialist Medical Reviewer: Donald R. Mintz, MD - Otolaryngology
Last Updated: April 22, 2009 |
Every day, children in class 1 and class 2 get time to check their work from the previous day and respond to the teachers' comments. We call this time Hammer Time.
At the beginning of the day, before the Maths session starts, children sit with their learning partner and address any mistakes or misconceptions they may have from the previous day's work. They may also have to answer a challenge question to develop or consolidate their knowledge. This enables children to assess their own work and to act as support for their partner.
Children are also now making their own 'Green Pen' comments at the end of their work. Here, they assess what they have done in the lesson (what they found easy/hard, why, and what their next steps are to either improve or progress). This is really helpful for them and for us. We can see how the children feel about their learning and it is another means of communication and assessment. |
But these resilient insects, now found in terrestrial ecosystems the world over, apparently began to diversify only about 100 million years ago in concert with the flowering plants, the scientists say.
"This study integrates numerous fossil records and a large molecular data set to infer the evolutionary radiation of ants, which have deeper roots than we thought," said Chuck Lydeard, program director in NSF's Division of Environmental Biology, which funded the research.
The study was also supported by the Green Fund.
Led by biologists Corrie Moreau and Naomi Pierce of Harvard University, the researchers reconstructed the ant family tree using DNA sequencing of six genes from 139 representative ant genera, encompassing 19 of 20 ant subfamilies around the world.
"Ants are a dominant feature of nearly all terrestrial ecosystems, and yet we know surprisingly little about their evolutionary history: the major groupings of ants, how they are related to each other, and when and how they arose," said Moreau. "We now have a clear picture of how this extraordinarily dominant - in ecological terms - and successful - in evolutionary terms - group of insects originated and diversified."
Moreau, Pierce and colleagues used a "molecular clock" calibrated with 43 fossils distributed throughout the ant family tree to date key events in the evolution of ants, providing a well-supported estimate for the age of modern lineages. Their conclusion that modern-day ants arose 140 to 168 million years ago pushes back the origin of ants at least 40 million years earlier than had previously been believed based on estimates from the fossil record.
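In its simplest, strict-clock form, the reasoning behind such dating can be written in one line; this is only a schematic of the idea, since the actual study combined many fossil calibration points with more sophisticated rate models across the tree:

```latex
% Strict molecular clock: two lineages that diverged t million years ago
% each accumulate substitutions at rate r (per site, per million years),
% so their pairwise genetic distance d grows as d = 2rt.
\[
  d \approx 2\,r\,t
  \qquad\Longrightarrow\qquad
  t \approx \frac{d}{2r}
\]
% Fossils of known age anchor r on particular branches, calibrating
% the divergence-time estimates for the rest of the tree.
```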
"Our results support the hypothesis that ants were able to capitalize on the ecological opportunities provided b |
Donatello (1386-1466) was a master of sculpture in bronze and marble and was one of the greatest Italian Renaissance artists of his time.
A lot is known about his life and career, but little is known about his character and personality. He never married and seems to have been a man of simple tastes. Patrons often found him hard to deal with, and he demanded a great deal of artistic freedom. The inscriptions and signatures on his works are among the earliest examples of classical Roman lettering. He had a more detailed knowledge of ancient sculpture than any other artist of his time. His work was inspired by ancient visual examples, which he often transformed; he was long viewed simply as a realist, but later research showed he was much more.
Early career. Donatello was the son of Niccolo di Betto Bardi, a Florentine wool carder. It is not known how he began his career, but he probably learned stone carving from one of the sculptors working for the cathedral of Florence around 1400. Some time between 1404 and 1407 he became a member of the workshop of Lorenzo Ghiberti, a sculptor in bronze. Donatello's earliest work was a marble statue of David. The "David" was originally made for the cathedral but was moved in 1416 to the Palazzo Vecchio, the city hall, where it long stood as a civic-patriotic symbol. From the sixteenth century on it was eclipsed by the gigantic "David" of Michelangelo, which served the same purpose. Other early works, still partly Gothic in style, are the impressive seated marble figure of St. John the Evangelist for the cathedral and a wooden crucifix in the church of Sta. Croce.
The full power of Donatello first appeared in two marble statues, "St. Mark" and "St. George," completed in 1415. ("St. George" has since been replaced by a copy; the original is now in the Bargello.) For the first time the human body is rendered as a functional organism. The same qualities appeared in the series of five prophet statues that Donatello began in 1416: beardless and bearded prophets, a group of Abraham and Isaac (1416-1421), and the "Zuccone" and "Jeremiah." The "Zuccone" is famous as the finest of the campanile statues and one of the artist's masterpieces.
Donatello invented his own bold new mode of relief in his marble panel "St. George Killing the Dragon" (1416-1417). The technique involved shallow carving throughout, which created a more striking effect than in his earlier works. He no longer modelled his shapes but seemed to "paint" them with his chisel.
Donatello continued to explore the possibilities of the new technique in his marble reliefs of the 1420s and early 1430s. The best of these were "The Ascension, with Christ Giving the Keys to St. Peter" and the "Feast of Herod" (1433-1435); the large stucco roundels with scenes from the life of St. John the Evangelist (1434-1437) in the dome of the old sacristy of S. Lorenzo show the same technique, but with colour added.
Donatello had also become a major sculptor in bronze. His earliest such work was the more-than-life-size statue of St. Louis (1423), which was replaced half a century later. In partnership with Michelozzo, Donatello produced the fine bronze effigy on the tomb of Pope John XXIII in the baptistery, the "Assumption of the Virgin" on the Brancacci tomb, and the dancing angels on the outdoor pulpit of Prato Cathedral (1433-1438). His departure from the standards of Brunelleschi did not go over well with his old friend, and the rift between the two was never repaired; Brunelleschi even made epigrams against Donatello.
During his partnership with Michelozzo, Donatello also made works of pure sculpture, including several in bronze. The earliest and most important of these was the "Feast of Herod" (1423-1427). He also made two statuettes of Virtues and three nude child angels (one of which was stolen and is now in the Berlin museum). These statues prepared the way for the bronze sta...
|
February 23, 2005
The whole sky is filled with a diffuse, high-energy glow: the cosmic X-ray background. In recent years, astronomers have shown that this radiation can be attributed almost entirely to individual objects, much as Galileo Galilei, at the beginning of the 17th century, resolved the light of the Milky Way into individual stars. The X-ray background originates in hundreds of millions of supermassive Black Holes feeding on matter in the centres of distant galaxy systems. Because these Black Holes are accreting mass, we observe them in the X-ray background during their growth phase. In today's Universe, massive Black Holes are found in the centres of practically all nearby galaxies.
When matter rushes down the abyss of a Black Hole, it speeds around the cosmic maelstrom at almost the velocity of light and is heated so strongly that it emits a "last cry for help" in the form of high-energy radiation before it vanishes forever. The putatively invisible Black Holes are therefore among the most luminous objects in the Universe, provided they are well fed in the centres of so-called active galaxies. The chemical elements in the infalling matter emit X-rays of characteristic wavelengths and can therefore be identified through their spectral fingerprints. Atoms of the element iron are a particularly useful diagnostic tool, because this metal is the most abundant of the heavy elements in the cosmos and radiates most intensely at high temperatures.
In a way similar to the radar traps with which police identify speeding cars, the relativistic speeds of iron atoms circling the Black Hole can be measured through the shift in wavelength of their light. Through a combination of the effects predicted by Einstein's special and general theories of relativity, a characteristically broadened, asymmetric line profile, i.e. a smeared fingerprint, is expected in the X-ray light of Black Holes. Special relativity postulates that moving clocks run slow, and general relativity predicts that clocks run slow in the vicinity of large masses. Both effects shift the light emitted by the iron atoms towards the longer-wavelength part of the electromagnetic spectrum. However, if we observe the matter circling in the so-called "accretion disk" (Fig. 1) from the side, the light from atoms racing towards us appears shifted to shorter wavelengths and much brighter than the light from atoms moving away from us. The closer the matter gets to the Black Hole, the stronger these relativistic effects become; because of the curved spacetime, they are strongest around fast-rotating Black Holes. In the past years, measurements of relativistic iron lines have been possible in a few nearby galaxies; the first was made in 1995 with the Japanese ASCA satellite.
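To give a feel for the size of the general-relativistic contribution alone, the shift of a line emitted at rest at radius r outside a non-rotating black hole follows directly from the Schwarzschild solution; this sketch ignores the disk's orbital Doppler motion, which adds the bright blue wing described above:

```latex
% Gravitational redshift of the iron K-alpha line (rest energy 6.4 keV)
% for an emitter at rest at radius r around a non-rotating black hole:
\[
  E_{\mathrm{obs}} = E_{\mathrm{rest}}\sqrt{1 - \frac{r_s}{r}},
  \qquad r_s = \frac{2GM}{c^2}
\]
% At r = 2 r_s the 6.4 keV line is already dragged down to about 4.5 keV;
% emission from still smaller radii builds the extended red wing.
```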
Now researchers around Günther Hasinger of the Max Planck Institute for Extraterrestrial Physics, together with the group of Xavier Barcons at the Spanish Instituto de Física de Cantabria in Santander and Andy Fabian at the Institute of Astronomy in Cambridge, UK, have uncovered the relativistically smeared fingerprint of iron atoms in the averaged X-ray light of about 100 distant Black Holes of the X-ray background (Fig. 2). The astrophysicists used the X-ray observatory XMM-Newton of the European Space Agency ESA. They pointed the instrument at a field in the Big Dipper constellation for more than 500 hours and discovered several hundred faint X-ray sources.
Because of the expansion of the Universe, the galaxies move away from us with a speed that increases with their distance, so their spectral lines all appear at different wavelengths; the astronomers therefore first had to correct the X-ray light of all the objects into the rest frame of the Milky Way. The necessary distance measurements for more than 100 objects were obtained with the American Keck Telescope. After co-adding the light from all the objects, the researchers were surprised by the unexpectedly strong signal and the characteristically broadened shape of the iron line.
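The correct-and-co-add step is conceptually simple; here is a minimal numpy sketch with hypothetical array names, not the team's actual pipeline, which works on counts spectra folded through the instrument response:

```python
import numpy as np

def stack_rest_frame(spectra, rest_grid):
    """Shift each observed X-ray spectrum to its rest frame, then average.

    spectra:   list of (energies_keV, fluxes, redshift) tuples, one per source
    rest_grid: 1-D array of rest-frame energies to interpolate onto
    """
    stacked = np.zeros_like(rest_grid)
    for energies, fluxes, z in spectra:
        rest_energies = energies * (1.0 + z)   # observed -> rest-frame energy
        stacked += np.interp(rest_grid, rest_energies, fluxes)
    return stacked / len(spectra)              # average spectrum of the sample
```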
From the strength of the signal they deduced the fraction of iron atoms in the accreted matter. Surprisingly, the chemical abundance of iron in the "nutrition" of these relatively young Black Holes is about three times higher than in our Solar System, which formed significantly later. The centres of galaxies in the early Universe must therefore have had a particularly efficient way of producing iron, possibly because violent star-forming activity "breeds" the chemical elements rather quickly in active galaxies. The width of the line indicates that the iron atoms must radiate rather close to the Black Hole, consistent with rapidly spinning Black Holes. This conclusion has also been reached indirectly by other groups, who compared the energy in the X-ray background with the total mass of "dormant" Black Holes in nearby galaxies. |
This week we look at the last of three standards under “Key Ideas and Details” in the “Reading” section of the Common Core:
Standard 3: Analyze how complex characters (e.g., those with multiple or conflicting motivations) develop over the course of a text, interact with other characters, and advance the plot or develop the theme.
For students to understand how a character develops over the course of a text and how his or her interactions impact plot and theme, they first need to know where to begin. A good place to start is to help them get into the mind of the character. The idea is to help students take a character and, to borrow a quote from Atticus Finch, “climb into his skin and walk around in it.” The internet offers many ways to facilitate this creatively with technology. Here are a few ideas and sites that might work for you and for your class:
Profile Page—Have students create a “Facebook” profile page for a character. If your school blocks social networking sites, try using a profile page creating tool like the one at Read Write Think or use a template from Microsoft and have students post it on your class blog or send it to you attached to an email. For more information about these ideas, check out my post from 4/26/11.
Trading Cards—Have students create character "trading cards" at ReadWriteThink.org. This one might sound too immature for high school students, but you might be surprised by how much they enjoy it. The key is to challenge them to be as creative as possible in their design.
Newspaper Interview—Once your students have successfully delved into the minds of their characters, you can create more complex assignments having them analyze actions, words, and motives. Although this is a good opportunity for traditional essay writing, you might also consider having them write a newspaper feature article with an interview of their character. Writesite.org is a good place for introducing your students to journalism.
However you decide to use technology to get your students inside the heads of their characters, I'm sure websites and tools like these can enrich the experience and elevate the quality of your outcomes. |
After the success of the Daguerreotype, and as the interest in capturing and preserving images continuously rose, man sought to develop a way to produce multiple copies of the same image. And thus, the collodion process was born.
The collodion process, more commonly known as the wet plate process, was invented in 1851 by Frederic Scott Archer. It is a simple process of coating a clean glass plate with a special mixture of bromide, iodide, or chloride dissolved in collodion, and then placing the plate in a silver nitrate bath. The image is exposed upon the plate while it is wet, and then immediately set to develop.
The process proved to be a bit of a challenge at first, given that everything, from coating to developing, had to be done before the glass plate dried. But this constraint also became an advantage, because it made the collodion process much quicker than making a daguerreotype. And since the image produced on the glass plate is a negative, it allowed people to create multiple copies of the same image.
These very reasons allowed the collodion process to completely replace the daguerreotype as the photographic process of choice by the end of the 1850s. |
Science Fair Project Encyclopedia
- This article is about the scientific study of extraterrestrial life; for its treatment in popular culture, see Extraterrestrial life in popular culture.
Extraterrestrial life refers to forms of life that may exist and originate outside of the planet Earth. Extraterrestrial life is currently a hypothetical notion - there is as yet no evidence of the existence of extraterrestrial life that has been widely accepted by scientists.
Possible basis and origins of extraterrestrial life
All life on earth is based on carbon and water, and this could also be true of other life forms elsewhere in the universe. However, many people believe that elements other than carbon might be capable of providing a basis for life (See also: Carbon chauvinism). Silicon is usually considered the most likely alternative, though still improbable. Ammonia-based lifeforms are also considered, though less frequently.
The scientific study of the possible biochemical basis for extraterrestrial life is often called xenobiology.
Most scientists hold that if extraterrestrial life exists, its evolution would have occurred independently in different places in the universe. An alternative hypothesis, held by a minority, is panspermia, which suggests that life in the universe could have stemmed from a single initial distribution of spores which provide the basis for living beings to develop. If true, this theory would suggest that life in various forms may exist throughout the universe.
Silicon-based life is regarded as improbable by most scientists. Superficially, the chemistries of carbon and silicon are similar; just as carbon can form methane (CH4), silicon can form silane (SiH4), and both elements can form long chains of polymers.
But silicon's affinity for oxygen means that it cannot easily be used for respiration. Whereas CO2 is a gas that can easily be removed from the organism, SiO2 is a solid that will instantly organize itself into lattices, making it hard to dispose of. On top of that, silicon fails to give rise to many compounds that exhibit chirality, which is a common feature of carbon-based molecules that are essential to the proper functioning of enzymes.
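The combustion analogues make the contrast concrete: oxidizing the carbon compound yields a gas that an organism could exhale, while oxidizing its silicon counterpart yields a solid.

```latex
% Methane burns to gaseous CO2; silane burns to solid SiO2 (silica).
\[
  \mathrm{CH_4} + 2\,\mathrm{O_2} \;\longrightarrow\; \mathrm{CO_2}\,(\text{gas}) + 2\,\mathrm{H_2O}
\]
\[
  \mathrm{SiH_4} + 2\,\mathrm{O_2} \;\longrightarrow\; \mathrm{SiO_2}\,(\text{solid}) + 2\,\mathrm{H_2O}
\]
```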
There is also astronomical evidence to suggest that silicon-based life is unlikely. Wherever astronomers have looked, they have failed to find the simplest precursors to silicon-based biochemistry. Complex carbon-based compounds are abundant in space, but in the case of silicon, most of what we have observed in space are simple oxides of silicon, with no record of more complex molecules such as silanes and silicones.
Most life on Earth is based on water and its numerous chemical properties, and indeed a large portion of modern chemistry is devoted to the study of aqueous solutions. However, numerous chemical reactions are possible in an ammonia solution, and liquid ammonia has some chemical similarities with water. Ammonia can dissolve most organic molecules at least as well as water does, and in addition it is capable of dissolving many elemental metals. Given this set of chemical properties it has been theorized that ammonia-based life forms might be possible.
On the other hand, ammonia does have some problems as a basis for life. The heat of vaporization of ammonia is half that of water and its surface tension three times smaller. This means that hydrogen bonds between ammonia molecules will always be much weaker than those in water, so ammonia is less able to concentrate non-polar molecules through a hydrophobic effect. Lacking this ability, mainstream science questions how well ammonia could hold prebiotic molecules together in order to allow the emergence of a self-reproducing system.
Beliefs in extraterrestrial life
Belief in extraterrestrial life may have been present in ancient Egypt, Babylon and Sumer. The first important Western thinkers to hit upon the idea of inhabited worlds were the ancient Greek philosopher Thales and his student Anaximander in the 7th century BC. The atomists of Greece took up the idea, arguing that an infinite universe ought to have an infinity of populated worlds. The cosmology of Aristotle (which placed the Earth at the center of the universe) seemed to work against the idea of extraterrestrial life, and when Christianity spread through the West the idea became a heresy. The best known pre-modern proponent of extra-solar planets and widespread life off Earth was Giordano Bruno, who was burned at the stake for this and other unorthodox ideas in 1600.
At present, some enthusiasts in the topic believe that extraterrestrial beings regularly visit or have visited the Earth. Some think that unidentified flying objects observed in the skies are in fact sightings of the spacecraft of intelligent extraterrestrials, and even claim to have met such beings. Some also attribute crop circle patterns to the action of extraterrestrials.
While at least one recent scientific paper published in a respected, peer-reviewed journal has urged a reevaluation of the UFO phenomenon (Deardorff et al., 2005), as of this time mainstream scientific opinion holds that such claims are unsupportable by the evidence currently available and unlikely to be true.
The possible existence of primitive life outside of Earth is much less controversial to mainstream scientists, although at present no direct evidence of such extraterrestrial life has been found. Indirect evidence has been offered for the current existence of primitive life on the planet Mars; however, the conclusions that should be drawn from such evidence remain in debate.
Scientific search for extraterrestrial life
The scientific search for extraterrestrial life is being carried out in two very different ways, directly and indirectly.
Scientists are directly searching for evidence of unicellular life within the solar system, carrying out studies on the surface of Mars and examining meteors which have fallen to Earth. A mission is also proposed to Europa, one of Jupiter's moons with a liquid water layer under its surface, which might contain life.
There is some limited evidence that microbial life might possibly exist or have existed on Mars. An experiment on the Viking Mars lander reported gas emissions from heated Martian soil that some argue are consistent with the presence of microbes. However, the lack of corroborating evidence from other experiments on the Viking indicates that a non-biological reaction is a more likely hypothesis. And independently, in 1996, structures resembling bacteria were reportedly discovered in a meteorite known to be formed of rock ejected from Mars. Again, this report is vigorously disputed.
In February 2005, two NASA scientists reported that they had found strong evidence of present life on Mars (Berger, 2005). The two scientists, Carol Stoker and Larry Lemke of NASA’s Ames Research Center, based their claims on methane signatures found in Mars’ atmosphere that resemble the methane production of some forms of primitive life on Earth, as well as their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon denied the scientists’ claims, and Stoker herself backed off from her initial assertions (spacetoday.net, 2005). However, only a few days after Stoker and Lemke made their claims, scientists from the European Space Agency reported that their own measurements of methane on Mars suggested an organic origin (Michelson, 2005).
Though such findings are still very much in debate, support among scientists for the belief in the existence of life on Mars seems to be growing. In an informal survey of scientists attending the conference at which the European Space Agency presented its findings, 75 percent of the scientists at the conference reported to believe that life once existed on Mars; 25 percent reported a belief that life currently exists there (Michelson, 2005).
Extraterrestrial life in the Solar System
The bodies in the solar system most often suggested as likely to harbor extraterrestrial life are listed below in descending order of likelihood. Several are moons suggested to have large bodies of liquid, where life may have evolved in a fashion similar to that around deep-sea vents.
- Mars - The best known, and most earthlike of the other planets and moons in the Solar system.
- Titan - Only known moon with an atmosphere. Recently visited by the Huygens probe. May have ocean.
- Europa - May have ocean.
- Enceladus - May have liquid water beneath surface.
Numerous other bodies have been suggested, including hypothesised atmospheric life on Venus and the gas giants. Fred Hoyle also proposed microbial life may exist on comets. Some Earth microbes also managed to survive on a lunar probe for some years. It is considered highly unlikely that complex multicellular organisms exist in any of these places.
It is theorised that any technological society in space will be transmitting information. Projects such as SETI are conducting an astronomical search for radio activity that would confirm the presence of intelligent life.
Astronomers also search for extrasolar planets that would be conducive to life. Current radio-detection methods have been inadequate for such a search, as the resolution afforded by recent technology is insufficient for detailed study of extrasolar planetary objects. Future telescopes should be able to image planets around nearby stars, which may reveal the presence of life (either directly or through spectrography, which would reveal key information such as the presence of free oxygen in a planet's atmosphere). It has been argued that one of the best candidates for the discovery of life-supporting planets may be Alpha Centauri, the closest star system to Earth.
Dealing with extraterrestrial life
If intelligent extraterrestrial life is found and we are able to communicate with it, the people of the world and their governments will need to determine how to manage those interactions. The development of policy guidelines for dealing with extraterrestrial beings and territory has been considered by authors such as Michael Salla and Alfred Webre and termed exopolitics.
- Anomalous phenomenon
- Drake equation
- Fermi paradox
- first contact
- Scientific skepticism
- Exopolitics.com by Alfred Webre
- Silicon-based life by David Darling
- Ammonia-based life by David Darling
- PBS: Life Beyond Earth a film by Timothy Ferris
- Berger, Brian (2005). Exclusive: NASA Researchers Claim Evidence of Present Life on Mars. Posted Feb. 16, 2005.
- spacetoday.net (2005). NASA denies Mars life reports. Posted Feb 19, 2005.
- Michelson, Marcel (2005). European Scientists Believe in Life on Mars. Posted Feb 25, 2005.
- John C. Baird. 1987. The Inner Limits of Outer Space: A Psychologist Critiques Our Efforts to Communicate With Extraterrestrial Beings. Hanover: University Press of New England. ISBN 0-87451-406-1
- Donald Goldsmith. 1997. The Hunt for Life on Mars. New York: A Dutton Book. ISBN 0525943366
- Michael T. Lemnick. 1998. Other Worlds: The Search for Life in the Universe. New York: A Touchstone Book.
- Cliff Pickover. 2003. The Science of Aliens. New York: Basic Books. ISBN 0-465-07315-8
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. |
Photograph by Stefan Keller, Reuters
Published November 21, 2012
Mount Tongariro, situated in a remote part of New Zealand's North Island, erupted for five minutes on November 21, spewing clouds of ash 2.5 miles (4 kilometers) high. In August, the 6,490-foot (1,978-meter) Tongariro had erupted for the first time since 1897.
Though the recent activity seems to have ebbed, scientists have predicted another eruption of similar size will occur in the next few weeks, according to the New Zealand Herald.
Several flights were canceled on New Zealand's North Island. Previous eruptions—notably of Iceland's Eyjafjallajökull volcano in 2010—have crippled air travel on a global scale.
"Very often volcanic ash contains microscopic fragments of volcanic glass ... and the turbine engines of commercial aircraft produce a level of heat sufficient to melt glass," Steven Miller, of the Cooperative Institute for Research in the Atmosphere at Colorado State University, told NASA's Earth Observatory in August.
"The ash can melt onto [airplane] turbine blades and other parts of the engine, causing damage and even engine stalls. It also presents hazards to pilot visibility, causing pits and frosting on the windshields in the same way that a sandstorm damages an automobile windshield."
These six scientists were snubbed for awards or robbed of credit for discoveries … because they were women.
Scientists can control the self-assembly of molecules to build nano-size flowers in the lab, a new study says.
Global warming is causing more extreme weather. But when it comes to tornadoes, it could go either way. |
What Is a Codec, and How Does It Work?
A codec is an algorithm or program, one quite often embedded in a piece of hardware such as an IP phone or an ATA (analogue telephone adapter). In the case of VOIP, a codec converts voice signals into digital data to be transmitted over a network or the internet when a VOIP call is made. The term "codec" itself is a blend of two words: coder/decoder, or compressor/decompressor. Codecs are essentially used for encoding/decoding, compression/decompression, and encryption/decryption. Let us now understand how this works.
Knowing about Encoding/Decoding
Normally, when we speak over a regular PSTN phone, our voice is transmitted in analogue form over the phone network. When you talk over VOIP, however, your voice is converted into digital signals. This conversion is known as encoding, and only a codec can achieve it. Once the encoding is done, the data is transmitted to its destination, where it is decoded back into analogue form so that the person receiving the call can hear and understand it clearly.
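As a toy illustration of the encode/decode step, here is μ-law companding, the scheme behind the common G.711 telephony codec, sketched in Python; production implementations work on 16-bit integer samples with segment-based lookup tables rather than floating-point math:

```python
import math

MU = 255  # mu-law parameter used in North American and Japanese telephony

def encode(sample: float) -> int:
    """Compress one PCM sample in [-1.0, 1.0] to an 8-bit mu-law value."""
    sign = 1 if sample >= 0 else -1
    magnitude = math.log1p(MU * abs(sample)) / math.log1p(MU)
    return int((sign * magnitude + 1) / 2 * 255)  # map [-1, 1] onto [0, 255]

def decode(byte: int) -> float:
    """Expand an 8-bit mu-law value back to an approximate PCM sample."""
    value = (byte / 255) * 2 - 1                  # map [0, 255] back to [-1, 1]
    sign = 1 if value >= 0 else -1
    return sign * ((1 + MU) ** abs(value) - 1) / MU

# Round trip: quiet samples survive with only a small quantization error.
assert abs(decode(encode(0.1)) - 0.1) < 0.01
```

The logarithmic curve spends most of the 8-bit range on quiet samples, which is where the human ear is most sensitive.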
The Compression Stage
With bandwidth scarce, most companies are looking for ways to send more data at a time and enhance overall performance. One way of doing this is to make the data you send lighter, and this is where a codec is used to compress the digital data and make it less bulky. Compression is a fairly complex process in which large amounts of data are packed into fewer digital bits. Normally, during the compression stage, the digital data is arranged into a packet or structure suited to the compression algorithm. The compressed data is transmitted through the network and, upon reaching its intended destination, is decompressed so that it can be decoded. In many instances, however, the data need not be decompressed all the way back to its original state, since it is already in a usable form.
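The round trip described above can be demonstrated with a general-purpose lossless compressor from the standard library; this is only a stand-in, since real voice codecs such as G.729 use lossy models of speech rather than zlib:

```python
import zlib

payload = b"voice frame " * 100      # repetitive stand-in for packet data
compressed = zlib.compress(payload)  # "make the data lighter" before sending
restored = zlib.decompress(compressed)

assert restored == payload           # lossless round trip
print(len(payload), "->", len(compressed), "bytes on the wire")
```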
Now, let us see how a codec works when it is used for encryption. If you want the highest possible security for your 0300 numbers, 0800 numbers, 0845 numbers, or any other freephone numbers, encryption is probably the best tool available today. In the encryption process the data is converted into an indecipherable state, so that even if it is intercepted by an unauthorized party, what you send remains confidential. Only upon reaching its intended destination can the data be decrypted into its original form.
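A generic symmetric encrypt/decrypt round trip looks like the sketch below; real VoIP deployments typically protect the media stream with SRTP, so treat this purely as an illustration of the stage described above:

```python
# Requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret; both endpoints must hold it
cipher = Fernet(key)

token = cipher.encrypt(b"signalling and voice payload")  # indecipherable in transit
assert cipher.decrypt(token) == b"signalling and voice payload"
```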
Finally, a codec is an excellent option to transmit your data in a safe and secure manner. |
Scientists at Harvard's Wyss Institute have managed to recreate a pulmonary edema (a build-up of fluid in the lungs) inside a lung-on-a-chip. The team used techniques similar to those developed for microchip manufacturing to build the mechanical structure of a lung before lining it with human tissues. Air is passed through one side of the lung, while a liquid solution containing white blood cells mimics blood on the other side.
With a lung-on-a-chip, scientists are able to conduct research that would otherwise be dangerous, expensive, or ethically questionable. In earlier studies, the team showed that the white blood cells would attack any bacteria introduced into the lung (as demonstrated in the video below). The latest achievement involved inducing fluid build-up in the lung and then alleviating it with a new drug from GlaxoSmithKline. The drug was later tested on animals, validating the results.
Creating a human-on-a-chip is the goal
In the future, organs-on-a-chip may negate the need for animal trials altogether. One issue with the current research is that the organs are treated in isolation, so scientists are unable to gauge if there are any negative effects on other parts of the body, or undertake research into diseases that affect multiple organs. That's where the concept of a human-on-a-chip comes in. By emulating all the vital organs in an interconnected system, scientists can test immune response as well as the efficiency of intravenous, respiratory, and oral drugs. They've already successfully simulated kidneys, hearts, lungs, blood vessels, and parts of the digestive tract, meaning we're not too far off from putting it all together. |
There are two formulas that are important to remember when considering vectors or positions in the 3D coordinate system: the midpoint formula and the distance formula. Both can be derived by adding geometric representations of vectors, so to understand the derivation of the distance formula in 3D we must first understand 3D vector operations.
I want to derive the midpoint formula for 3 dimensions. The midpoint formula is going to help me find the midpoint between point a, which has coordinates (x1, y1, z1), and point b, which has coordinates (x2, y2, z2). So I have segment ab drawn here, and I've labeled my midpoint m, and I'm hoping to find the formula for its coordinates. I've also added a position vector oa for point a and a position vector om for point m. Now let's find the components of position vector oa, and let's recall that the components of a position vector are exactly the coordinates of the endpoint of that vector, so they are going to be x1, y1, z1. I'm also going to need vector ab in order to find m. What are the components of ab? Well, since vector ab goes from point a to point b, the components are x2-x1, y2-y1 and z2-z1. Okay, how are we going to get position vector om from oa and ab?
Well, let's make the observation that the vector that starts at point a and ends at m is half of the vector that goes from a to b. So this vector, starting here and ending here, is the scalar multiple one half of ab, and I need to add that to oa to get om. So vector om = oa plus one half ab, and in components that's going to be x1, y1, z1 plus one half of x2-x1, y2-y1 and z2-z1. Let's see if we can combine this in a single step. For the first component I'm going to get x1 plus a half x2 minus a half x1, which is a half x1 plus a half x2. Next I'll get y1 plus a half y2 minus a half y1, that's one half y1 plus one half y2, and similarly I get one half z1 plus one half z2.
Each of these is exactly the average of the corresponding components of the two points, so I can write it as x1+x2 over 2, y1+y2 over 2 and z1+z2 over 2. These are the components of vector om, which goes from the origin to point m, and therefore they are also the coordinates of point m. So the midpoint m of the segment joining (x1, y1, z1) and (x2, y2, z2) is ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2), and that's the midpoint formula.
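For reference, here is the midpoint formula just derived, together with the 3D distance formula mentioned at the start; the distance is simply the length of vector ab, i.e. the Pythagorean theorem applied to its three components.

```latex
% Midpoint of the segment from a(x_1, y_1, z_1) to b(x_2, y_2, z_2):
\[
  m = \left( \frac{x_1 + x_2}{2},\; \frac{y_1 + y_2}{2},\; \frac{z_1 + z_2}{2} \right)
\]
% Distance between a and b (the length of vector ab):
\[
  d(a, b) = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}
\]
```
|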
International Ice Patrol in 88th year
The Titanic disaster spurred many maritime nations to examine the safety of their vessels on the open ocean. Since the Safety of Life at Sea (SOLAS) treaty went into effect on July 1, 1915, the U.S. government (through the Coast Guard, and before that the Revenue Cutter Service) has performed the operational duties of the International Ice Patrol with funding from the international signatories. Since then, patrols have been conducted each year, with the exception of a brief period during World War II.
The region known as the Grand Banks of Newfoundland is of particular interest for several reasons. First, the great circle route connecting the U.S. and Canada with Europe crosses right through this area. This means that a high volume of merchant vessels must cross this treacherous region. Second, the Grand Banks is home to very productive fishing grounds, which makes it especially attractive to commercial fishermen; this only serves to compound the high traffic density. Finally, the adverse environmental conditions (high winds, rough seas, and dense fog) make this locale even more dangerous.
Probably the most important environmental factor to consider is the dense fog that often occurs on and near the Grand Banks. This occurs when the southern flow of the Labrador Current joins the warm Gulf Stream waters at the tail of the Banks. As warm Gulf Stream air flows over the cold Labrador Current, an advection fog forms that can last for many days. This dense blanket of fog severely limits visibility and restricts a vessel's ability to maneuver. Furthermore, the upper-level jet stream frequently flows right over this region. As a result, low-pressure mid-latitude systems often move through, bringing severe weather with high winds and large seas.
The oceanographic structures in this region also contribute to the danger around the Grand Banks of Newfoundland. The principal contributors are the Labrador Current and the bathymetry. The Labrador Current is the main ocean current responsible for transporting icebergs into this region. It is a relatively fast-moving current that stays cold enough to carry icebergs all the way from the Labrador Sea and Baffin Bay to southern temperate waters. In fact, Titanic sank at the latitude of Providence, R.I. The bathymetry is also responsible for the transport of icebergs, but it has more impact on where the icebergs flow than on how fast.
Because the majority of an iceberg's mass lies below the water, its track is largely governed by sub-surface currents. Currents like the Labrador often follow the 1,000-meter depth curve, with the result that icebergs commonly track through the gap between the Grand Banks and the Flemish Cap, affectionately termed "iceberg alley."
Due to the constant dangers in this area, the International Ice Patrol (IIP), operating out of Groton, Conn., maintains an ever-vigilant watch over the North Atlantic and reports the Limit of All Known Ice (LAKI) for the Grand Banks of Newfoundland and the surrounding area. Seasonal patrol dates have remained largely unchanged from year to year. Reconnaissance usually begins in late February and continues through July, but the exact dates vary from year to year as dictated by the distribution of icebergs. The longest season on record was in 1992, which lasted from February 7 to September 26, for a total of 203 days. Conversely, in 1999 the season never opened due to the fact that most icebergs were pushed west rather than south. Except during extreme years, the Grand Banks are generally clear of ice from August to February with the exception of a few stray icebergs.
Today the International Ice Patrol uses HC-130H Hercules aircraft, which can cover more than 2,000 nautical miles and fly for more than 12 hours. The patrol uses planes out of Elizabeth City, N.C., that are equipped with forward- and side-looking airborne radar for iceberg detection. Each flight searches an average of 30,000 square miles of ocean. Visual observations are conducted when conditions allow, but because of low cloud ceilings and the dense fog described earlier, good visibility exists only about 30 percent of the time. During the main ice season, IIP's reconnaissance detachments deploy their aircraft to St. John's, Newfoundland. |
When NASA launched the MErcury Surface, Space ENvironment, GEochemistry and Ranging satellite (or MESSENGER, demonstrating some truly hamfisted government acronym crafting) in 2004, Mercury had a pretty low profile in planetary science. For years, the innermost planet was assumed to lead a quiet, boring existence and was thought to be similar to our own moon. But after a mere six months in orbit around the planet, MESSENGER has produced enough information for a whopping seven papers published in the journal Science that soundly dash those hum-drum expectations.
With this new information, the MESSENGER probe is forcing planetary scientists to reassess their assumptions about the planet’s volcanic history, geological processes, magnetic field, and overall composition. The principal investigator behind the project, Sean Solomon of the Carnegie Institution for Science, described it this way: "In the history of exploration of our planetary system, the first spacecraft to orbit a planet has always yielded stunning surprises, and MESSENGER has been true to that pattern."
The first high-resolution imaging of Mercury has yielded several surprises. Scientists found that the planet has a volcanically active past, with flood zones of dried lava covering 6% of the planet’s surface in huge, smooth plains. Most of the lava, which is tremendously thick in some areas, seems to have come from a series of 16-mile-long volcanic vents. It’s possible that this kind of volcanic activity could be analogous to the formation of our own planet, perhaps giving insight into a period of Earth’s history that is now difficult to investigate.
Imaging also shed new light on curious "hollows," depressions on the planet’s surface that appear brighter than the surrounding terrain and cyan in color. The hollows had been observed in previous flybys, but this is the first time scientists have had a chance to see them in such detail. Surprisingly, the hollows appear to be not only common across the planet, but in some cases relatively young. This raises the possibility that they were created by a previously unknown geological process that is still ongoing on Mercury.
Additionally, scientists were able to get a better idea of the composition of the planet and found far more potassium and sulfur than previously assumed. This is significant since these elements vaporize at relatively low temperatures, ruling out the hot and cataclysmic past that many models of the planet’s formation had called for in order to explain Mercury’s unusually high density.
Lastly, data from the MESSENGER probe gave researchers their first close-up view of the planet’s magnetic field, which appears to be unique in the solar system. The planet’s weak magnetic field does not create Van Allen-style radiation belts in the manner of other planets, and its magnetic equator lies much further north than expected. The data also presented evidence of sodium auroras on the planet. From NASA:
The team found that sodium is the most important ion contributed by the planet. “We had previously observed neutral sodium from ground observations, but up close we’ve discovered that charged sodium particles are concentrated near Mercury’s polar regions where they are likely liberated by solar wind ion sputtering, effectively knocking sodium atoms off Mercury’s surface,” notes the University of Michigan’s Thomas Zurbuchen, author of one of the Science reports. “We were able to observe the formation process of these ions, one that is comparable to the manner by which auroras are generated in the Earth atmosphere near polar regions.”
Data from the planet also revealed helium ions in the magnetosphere, presumably created by interaction with the solar wind. The scientists also note that their observations have confirmed that the planet’s weak magnetic field provides little protection against the damaging effects of solar wind and extreme space weather.
Though currently seven years into its mission, the probe has only just begun studying Mercury. Given the incredible information pouring back from the probe already, we can only imagine what the coming months and years will tell us about the surprising planet closest to our sun. |
Test run for the arrival of the Dawn spacecraft in the asteroid belt
What might asteroid Vesta look like? In a new animation, researchers at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) have recreated the asteroid in 3D. In the animation, the asteroid is irregularly shaped, has a slight indentation at its South Pole and numerous impact craters. In July 2011, after a four year journey, NASA's Dawn spacecraft will reach the asteroid, which circles the Sun in the main asteroid belt between the orbits of Mars and Jupiter. This will be like taking a journey into the past because Vesta is a celestial body that has not changed much since the formation of the Solar System.
"It will be the first time that we get so close to such an ancient celestial body," says Ralf Jaumann from the DLR Institute of Planetary Research. "With Vesta, we have the opportunity to learn what happened when the planets were first formed from a cloud of dust." The asteroid was discovered by German astronomer Heinrich Olbers on 29 March 1807. Spectral measurements performed using ground-based telescopes suggest that the celestial body could consist of a firm crust of rocks of various compositions, a mantle and a core - the same as the Earth-like planets. Shortly after its formation 4.6 billion years ago, Vesta is suspected to have been completely molten. In the following 50 million years, the asteroid cooled down and the rocks separated according to their various densities, causing the heavier material to move towards the interior. "After this process, however, not much more happened on Vesta," explains Jaumann, a planetary geologist.
Pieces of asteroids found on Earth
Principally in the Sahara and the Antarctic, explorers have come across meteorites whose chemical compositions match the components of Vesta. This is what the spectral analyses of the meteorites and of Vesta suggest. "We are fairly sure that we have samples of Vesta here on Earth," says Jaumann. Planetary research scientists believe that, at some time in the past, another asteroid collided with Vesta, resulting in a 13-kilometre-deep crater on Vesta along with 50 new small asteroids, with numerous tiny fragments finding their way to Earth. So far, of the multitude of meteorites found on Earth, only a few can be classified as belonging to the Moon, Mars and Vesta; the origin of others remains uncertain. The fact that some samples can clearly be classified as originating from Vesta is a stroke of luck for Solar System research.
Bulges and indentations: a picture of Vesta
Vesta is particularly exciting for planetary researchers because it has changed little since its formation and has also spread its material as far as Earth. This is why NASA's Dawn spacecraft, powered by ion propulsion, is carrying three different instruments to the main asteroid belt between Mars and Jupiter. Alongside a mapping spectrometer from the Italian space agency (Agenzia Spaziale Italiana; ASI) and a gamma ray and neutron detector built by the Los Alamos National Laboratory is a German-built camera system, referred to as a 'framing camera'. In August, this camera will obtain images of the asteroid from orbit at a distance of about 2400 kilometres, returning data that will be processed at the DLR Institute of Planetary Research to produce a preliminary 3D terrain model. "Then we will slowly spiral down to an altitude of 660 kilometres," says DLR researcher Thomas Roatsch, responsible for planning and processing the 3D images of Vesta. "From there, we will obtain more detailed images at a resolution of 60 metres per pixel." Towards the end of its visit, Dawn will orbit Vesta at a distance of only 200 kilometres from its surface. During this phase, the gamma ray and neutron detector will determine its chemical composition, and Vesta's gravity field will be measured using high-accuracy navigation to reveal the structure of Vesta's interior.
DLR researchers have been able to test their stereo imaging software using the animation of Vesta. "Admittedly, we have used this software for the Moon, Mars and Mercury, but each mission has its own peculiarities," says Roatsch. For the virtual rehearsal, the research scientists obtained simulated images of the asteroid's surface from Dawn's optical navigation lead, Nick Mastrodemos, at NASA's Jet Propulsion Laboratory. These were based on images acquired by the Hubble Space Telescope. With this material, Roatsch and his team calculated the likely shape of Vesta. Even so, it took the DLR researchers several weeks of processing before Vesta, with its bulges and indentations, was rotating in their 3D animation. At the same time, an American team from the Planetary Science Institute in Tucson, Arizona was working on a 3D model of Vesta, using the same database but a different method. There were only slight differences between the two terrain models. "We know that our data processing can achieve the required level of accuracy," says Roatsch.
A long journey to the 'wet' asteroid Ceres
Planetary scientists realise that, until now, these have been only test runs for the actual mission. "We will not really know what Vesta looks like until Dawn reaches the asteroid," says Carol Raymond, a scientist at NASA's Jet Propulsion Laboratory and Deputy Principal Investigator for the Dawn mission. The spacecraft will orbit the asteroid for about a year, recording and analysing it as accurately as possible. DLR researchers hope to be able to map Vesta as completely as possible. But this will not be the end of the spacecraft's long journey; it will continue on to the asteroid Ceres, which is very different from Vesta. Ceres is the largest asteroid discovered so far, orbiting the Sun at up to 450 million kilometres – further out than Vesta. Under its thin outer crust, Ceres is thought to have a mantle of water ice and solidified volatiles, making up around 25 percent of its mass, surrounding a silicate core. The surface structure of the 'wet' asteroid is still unknown; it may have a thin atmosphere. In February 2015, Dawn will move into orbit around Ceres. "With the Dawn mission, we will get a picture of what happened in the first few million years after the formation of the planets," says Jaumann. "You could say that we are going back in time to the early Solar System".
The Dawn mission to Vesta and Ceres is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, for NASA's Science Mission Directorate, Washington. The University of California, Los Angeles, is responsible for overall Dawn mission science. The Dawn framing cameras have been developed and built under the leadership of the Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany, with significant contributions by DLR German Aerospace Center, Institute of Planetary Research, Berlin, and in coordination with the Institute of Computer and Communication Network Engineering, Braunschweig. The Framing Camera project is funded by the Max Planck Society, DLR, and NASA/JPL. |
Amphibian Population Declines
An unparalleled decline in populations is occurring worldwide in many species of amphibians (frogs, toads, and salamanders). Although there are various causes for declining amphibian populations, the most obvious is habitat destruction. However, introduced exotic species, pathogens, pollution, and global environmental changes all contribute. Moreover, various factors can act together to produce adverse effects on amphibians.
Because amphibians are important predators and prey in many ecosystems, declines in their populations may affect many other species that live within the same ecological community. For example, populations of aquatic insects and amphibian predators such as snakes, birds, mammals, and fish may be especially affected by a loss in amphibians. Moreover, the populations of animals that amphibians eat, such as mosquitoes, may increase as amphibians disappear.
Water Quality Factors
Amphibians have permeable, exposed skin and eggs that may readily absorb toxic substances from the environment. Their eggs are laid in water or in moist areas, and their larvae (tadpoles) are aquatic. Because amphibians are intimately tied to an aquatic environment, the quality of the water in which they live can affect their growth, development, and survival. Because pollutants, waterborne pathogens, and global environmental changes can all affect water quality, these factors can in turn affect amphibians. Conversely, amphibians are important indicators of water quality and are considered sentinel species, meaning that what affects amphibians presently may affect other animal species in the future.
A number of studies have shown that acidification of fresh water (that is, a reduction in pH to acidic levels) via acid rain, acid snowmelt, or other modes of pollution is harmful to amphibian growth and development. Some species are more tolerant of acid conditions than others. Thus, depending on the species, the amount of acidity, and other environmental variables, amphibians may experience developmental deformities and increased mortality due to acidification.
Acidification can potentially affect amphibian populations and the communities in which amphibians live. For example, some populations of toads in Britain have probably been reduced by water acidification. Salamander populations in Colorado seem to have declined because of increased acidification during snowmelt. Several studies have shown that acidification of the water can affect competition and predation between amphibians. Thus, the larvae of some frog species may have increased survival rates under acid conditions because their salamander predators show reduced predation at low pH.
Nitrates and Nitrites.
Many chemical products used in agriculture and industry pollute aquatic habitats, causing potentially severe damage to ecosystems. For example, the increase in concentration of nitrate in surface water on agricultural land due to numerous sources may be hazardous to many species of fish, wildlife, and even humans. Data suggest that nitrogen-based fertilizers may be contributing to amphibian population declines in agricultural areas. However, some species appear to be more sensitive than others to nitrate and nitrite pollution.
In one experimental study in Oregon, it was shown that some species reduced their feeding activity, swam less vigorously, and showed disequilibrium when nitrate or nitrite ions were added to the water. Importantly, all species tested in this study showed high mortality at nitrite levels deemed safe for warm-water fishes by the U.S. Environmental Protection Agency. Furthermore, significant larval mortality occurred at the recommended limits of nitrite concentration for drinking water.
Just as amphibian species display variation in sensitivity to nitrate-related compounds, they also show variation in tolerance to other toxic substances that may be found in water. Insecticides such as organophosphates, carbamates, and synthetic pyrethroids, which are used mainly in crop production, have a wide array of effects on amphibians. Depending on the concentrations used and the species involved, some of these substances may be lethal, may affect growth and development, or may affect metamorphosis.
Effects of Ultraviolet Radiation
Global environmental changes may also affect amphibians. For example, ambient (natural) but increasing levels of ultraviolet (UV) radiation, owing in part to stratospheric ozone depletion, can harm the eggs and embryos of some amphibian species.
The adverse effects of UV radiation can be enhanced in the presence of toxic substances and pathogens. For example, different species of amphibians show variation in sensitivity to aquatic pollutants known as polycyclic aromatic hydrocarbons (PAHs), which are found in locations contaminated with petroleum products or urban runoff. PAHs are extremely toxic to amphibians when they are simultaneously exposed to UV radiation. For example, one PAH, known as fluoranthene, causes increased mortality in salamanders and frogs as the amount of UV radiation increases.
UV radiation also increases amphibian mortality when a pathogenic fungus known as Saprolegnia is present. One major source of Saprolegnia is introduced stocked fish that become infected while being reared in hatcheries. It has recently been shown that when infected fish are released into natural lakes and ponds, Saprolegnia can be transmitted to amphibians. Other studies have shown that the adverse effects of UV on amphibians are enhanced when the water is acidic.
Malformations and Deformities
Water quality degradation has been linked to severe physical malformations (including missing, malformed, and extra limbs) reported in dozens of amphibian species from diverse aquatic habitats across North America.
One likely scenario for increased malformations is that trematode parasites that cause limb deformities in developing tadpoles have increased with their intermediate snail hosts. Snail populations may have increased with increased algal growth, their main food. In certain regions, lush algal growth may be occurring because of eutrophication of water from nitrogen-based fertilizer use on nearby lands.
Obviously, amphibians are being subjected to a variety of human-induced insults that are related to water quality. Special attention must be given to the presence of pollutants, pathogens, and global environmental changes that may affect amphibian growth and development, increase mortality, and eventually lead to unnatural and accelerated population declines.
SEE ALSO Acid Rain; Chemicals from Agriculture; Ecology, Fresh-Water; Forest Hydrology; Fresh Water, Natural Composition of; Global Warming: And the Hydrologic Cycle; Hydrologic Cycle; Lakes: Biological Processes; Lakes: Chemical Processes; Pollution of Lakes and Streams; Pollution Sources: Point and Nonpoint; Stream Health, Assessing.
Andrew R. Blaustein
Blaustein, Andrew R. et al. "Effects of Ultraviolet Radiation on Amphibians: Field Experiments." American Zoologist 38 (1998): 799–812.
Blaustein, Andrew R., and David B. Wake. "The Puzzle of Declining Amphibian Populations." Scientific American 272 (1995): 52–57.
Boyer, Robin, and Christian E. Grue. "The Need for Water Quality Criteria for Frogs." Environmental Health Perspectives 103 (1995): 352–357.
Johnson, Pieter T. et al. "Parasite (Ribeiroia ondatrae) Infection Linked to Amphibian Malformations in the Western United States." Ecological Monographs 72 (2002): 151–168.
Stebbins, Robert C., and Nathan W. Cohen. A Natural History of Amphibians. Princeton, NJ: Princeton University Press, 1995.
Amphibian Declines and Deformities. U.S. Geological Survey. <http://www.usgs.gov/amphibians.html> .
THE CASE OF THE CASCADES FROG
The Cascades frog ( Rana cascadae ) is a species that is threatened throughout its range in the western United States. Populations are disappearing, and eggs are dying as they are laid in lakes and ponds.
Cascades frogs are sensitive to a number of agents associated with water quality. For example, an experimental laboratory study at Oregon State University showed that survival and activity levels of tadpoles of the Cascades frog are greatly affected by ultraviolet radiation, acid water conditions, and nitrate pollution. These stressors, acting together, reduce survival and activity levels in Cascades frog tadpoles. |
Derivation of Cells
Human beings begin life as a single, newly fertilized cell. Like every cell that contains a nucleus, the fertilized cell holds all the instructions for its growth and development. The characteristics common to all living cells include the ability to reproduce, exchange gases, move, react to external stimuli, and create or utilize energy to perform their tasks.
Shortly after the ovum or egg is fertilized, it divides to form two cells. These two cells then divide to form a total of four, which again divide to form eight, and so on. This group of cells continues dividing; after nine days it attaches to the wall of the uterus and becomes an embryo.
About two weeks after conception, the cells of the embryo continue to divide, changing their shape and structure. This process is known as differentiation. The cells arrange into distinct layers called germ layers: an outer ectoderm and inner endoderm (entoderm). A third embryonic layer, the mesoderm, develops between the ectoderm and the endoderm. All the organs of the body develop or differentiate in an orderly fashion from these three primary germ layers. |
conscience
conscience, a personal sense of the moral content of one’s own conduct, intentions, or character with regard to a feeling of obligation to do right or be good. Conscience, usually informed by acculturation and instruction, is thus generally understood to give intuitively authoritative judgments regarding the moral quality of single actions.
Historically, almost every culture has recognized the existence of such a faculty. Ancient Egyptians, for example, were urged not to transgress against the dictates of the heart, for one “must stand in fear of departing from its guidance.” In some belief systems, conscience is regarded as the voice of God and therefore a completely reliable guide of conduct: among the Hindus it is considered “the invisible God who dwells within us.” Among Western religious groups, the Society of Friends (or Quakers) places particular emphasis on the role of conscience in apprehending and responding through conduct to the “Inner Light” of God.
Outside the context of religion, philosophers, social scientists, and psychologists have sought to understand conscience in both its individual and universal aspects. The view that holds conscience to be an innate, intuitive faculty determining the perception of right and wrong is called intuitionism. The view that holds conscience to be a cumulative and subjective inference from past experience giving direction to future conduct is called empiricism. The behavioral scientist, on the other hand, may view the conscience as a set of learned responses to particular social stimuli. Another explanation of conscience was put forth in the 20th century by Sigmund Freud in his postulation of the superego. According to Freud, the superego is a major element of personality that is formed by the child’s incorporation of moral values through parental approval or punishment. The resulting internalized set of prohibitions, condemnations, and inhibitions is that part of the superego known as conscience.
The Development of Self-Regulation in Young Children
Self-regulation has attracted the interest of both the general public and educators in recent years—most of the interest has been generated by stories of children who are "out of control." Self-regulation, often called self-control or self-direction, involves children's capacity for controlling emotions, interacting in positive ways with others, avoiding inappropriate or aggressive actions, and becoming an autonomous learner (Bronson, 2000, p. 32). Self-regulation is a psychosocial task that leads to an increasing sense of autonomy and initiative, so that by middle childhood, children can act independently as thinkers and playmates (Cooper, 2006).
Existing research suggests that the beginning of self-regulation is evident at birth and is immediately influenced by both individual temperament and environment. Kopp (1982) describes a developmental progression from control of arousal and sensory motor functions to a beginning ability to comply with the suggestions of others by the end of the first year. During ages 3 and 4, self-regulation becomes more sophisticated, and by 6 to 8 years, children are capable of deliberate action, planning ahead, and conscious control (Bronson, 2000, p. 33). Self-regulation can be studied or explained from those same perspectives we describe for children's development in the next chapter.
Because we know that self-regulation influences children's social competence and success in school, it is important to look for ways to support and encourage its development. Much of what you will hear as "developmentally appropriate practice" is what young children need for their development of self-regulation. The kinds of support vary with children's age, but all domains of their development will influence self-regulation. For infants, there needs to be recognizable patterns in their interactions, signals for the essential routines in their day (such as food, comfort, and sleep), and the opportunity to test their ability to control or affect the environment. For toddlers, opportunities for exploration and autonomy are necessary along with role models of appropriate behavior sequences. The support that language can provide to toddlers as they carry out simple requests and label their own actions is also critical. For preschoolers and kindergarten children, the essential opportunities are for more complex directions, clear sets of rules, skill-appropriate responsibilities, understandable consequences of their actions, and again, positive role models. Finally, for primary-grade children, opportunities for complex problem-solving strategies, individual choices, support for individual effort, and experiences with positive, trusting, and respectful adults are the supports needed for developing self-regulation.
For children from preschool to primary grades, Cooper (2006) suggests the use of literature with main characters who are "managing society's growing expectations of conformity and self-control". She reminds us that children's books have "historically helped children negotiate the social and emotional terrains of childhood", supporting their growing up (Russell, 2005).
American Heritage® Dictionary of the English Language, Fourth Edition
- Either of two historical districts and former states of southern Germany. The Lower Palatinate is in southwest Germany between Luxembourg and the Rhine River; the Upper Palatinate is to the east in eastern Bavaria. They were once under the jurisdiction of the counts palatine, who became electors of the Holy Roman Empire in 1356 and were then known as electors palatine.
Century Dictionary and Cyclopedia
- n. The office or dignity of a palatine; the province or dominion of a palatine. Specifically [capitalized], in German history, formerly an electorate of the empire, consisting of the Lower or Rhine Palatinate, and the Upper Palatinate, whose capital was Amberg. About 1620 these were separated, the Upper Palatinate and the electoral vote passing to Bavaria, while a new electorate was created later for the Palatinate. In 1777 the two were reunited; in consequence of the treaties of Lunéville (1801) and of Paris (1814-15), Bavaria retained the Upper Palatinate and a portion of the Lower Palatinate west of the Rhine, while the remainder of the Lower Palatinate was divided among Baden, Hesse, Prussia, etc. The Bavarian portions now form the governmental districts of Palatinate and Upper Palatinate.
GNU Webster's 1913
- n. The province or seigniory of a palatine; the dignity of a palatine.
- n. Either of two regions in Germany, formerly divisions of the Holy Roman Empire; the Lower Palatinate or Rhine Palatinate is now within the Rhineland-Palatinate; the Upper Palatinate is now within Bavaria. It is usually referred to as
- v. obsolete To make a palatinate of.
- n. a territory in southwestern Germany formerly ruled by the counts palatine
- n. a territory under the jurisdiction of a count palatine
“Chief among these was the state known as the Palatinate, from the German word Pfalz, a name given generally to any district ruled by a count palatine.”
“The name Palatinate has since then been confirmed to that administrative district of Bavaria, which in ecclesiastical affairs forms the Bishopric of Speyer.”
“Neither the lands of the palatinate, nor those which Conrad had inherited, formed a compact whole; but by further acquisitions which Conrad made, the foundation was laid for the principality to which the name Palatinate has clung.”
“One of the main reasons that prompted Louis XIV to sue for peace and to abandon his claims on Lorraine and the Palatinate was the rapid physical decline of the inglorious Spanish monarch, Charles II, of whose enormous possessions the French king hoped by diplomacy and intrigue to secure valuable portions.”
“The Princesses of the Palatinate are our own cousins, and it seems very natural, surely, that he should have a cordial, cousinly regard for them.”
“The Palatinate was a territory bordering on Bohemia, of over four thousand square miles, and contained nearly seven hundred thousand inhabitants.”
“The Palatinate, which is not more than one-fifth of Holland, is of infinitely more natural value.”
Internet Archive: A General collection of the best and most interesting voyages and travels in all parts of the world [microform] : many of which are now first translated into English : digested on a new plan
“Under the "Palatinate" the development of the province now known as South Carolina was begun.”
The Great South; A Record of Journeys in Louisiana, Texas, the Indian Territory, Missouri, Arkansas, Mississippi, Alabama, Georgia, Florida, South Carolina, North Carolina, Kentucky, Tennessee, Virginia, West Virginia, and Maryland
“Most so-called “Pennyslvania Dutch” came from the mid-Rhine region, mostly the Palatinate, Swabia, Alsace (a Germanic region at the time, Louis XIV notwithstanding), and up the Rhine as far as Switzerland.”
“Now, with a series of important state elections scheduled this year in states such as Berlin, Baden Württemberg, Rhineland Palatinate, and possibly North Rhine Westphalia, opposition to private-equity property deals is growing again.”
The size of an individual gene or an organism's entire genome is often measured in base pairs because DNA is usually double-stranded. Hence, the number of total base pairs is equal to the number of nucleotides in one of the strands (with the exception of non-coding single-stranded regions of telomeres). The haploid human genome (23 chromosomes) is estimated to be about 3 billion base pairs long and to contain 20,000-25,000 distinct genes.
In the case of single-stranded DNA or RNA, we speak of nucleotides, abbreviated nt (or knt, Mnt, Gnt), rather than base pairs, as the bases are not paired. To avoid confusion with units of computer storage, the forms kbp, Mbp, Gbp, etc. may be used for disambiguation.
The centimorgan is also often used to express distance along a chromosome, but the number of base pairs it corresponds to varies widely. In the human genome, it is about 1 million base pairs.
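To make these unit conventions concrete, here is a minimal Python sketch (the function names are my own, for illustration only) that formats a sequence length with the usual prefixes and applies the rough one-million-base-pairs-per-centimorgan rule of thumb mentioned above:

```python
def format_length(length, single_stranded=False):
    """Format a sequence length as bp/kbp/Mbp/Gbp, or as nt/knt/Mnt/Gnt
    for single-stranded molecules, whose bases are not paired."""
    unit = "nt" if single_stranded else "bp"
    for prefix, scale in (("G", 10**9), ("M", 10**6), ("k", 10**3)):
        if length >= scale:
            return f"{length / scale:.2f} {prefix}{unit}"
    return f"{length} {unit}"

def centimorgans_to_bp(cm, bp_per_cm=1_000_000):
    """Very rough conversion for the human genome; the physical
    distance per centimorgan varies widely along each chromosome."""
    return cm * bp_per_cm

print(format_length(3_000_000_000))               # 3.00 Gbp (haploid human genome)
print(format_length(1500, single_stranded=True))  # 1.50 knt
print(centimorgans_to_bp(2.5))                    # 2500000.0
```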
Hydrogen bonding is the chemical mechanism that underlies the base-pairing rules described above. Appropriate geometrical correspondence of hydrogen bond donors and acceptors allows only the "right" pairs to form stably. The GC base pair has three hydrogen bonds, whereas the AT base pair has only two; as a consequence, the GC pair is more stable.
The larger nucleobases, adenine and guanine, are members of a class of doubly-ringed chemical structures called purines; the smaller nucleobases, cytosine and thymine (and uracil), are members of a class of singly-ringed chemical structures called pyrimidines. Purines are only complementary with pyrimidines: pyrimidine-pyrimidine pairings are energetically unfavorable because the molecules are too far apart for hydrogen bonding to be established; purine-purine pairings are energetically unfavorable because the molecules are too close, leading to electrostatic repulsion. The only other possible pairings are GT and AC; these pairings are mismatches because the pattern of hydrogen donors and acceptors does not correspond. (It should be noted that the GU pairing, with two hydrogen bonds, does occur fairly often in RNA but rarely in DNA.)
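The pairing rules described above are simple enough to encode directly. The following minimal Python sketch (the helper names are my own) classifies bases as purines or pyrimidines and returns the Watson-Crick complement of a DNA strand:

```python
PURINES = {"A", "G"}           # doubly-ringed bases
PYRIMIDINES = {"C", "T", "U"}  # singly-ringed bases

# Watson-Crick pairing in DNA: A-T (two hydrogen bonds), G-C (three)
DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def classify(base):
    """Return 'purine' or 'pyrimidine' for a single base."""
    if base in PURINES:
        return "purine"
    if base in PYRIMIDINES:
        return "pyrimidine"
    raise ValueError(f"unknown base: {base!r}")

def complement(strand):
    """Return the base-by-base complement of a DNA strand."""
    return "".join(DNA_COMPLEMENT[b] for b in strand.upper())

print(classify("G"))       # purine
print(complement("ATGC"))  # TACG
```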
Paired DNA and RNA molecules are comparatively stable at room temperature but the two nucleotide strands will separate above a melting point that is determined by the length of the molecules, the extent of mispairing (if any), and the GC content. Higher GC content results in higher melting temperatures; it is therefore unsurprising that the genomes of extremophile organisms such as Thermus thermophilus are particularly GC-rich. Conversely, regions of a genome that need to separate frequently - for example, the promoter regions for often-transcribed genes - are comparatively GC-poor (for example, see TATA box). GC content and melting temperature must also be taken into account when designing primers for PCR reactions.
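As a back-of-the-envelope illustration of the GC-content effect on melting temperature, the sketch below computes a primer's GC fraction and estimates its melting temperature with the Wallace rule (roughly 2 °C per A/T and 4 °C per G/C). The Wallace rule is not mentioned in the text above and is only a crude approximation for short oligonucleotides; real primer-design tools use more sophisticated nearest-neighbor models.

```python
def gc_content(seq):
    """Fraction of bases that are G or C."""
    seq = seq.upper()
    return sum(base in "GC" for base in seq) / len(seq)

def wallace_tm(seq):
    """Wallace-rule melting temperature estimate for short primers:
    about 2 degrees C per A/T and 4 degrees C per G/C."""
    seq = seq.upper()
    at = sum(base in "AT" for base in seq)
    gc = sum(base in "GC" for base in seq)
    return 2 * at + 4 * gc

primer = "ATGCGCGCTA"
print(f"GC content: {gc_content(primer):.0%}")  # GC content: 60%
print(f"Estimated Tm: {wallace_tm(primer)} C")  # Estimated Tm: 32 C
```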
Base stacking interactions between the pi orbitals of the bases' aromatic rings also contribute to stability, and again GC stacking interactions with adjacent bases tend to be more favorable. (Note, though, that a GC stacking interaction with the next base pair is geometrically different from a CG interaction.) Base stacking effects are especially important in the secondary structure of RNA; for example, RNA stem-loop structures are stabilized by base stacking in the loop region.
Chemical analogs of nucleotides can take the place of proper nucleotides and establish non-canonical base-pairing, leading to errors (mostly point mutations) in DNA replication and DNA transcription. One common mutagenic base analog is 5-bromouracil, which resembles thymine but can base-pair to guanine in its enol form.
Other chemicals, known as DNA intercalators, fit into the gap between adjacent bases on a single strand and induce frameshift mutations by "masquerading" as a base, causing the DNA replication machinery to skip or insert additional nucleotides at the intercalated site. Most intercalators are large polyaromatic compounds and are known or suspected carcinogens. Examples include ethidium bromide and acridine. |
Old City Hall, built between 1820 and 1850, was Washington City's first public building. It housed a court of law where trials of abolitionists and Underground Railroad participants occurred in the early 1820s. The American Convention for the Abolition of Slavery also met here in 1829. The building also housed the early Office of the Recorder of Deeds. Frederick Douglass worked here as U.S. marshal (1877-1881) and as recorder of deeds for the city (1881-1885).
Old City Hall was also the site for the only known instance of compensation of white slaveholders for the loss (by government emancipation) of African Americans they legally owned as property. Though most claimants were white, there is evidence that African Americans also sought compensation for family members whose titles they had purchased in order to keep them from being sold to whites. White slaveholders were granted compensation after enslaved African Americans were “first freed” by the DC Emancipation Act on April 16, 1862. With this act, African Americans here were legally freed, received a certificate of emancipation, and were eligible for payments of up to $100 if they emigrated to Haiti, Liberia, or other colonies outside of the United States. White slaveholders who were loyal to the Union could receive compensation of up to $300 per enslaved person.
The all-white, three-member Emancipation Commission conducted compensation interviews here. African Americans participated in the process by testifying for and against white slaveholders seeking compensation. By the end of the compensation process, close to $1 million had been dispensed to white slaveholders.
Buildings, Old City Hall, Judiciary Square, Vertical Files, Historical Society of Washington, D.C.
Stanley Harrold, Subversives: Antislavery Community in Washington, D.C., 1828-1865 (Baton Rouge: Louisiana State University Press, 2003).
Allan Johnston, Surviving Freedom: The Black Community of Washington, D.C., 1860-1880 (New York: Garland Publishing, 1993).
National Archives, DC Emancipation Act, archives.gov/exhibit_hall/featured_documents/dc_emancipation_act/transcription.html
"The Heritage Trails which you create are such gifts to DC.
H Street NE will be enhanced immeasurably by the addition of its guiding signposts of the past and point us towards the future." |
Columbia University researchers said scientists are keeping a keen eye on Thwaites Glacier, which drains into west Antarctica's Amundsen Sea, for its potential to raise global sea levels as the planet warms. They noted that neighboring glaciers in the Amundsen region are thinning rapidly. These include the Pine Island Glacier and the much larger Getz Ice Shelf.
They said that the new study published in Geophysical Research Letters is the latest to confirm the importance of seafloor topography in predicting how these glaciers will behave in the near future.
Scientists have seen that a rock feature off west Antarctica seems to be slowing the glacier's slide into the sea. The new study now connects that rock feature to a larger ridge, using geophysical data collected during flights over Thwaites Glacier in 2009 under NASA's Ice Bridge campaign.
The authors of the study said the newly discovered ridge is 700 meters tall and has two peaks - one that is now anchoring the glacier and another, located farther offshore, that held the glacier in place between 55 and 150 years ago.
The discovery that Thwaites Glacier is losing its grip on a previously unknown ridge has helped scientists understand why the glacier appears to be moving faster than it used to.
A press release issued on the new study noted that in 2009, researchers sent a robot submarine beneath Pine Island Glacier's floating ice tongue and found a ridge that's about half the size of the one off Thwaites Glacier.
Researchers guessed that Pine Island Glacier lifted off that ridge in the 1970s, which allowed warm ocean currents to melt the glacier from below.
Earlier this year, Lamont-Doherty oceanographer Stan Jacobs and colleagues noted in a study in Nature Geoscience that the glacier's ice shelf is now moving 50 percent faster than it was in the early 1990s. Pine Island Glacier is moving into the sea at the rate of 4 kilometers a year - four times faster than the fastest-moving section of Thwaites, according to a press release.
"Knowing the ridge is there lets us understand why the wide ice tongue that used to be in front of the glacier has broken up," Lamont-Doherty geophysicist Robin Bell, a study co-author, said via press release. "We can now predict when the last bit of floating ice will lift off the ridge. We expect more ice will come streaming out of the Thwaites Glacier when this happens."
Bell added that ridges like this one and the one discovered in front of Pine Island Glacier stabilize ice sheets, but can also be a critical part of the destabilizing process. |
Geometry materials developed for the Special School District. These materials can be used for high school students, and possibly for some middle school students depending on their developmental level.
The materials were developed to adhere as closely as possible to the NCTM Standards. They were written with a student in mind who has had one semester of algebra; that algebra coursework already includes some coordinate geometry, leaving more time to focus on the other standards.
In grades 9–12 all students should:
Analyze characteristics and properties of two- and three-dimensional geometric shapes and develop mathematical arguments about geometric relationships
- 1. analyze properties and determine attributes of two- and three-dimensional objects;
- 2. explore relationships (including congruence and similarity) among classes of two- and three-dimensional geometric objects, make and test conjectures about them, and solve problems involving them;
- 3. establish the validity of geometric conjectures using deduction, prove theorems, and critique arguments made by others;
- 4. use trigonometric relationships to determine lengths and angle measures.
Specify locations and describe spatial relationships using coordinate geometry and other representational systems
- 5. use Cartesian coordinates and other coordinate systems, such as navigational, polar, or spherical systems, to analyze geometric situations;
- 6. investigate conjectures and solve problems involving two- and three-dimensional objects represented with Cartesian coordinates.
Apply transformations and use symmetry to analyze mathematical situations
- 7. understand and represent translations, reflections, rotations, and dilations of objects in the plane by using sketches, coordinates, vectors, function notation, and matrices (a concrete sketch of the matrix approach follows this list);
- 8. use various representations to help understand the effects of simple transformations and their compositions.
Use visualization, spatial reasoning, and geometric modeling to solve problems
- 9. draw and construct representations of two- and three-dimensional geometric objects using a variety of tools;
- 10. visualize three-dimensional objects and spaces from different perspectives and analyze their cross sections;
- 11. use vertex-edge graphs to model and solve problems;
- 12. use geometric models to gain insights into, and answer questions in, other areas of mathematics;
- 13. use geometric ideas to solve problems in, and gain insights into, other disciplines and other areas of interest such as art and architecture.
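As a concrete illustration of standard 7 above, here is a short Python sketch (my own example, not part of the SSD materials) that represents a rotation and a reflection of a point in the plane as 2×2 matrices:

```python
import math

def apply(matrix, point):
    """Multiply a 2x2 matrix by a 2D point treated as a column vector."""
    (a, b), (c, d) = matrix
    x, y = point
    return (a * x + b * y, c * x + d * y)

def rotation(degrees):
    """Matrix for a counterclockwise rotation about the origin."""
    t = math.radians(degrees)
    return ((math.cos(t), -math.sin(t)),
            (math.sin(t), math.cos(t)))

REFLECT_ACROSS_X_AXIS = ((1, 0), (0, -1))

p = (1.0, 0.0)
print(apply(rotation(90), p))           # approximately (0.0, 1.0)
print(apply(REFLECT_ACROSS_X_AXIS, p))  # (1.0, -0.0)
```

Composing two transformations corresponds to multiplying their matrices, which is one reason the standards single out the matrix representation.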
We can calculate the volume of any prism simply by knowing the height of the prism and the area of one of its bases. When calculating prism volume, this volume formula can be applied to both right and oblique prisms with bases of any shape, such as triangles, quadrilaterals, or other polygons. A prism's volume is a measure of the space occupied by the solid.
If you want to calculate the volume of any prism, there are only two things that you need to know: one, the height of that prism, and two, the area of one of its bases. So I'm going to shade in our bottom base here, and I'm going to label this as capital B. So when I write my volume formula, I'm going to say the volume V of this prism is equal to its base area times its height: V = B times H, where capital B is your base area and capital H is the height of the prism.
So the reason why this formula is useful is because you might have a triangular prism, a trapezoidal prism, a hexagonal prism. This formula will work no matter what kind of prism you have.
So whatever your base area is, and I guess I should write base area, you're going to substitute in that formula. So if this was a trapezoid, then you would substitute in B1 plus B2 times H, all divided by two. And that's how you would calculate your base area.
If you had, let's say, a regular hexagon, you're going to use apothem times side length times the number of sides, divided by two. So this way, this formula volume, equals base area times height, can be applied to any kind of prism.
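Putting these base-area formulas together with V = B times H, here is a minimal Python sketch (the function names are my own) of the calculation:

```python
def prism_volume(base_area, height):
    """V = B * H."""
    return base_area * height

def trapezoid_area(b1, b2, h):
    """(b1 + b2) * h / 2, the trapezoidal base area described above."""
    return (b1 + b2) * h / 2

def regular_polygon_area(apothem, side, n_sides):
    """apothem * side length * number of sides / 2, as for the hexagon above."""
    return apothem * side * n_sides / 2

# Trapezoidal prism: parallel sides 3 and 5, trapezoid height 4, prism height 10
print(prism_volume(trapezoid_area(3, 5, 4), 10))          # 160.0
# Regular hexagonal prism: apothem 2.6, side 3, prism height 10
print(prism_volume(regular_polygon_area(2.6, 3, 6), 10))  # 234.0
```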
One other thing, this is a right prism. If you had an oblique prism, as long as you know the height, and you can calculate its base area, that will be the same. You can use the same formula. It works for right prisms and oblique prisms. |
Can animals perceive magnetic fields? This question has intrigued biologists and others. Our eyes, of course, are simply antennas capable of detecting particularly useful frequencies of electromagnetic waves, or light. Why should animals not also possess magnetic receptors somehow tuned to our Earth’s magnetic field?
Researchers at the Baylor College of Medicine in Houston, led by Dr. J. David Dickman, have taken steps to answer this question in the affirmative. They concentrated their research on pigeons, which have long been suspected of having magnetic perception to aid their navigation. By examining neural activity in the brain stems of pigeons, Dr. Dickman and Dr. Le-Qing Wu were able to correlate the birds’ neural activity to a changing magnetic environment, thus demonstrating that the birds were processing a magnetic signal. A report describing their results appeared online on April 26, 2012 in Science Express.
Drs. Dickman and Wu were also able to correlate the rate of neuron firing to different orientations of the applied magnetic field. This is an effective proof that the birds are not only aware of the direction of magnetic north, but also their latitude as the up/down orientation of Earth’s magnetic field changes as one travels north or south.
Yet a big question remains: what is the mechanism by which these birds and other animals might receive magnetic signals? This question is the subject of debate. A diverse group of animals, ranging from turtles to birds to newts and lobsters, has been identified as having magnetic perception through behavioral studies. These studies generally involve placing the subject in a controllable magnetic field and noting how its behavior changes as the field changes. Pulling from such a diverse group of animals increases the difficulty of identifying a common mechanism for magneto-perception, if one exists at all.
Another difficulty in identifying how these fields are initially received is that magnetic fields permeate our bodies. Unlike other signals animals receive, such as light, smells, and tactile sensations, they are in no way blocked from the interior of the body by the skin. Therefore, magnetic field receptors could be located anywhere in an animal's body, not just on its exterior, such as in the eyes.
A few ideas have been proposed. One that applies to animals constantly on the move, such as fish, is the possibility of electromagnetic induction. Faraday's Law, one of the laws governing electric and magnetic forces, states that a changing magnetic flux through a circuit will produce a voltage and current in that circuit. This could be a mechanism animals use to detect magnetic fields.
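As a rough numerical illustration of the induction idea (my own sketch, not from the article; the numbers are made up), the motional EMF across a conductor of length L moving at speed v perpendicular to a magnetic field B is EMF = B * v * L. Plugging in an Earth-like field strength shows how tiny the signal an animal would have to detect is:

```python
def motional_emf(b_field_tesla, speed_m_per_s, length_m):
    """Motional EMF for a conductor moving perpendicular to a uniform
    magnetic field: EMF = B * v * L, a special case of Faraday's law."""
    return b_field_tesla * speed_m_per_s * length_m

# Earth's field is roughly 50 microtesla; suppose a 0.3 m fish
# swimming at 1 m/s (illustrative values only).
emf = motional_emf(50e-6, 1.0, 0.3)
print(f"{emf * 1e6:.1f} microvolts")  # 15.0 microvolts
```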
Another possibility is that animals possess small samples of magnetite, Fe3O4, a naturally occurring magnetic ore. When a magnetic field is applied to magnetite, it will twist around to align itself with that field, just as a compass needle does. It is possible that the ore is attached to tiny hairs similar to those found in our ears; as the ore tugs on the hairs, a signal is sent through the nervous system.
Finally, there are some chemical reactions that become favorable under the application of magnetic fields. These reactions could be used to discern directionality of applied magnetic fields.
Dickman and Wu's study represents one of the first neurological studies of magnetic perception. They placed recording electrodes, essentially fine conductors connected to sensitive voltmeters, at different locations within the brain stems of the pigeons. This allowed them to monitor not only which areas of the brain stem were responding to the magnetic stimulus, but also the strength of the response. They found that the strength of the response changed with the orientation of the applied magnetic field. They also observed that the neurological response was strongest when the field strength was approximately the same as Earth's magnetic field.
This fascinating study might be one step in realizing that we as animals might possess more than our recognized five senses.
Bottom line: Drs. J. David Dickman and Le-Qing Wu at the Baylor College of Medicine in Houston, Texas examined neural activity in the brain stems of pigeons to show that these birds process a magnetic signal. |
Suffrage Parade, New York City, May 6, 1912
The suffrage parade was a new development in the fight for women’s suffrage in the United States. It was a bold tactic, adopted by suffragists and the more militant suffragettes shortly after the turn of the century. Although some women chose to quit the movement rather than march in public, others embraced the parade as a way of publicizing their cause and combating the idea that women should be relegated to the home. Parades often united women of different social and economic backgrounds. Because they were carried out in ...
Certificate of Ratification of the Nineteenth Amendment to the Constitution, Accompanied by Resolution and Transcript of the Journals of the Two Houses of the General Assembly of the State of Tennessee
The Nineteenth Amendment guarantees all American women the right to vote. The amendment was first introduced in Congress in 1878. Over the years, champions of voting rights pursued different strategies for achieving their goal. Some worked to pass suffrage acts in each state, and by 1912 nine western states had adopted woman suffrage legislation. Others challenged male-only voting laws in the courts. Suffragists also used tactics such as parades, silent vigils, and hunger strikes. Often supporters met fierce resistance as opponents heckled, jailed, and sometimes physically abused them. By 1916 ... |
Civil Rights and the Courts
In the first half of the 20th century, civil rights were pursued primarily through the court system, as activists, organizations like the NAACP, and lawyers worked to overturn laws that permitted segregation and exclusion based on race. One of the greatest victories was the 1954 Supreme Court decision, Brown v. Board of Education of Topeka, Kansas, which overturned the precedent of Plessy v. Ferguson (1896) that had legalized the doctrine of "separate but equal." "Separate but equal" provided the legal justification for segregation of facilities and services, including the school system. In practice, segregation of the school system in most communities was anything but equal, with the majority of resources going to white schools, which left African American students with access only to an inferior educational system. Moreover, segregation was psychologically harmful to students, cultivating a sense of inferiority in African American children and hampering their educational development.
In Brown v. Board, the Supreme Court unanimously declared segregated schools "inherently unequal" and unconstitutional, but did not order a clear timetable for the implementation of integration. Because some prominent southern politicians did not accept the Brown decision and blocked desegregation, the implementation of Brown led to some of the most vicious and protracted fights in the Civil Rights Movement. Little Rock, Arkansas was the site of the most famous resistance to desegregation, when Governor Orval Faubus called in the Arkansas National Guard to block nine black students from entering Little Rock Central High School in 1957. Unable to enter the school, the Little Rock Nine (as they were called) were harassed by a mob and threatened with lynching. The crisis attracted the attention of the nation, as well as of President Eisenhower, who met with the Governor and warned him not to interfere with the implementation of the Brown ruling. Ultimately, Eisenhower ordered the Army to Little Rock and federalized the Arkansas National Guard to remove them from Governor Faubus's control. By the end of September, the Little Rock Nine were able to enter the school with Army escort, though they suffered verbal and physical abuse from their fellow classmates throughout the tense school year. The following year (1958-59), all of Little Rock's high schools were shut to prevent further desegregation. (Many white students attended private schools that year, whereas most African American students, who did not have that option, lost a year of schooling.)
Segregation in the North
Though segregation was not the law in the northern states, neither were most school systems well integrated. Because school assignment was usually linked to neighborhood, the existence of residential segregation led to de facto segregation of the school system (meaning schools were segregated in reality, but not by law, or de jure). Because school budgets were often linked to property taxes, poor neighborhoods tended to have poorer schools with inferior facilities. And the schools with a large non-white population tended to be staffed by inexperienced teachers who did not have the seniority to choose a school district with more money and better resources. De facto segregation remained (and, in some places, remains) a common issue in the North, even many years after de jure segregation was outlawed in the South. Since there were no laws involved, de facto segregation was harder to combat, and in some ways more insidious, than de jure segregation.
De facto segregation of schools in the North could be a complicated issue for Jews. Though the majority of northern Jews supported civil rights, they also placed a great deal of emphasis on education and wanted their children to attend the best schools. Therefore, while they believed in integration in theory, Jews sometimes were unwilling to sacrifice their own children's education to that ideal. In addition, many of the white teachers and administrators in urban school systems like New York's were Jewish, so conflicts between the school faculty/administration and students/parents – as in the case of the Ocean Hill-Brownsville school crisis in 1968 (see Unit 3 lesson 2) – often pitted Jews against African Americans.
Of course, this was not always the case, as illustrated by Skipwith v. New York City Board of Education – a court case that was decided by a prominent Jewish judge in favor of the black plaintiffs, holding the Board of Education responsible for de facto segregation of the schools.
Skipwith v. New York City Board of Education
In 1958 the Skipwith family, which lived in New York City, boycotted their local public school because they felt their daughter was getting an inferior education. This school, like their neighborhood, was mostly black, and most of the teachers who worked there were relatively inexperienced young white teachers. Since the Skipwiths were not allowed to enroll their daughter in a school in a different neighborhood, they chose not to send her to school at all. As a result, the New York City Board of Education charged the family with neglect. (Other black families in their neighborhood joined them in their boycott.)
The case, Skipwith v. New York City Board of Education, dealt with de facto segregation of the schools and the related issue of residential segregation. The case came before Judge Justine Wise Polier on the Family Court. Her handling of this case helped bring attention to the issue of de facto segregation. Her decision, presented on December 15, 1958, indicated that while the Board of Education was not responsible for de facto segregation, it was at fault for not providing qualified teachers to schools with fewer white students. (The School Board's policy up to this point was to allow teachers to choose their schools and few qualified, experienced teachers chose to work at non-white majority public schools.) Judge Polier went on to say that until the Board of Education rectified the situation, parents could not be punished for boycotting the schools. Each child had the right to a quality education.
Justine Wise Polier (1903-1987)
Justine Wise Polier was born in 1903 to Rabbi Stephen Wise and Louise Waterman Wise. Her father, a founder of the American Jewish Congress, was a leading advocate of an Israeli state, one of the earliest supporters of the National Association for the Advancement of Colored People (NAACP), and a pro-labor activist. Her mother was a painter, an ardent Zionist, and founder of a Jewish foster care and adoption agency.
As she came of age, Polier grew into the social activism modeled by her parents. While in college, she lived in a settlement house and taught English to immigrants. In her last year in college, she did research on women's workplace injuries and the inadequacy of their workmen's compensation. After college, Polier wanted to experience first-hand the lives of women laborers. Using her mother's maiden name (because her father's pro-labor stance was well known) she got a job in a textile factory, but anti-union spies discovered her identity and she was quickly blacklisted from the factory.
Polier then enrolled in Yale Law School. When a strike broke out at her former workplace, she commuted from Yale to participate in it, giving fiery speeches against the mill's "feudal tyranny" and terrible conditions and helping workers win the right to unionize.
After she graduated from law school, Polier worked for the Workman's Compensation Division in New York, helping to eliminate system-wide corruption and draft new laws. In 1935, she became the first woman judge in New York State, when she was appointed to the Domestic Relations Court, where she spent the next thirty-eight years. Polier's most famous and influential decision was the 1958 Skipwith case.
Polier regularly used her position to fight for the poor and disempowered. She tried to implement juvenile justice law as treatment, not punishment, making her court the center of a community network that encompassed psychiatric services, economic aid, teachers, placement agencies, and families. She also fought religious and racial discrimination. At the time, New York relied for social services on private religious agencies, which often denied treatment and foster care to non-white children. No private services at all existed for black Protestant boys. Polier was so appalled that in 1936 she brought twenty cases to the mayor, who sent her to the Episcopalian Mission Society, which agreed to open the Wiltwyck School for boys.
Despite the changes brought by the Civil Rights Movement in the 1950s and 1960s, Polier spent her final years as a judge still battling institutional racism. When Polier's attempts to find foster care for Shirley Wilder, an African American girl, were rejected by every agency, she helped initiate a class action suit. Wilder v. Sugarman, begun in 1974, charged public and private foster care agencies with unconstitutionally discriminating on the basis of religion and race. The controversial case spanned more than two decades.
Polier also served as vice-president of the American Jewish Congress and president of its women's division. Together with her husband Shad Polier, she spoke out about the importance of the Jewish community's commitment to civil rights.
Much of Polier's motivation came from her Jewish heritage. For her, to be a Jew meant an unwavering commitment to uphold the rights of all people and a moral obligation to speak out against injustice. She was deeply moved by the Jewish commitment to justice (often defined as a "prophetic tradition" because the biblical prophets were known for condemning rote religious behavior and corruption and for challenging their communities to live justly) and she spoke of this "vital heritage" as the most important guiding force in her life. Polier criticized American Jews for losing themselves in materialism and abandoning their responsibility to justice for "all human beings." |
Hemorrhagic fevers are caused by viruses that exist throughout the world. However, they are most common in tropical areas. Early symptoms, such as muscle aches and fever, can progress to a mild illness or to a more debilitating, potentially fatal disease. In severe cases, a prominent symptom is bleeding, or hemorrhaging, from orifices and internal organs.
Although hemorrhagic fevers are regarded as emerging diseases, they probably have existed for many years. This designation isn't meant to imply that they are newly developing, but rather that human exposure to the causative viruses is increasing to the point of concern.
These viruses are maintained in nature in insect, arthropod (insects, spiders, and other invertebrates with hard external skeletons), or animal populations—so-called disease reservoirs. Individuals within these populations become infected with a virus but do not die from it. In many cases, they don't even develop symptoms. Then the viruses are transmitted from a reservoir population to humans by vectors—either members of the reservoir population or an intervening species, such as mosquitoes.
Hemorrhagic fevers are generally endemic, or linked to specific locations. If many people reside in an endemic area, the number of cases may soar. For example, dengue fever, a type of hemorrhagic fever, affects approximately 100 million people annually. A large percentage of those infected live in densely populated southeast Asia, an area in which the disease vector, a mosquito, thrives. Some hemorrhagic fevers are exceedingly rare, because people very infrequently encounter the virus. Marburg hemorrhagic fever, which has affected fewer than 40 people since its discovery in 1967, provides one such example. Fatality rates are also variable. In cases of dengue hemorrhagic fever-dengue shock syndrome, 1–5% of the victims perish. On the other end of the spectrum is Ebola, an African hemorrhagic fever that kills 30–90% of those infected.
The onset of hemorrhagic fevers may be sudden or gradual, but all of them are linked by the potential for hemorrhaging. However, not all cases progress to this very serious symptom. Hemorrhaging may be attributable to the destruction of blood coagulating factors or to increased permeability of body tissues. The severity of bleeding ranges from petechiae, which are pinpoint hemorrhages under the skin surface, to distinct bleeding from body orifices such as the nose or vagina.
Causes and symptoms
The viruses that cause hemorrhagic fevers are found most commonly in tropical locations; however, some are found in cooler climates. Typical disease vectors include rodents, ticks, and mosquitoes, but person-to-person transmission in health care settings or through sexual contact can also occur.
Ebola is the most famous of the Filoviridae, a virus family that also includes the Marburg virus. Ebola is endemic to Africa, particularly the Democratic Republic of the Congo and Sudan; the Marburg virus is found in sub-Saharan Africa. The natural reservoir of filoviruses is unknown. The incubation period, or time between infection and appearance of symptoms, is thought to last three to eight days, possibly longer.
Symptoms appear suddenly, and include severe headache, fever, chills, muscle aches, malaise, and appetite loss. These symptoms may be accompanied by nausea, vomiting, diarrhea, and abdominal pain. Victims become apathetic and disoriented. Severe bleeding commonly occurs from the gastrointestinal tract, nose and throat, and vagina. Other bleeding symptoms include petechiae and oozing from injection sites. Ebola is fatal in 30–90% of cases.
Viruses of the Arenaviridae family cause the Argentinian, Brazilian, Bolivian, and Venezuelan hemorrhagic fevers. Lassa fever, which occurs in west Africa, also arises from an arenavirus. Infected rodents, the natural reservoir, shed virus particles in their urine and saliva, which humans may inhale or otherwise come in contact with.
Fever, muscle aches, malaise, and appetite loss gradually appear one to two weeks after infection with the South American viruses. Initial symptoms are followed by headache, back pain, dizziness, and gastrointestinal upset. The face and chest appear flushed and the gums begin to bleed. In about 30% of cases, the disease progresses to bleeding under the skin and from the mucous membranes, and/or to effects on the nervous system, such as delirium, coma, and convulsions. Untreated, South American hemorrhagic fevers have a 10–30% fatality rate.
Lassa fever also begins gradually, following an 8–14 day incubation. Initial symptoms resemble those of the South American hemorrhagic fevers, followed by a sore throat, muscle and joint pain, severe headache, pain above the stomach, and a dry cough. The face and neck become swollen, and fluid may accumulate in the lungs. Bleeding occurs in 15–20% of infected individuals, mostly from the gums and nose. Overall, the fatality rate is lower than 2%, but hospitals, which typically treat the most serious cases, may see fatality rates of 20%.
Yellow fever occurs in tropical areas of the Americas and Africa and is transmitted from monkeys to humans by mosquitoes. The virus may produce a mild, possibly unnoticed illness, but some individuals are suddenly stricken with a fever, weakness, low back pain, muscle pain, nausea, and vomiting. This phase lasts one to seven days, after which the symptoms recede for one to two days. Symptoms then return with greater intensity, along with jaundice, delirium, seizures, stupor, and coma. Bleeding occurs from the mucous membranes and under the skin surface, and dark blood appears in stools and vomit.
Mosquitoes also transmit the dengue virus. Dengue fever is endemic in southeast Asia and areas of the Americas. Cases have also been reported in the Caribbean, Saudi Arabia, and northern Australia. This virus causes either the mild dengue fever or the more serious dengue hemorrhagic fever-dengue shock syndrome (DHF-DSS).
In children, dengue fever is characterized by a sore throat, runny nose, slight cough, and a fever lasting for a week or less. Older children and adults experience more severe symptoms: fever, headache, muscle and joint pain, loss of appetite, and a rash. The skin appears flushed, and intense pain occurs in the bones and limbs. After nearly a week, the fever subsides for one to two days before returning. Minor hemorrhaging, such as from the gums, or more serious gastrointestinal bleeding may occur.
DHF-DSS primarily affects children younger than 15 years. The symptoms initially resemble those of dengue fever in adults, without the bone and limb pain. As the fever begins to abate, the individual's condition worsens and hemorrhaging occurs from the nose, gums, and injection sites. Bleeding is also seen from the gastrointestinal, genitourinary, and respiratory tracts.
The Bunyaviridae family includes several hundred viruses but only a few are responsible for hemorrhagic fevers in humans.
Rift Valley fever is caused by a phlebovirus found in sub-Saharan Africa and the Nile delta. Natural reservoirs are wild and domestic animals, and transmission occurs through contact with infected animals or through mosquito bites. The incubation period lasts 3–12 days. Most cases of Rift Valley fever are mild and may be symptomless. If symptoms develop, they include fever, backache, muscle and joint pain, and headache. Hemorrhagic symptoms occur rarely; death, which occurs in fewer than 3% of cases, is attributable to massive liver damage.
Crimean-Congo hemorrhagic fever is caused by a nairovirus and occurs in central and southern Africa, Asia, Eurasia, and the Middle East. The virus is found in hares, birds, ticks, and domestic animals and may be transmitted by ticks or by contact with infected animals. The nairovirus incubation period is three to 12 days, after which an individual experiences fever, chills, headache, severe muscle pain, pain above the stomach, nausea, vomiting, and appetite loss. Bleeding under the skin and gastrointestinal and vaginal bleeding may develop in the most severe cases. Death rates range from 10% in southern Russia to 50% in parts of Asia.
Hemorrhagic fever with renal (kidney) syndrome is caused by the hantaviruses: Hantaan, Seoul, Puumala, and Dobrava. Hantaan virus occurs in northern Asia, the Far East, and the Balkans; Seoul virus is found worldwide; Puumala virus is found in Scandinavia and northern Europe; while Dobrava virus occurs in the Balkans. Wild rodents are the natural reservoirs and transmit the virus via their excrement or body fluids or through direct contact. Initial symptoms develop within 10–40 days and include fever, headache, muscle pain, and dizziness. Other symptoms are blurry vision, abdominal and back pain, nausea, and vomiting. High levels of protein in the urine signal kidney damage; hemorrhaging may also occur. Death rates range from 0–10%.
Since the hemorrhagic fevers share symptoms with many other diseases, positive identification of the disease relies on evidence of the viruses in the bloodstream—such as detection of antigens and antibodies—or isolation of the virus from the body. Disruptions in the normal levels of bloodstream components may be helpful in determining some, but not all, hemorrhagic fevers.
Lassa fever, and possibly other hemorrhagic fevers, respond to ribavirin, an antiviral medication. However, most of the hemorrhagic fever viruses can only be treated with supportive care. Such care centers around maintaining correct fluid and electrolyte balances in the body and protecting the patient against secondary infections. Heparin and vitamin K administration, coagulation factor replacement, and blood transfusions may be effective in lessening or stopping hemorrhage in some cases.
Recovery from some hemorrhagic fevers is more certain than from others. The filoviruses are among the most lethal; fatality rates for Ebola range from 30–90%, while DHF-DSS cases result in a 1–5% fatality rate. Whether a case occurs during an epidemic or as an isolated case also has a bearing on the outcome. For example, isolated cases of yellow fever have a 5% mortality rate, but 20–50% of epidemic cases may be fatal.
Permanent disability can occur with some types of hemorrhagic fever. About 10% of severely ill Rift Valley fever victims suffer retina damage and may be permanently blind, and 25% of South American hemorrhagic fever victims suffer potentially permanent deafness.
Proper treatment is vital. In cases of DHF-DSS, fatality can be reduced from 40–50% to less than 2% with adequate medical care. For individuals who survive hemorrhagic fevers, prolonged convalescence is usually inevitable. However, survivors seem to gain lifelong immunity against the virus that made them ill.
Hemorrhagic fevers can be prevented through vector control and personal protection measures. Attempts have been made in urban and settled areas to destroy mosquito and rodent populations. In areas where such measures are impossible, individuals can use insect repellents, mosquito netting, and other methods to minimize exposure.
Vaccines have been developed against yellow fever, Argentinian hemorrhagic fever, and Crimean-Congo hemorrhagic fever. Vaccines against other hemorrhagic fevers are being researched.
Antigen—A specific feature, such as a protein, on an infectious agent. Antibodies use this feature as a means of identifying infectious intruders.
Endemic—Referring to a specific geographic area in which a disease may occur.
Hemorrhage—As a noun, an episode of bleeding or the escape of blood from the blood vessels. As a verb, to bleed.
Incubation—The time period between exposure to an infectious agent, such as a virus or bacterium, and the appearance of symptoms of illness.
Petechiae—Pinpoint hemorrhages that appear as reddish dots beneath the surface of the skin.
Reservoir—A population in which a virus is maintained without causing serious illness to the infected individuals.
Ribavirin—A drug that is used to combat viral infections.
Vector—A member of the reservoir population or an intervening species that can transmit a virus to a susceptible victim. Mosquitoes are common vectors, as are ticks and rodents. |
CHAPTER 4: The King's Soldiers
The success of the Carignan-Salières Regiment ensured an era of peace and prosperity in New France. The colonists could finally settle down to their tasks without having to fear constantly for their lives. The forts along the Richelieu not only inhibited all movement from the south but also provided bases from which to carry war into the heart of Iroquois country. In other words, the initiative had passed into the hands of the French. The routes to the West and its territory rich in fur lay open to their explorers and traders. Finally, the nations annihilated by the Iroquois were replaced by Ottawas, Ojibwas and Algonquins as trading partners and military allies. The military campaigns had indeed bestowed enormous benefits on New France. |
Law of Sines
- The law of sines is a tool commonly used to help solve arbitrary triangles. It is a formula that relates the sine of a given angle to its opposite side length.
Basic Description
In any triangle, there is a relationship between the measures of the angles and the lengths of the sides: the largest angle is opposite the longest side, the second-largest angle is opposite the second-longest side, and the smallest angle is opposite the shortest side.
The law of sines is an equation that more precisely expresses this relationship between the angles of a triangle and the length of their opposite sides. The law of sines states that the ratio between a length of one side of a triangle and the sine of its opposite angle is equal for all three sides. Specifically:
Given a triangle with side lengths $a$, $b$, and $c$ and opposite angles $A$, $B$, and $C$,

$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$$
The law of sines is used to find all of the lengths of the sides and the angle measures for an arbitrary triangle given only some of this information. This process is called solving a triangle. To use the law of sines in solving triangles, at least three elements of a triangle must be known. Whenever a side length and two angles are given, the law of sines can be used to solve the triangle.
In some cases, the law of sines can provide multiple solutions to a triangle. If two adjacent side lengths are given with one of the opposite angles, the law of sines cannot definitively determine the triangle, but instead offers zero, one, or two possible solutions in what is known as the ambiguous case.
The law of sines does not help with solving a triangle in several cases. With two known side lengths and the measure of the angle between, there is no way to use the law of sines to solve the triangle because no pair of opposite angle measure and side length is provided. The law of sines by itself is also not able to provide solutions when three side lengths are provided. Instead, the law of cosines is often used for solving triangles in these cases.
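To make this procedure concrete, here is a minimal Python sketch (not part of the original page; the function names and sample values are illustrative). It solves the two-angles-and-a-side case directly and returns every valid solution in the ambiguous side-side-angle case:

```python
import math

def solve_aas(A, B, a):
    """Solve a triangle given angles A, B (in degrees) and side a opposite A."""
    C = 180.0 - A - B                       # the angles of a triangle sum to 180 degrees
    k = a / math.sin(math.radians(A))       # common ratio a/sin A = b/sin B = c/sin C
    b = k * math.sin(math.radians(B))
    c = k * math.sin(math.radians(C))
    return C, b, c

def solve_ssa(a, b, A):
    """Ambiguous case: sides a, b and angle A (degrees) opposite side a.
    Returns a list of 0, 1, or 2 solutions as (B, C, c) tuples."""
    sin_B = b * math.sin(math.radians(A)) / a        # from a/sin A = b/sin B
    if sin_B > 1.0:
        return []                                    # no triangle has this combination
    solutions = []
    for B in {math.degrees(math.asin(sin_B)),        # acute candidate for B
              180.0 - math.degrees(math.asin(sin_B))}:  # obtuse candidate for B
        C = 180.0 - A - B
        if C > 0:                                    # the remaining angle must be positive
            c = a * math.sin(math.radians(C)) / math.sin(math.radians(A))
            solutions.append((B, C, c))
    return solutions
```

For example, `solve_ssa(6, 8, 35)` returns two triangles, illustrating the ambiguous case described above, while `solve_ssa(2, 8, 35)` returns none.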
A More Mathematical Explanation
Two Derivations
There are at least two different ways to derive the law of sines: using the area formula and using the definition of sine.
Using the Area Formula
The formula for the area of a triangle uses the lengths of the base and height. By using these lengths and the angle measures of a triangle, we can derive the law of sines.
A triangle can be oriented so that any one side can be used as the base. Depending on which side is chosen as the base of the triangle, the height may be different. Let $h_a$ be the height when the side of length $a$ is the base. When $a$ is the base, $h_a$ is the distance from the vertex $A$ to the opposite side, such that $h_a$ is perpendicular to the side. Likewise, when $b$ is oriented as the base of the triangle, $h_b$ runs perpendicular to side $b$ and is the distance from side $b$ to the vertex $B$.

First, we must determine the height of the triangle for each orientation of the base. In any triangle,

$$\text{Area} = \frac{1}{2}(\text{base})(\text{height})$$

Since the area of the triangle is the same no matter how the triangle is oriented, the area of the triangle with $a$ as the base is the same as the area of the triangle with $b$ as the base. Substituting the formula for the area of a triangle,

$$\frac{1}{2} a h_a = \frac{1}{2} b h_b$$

Both $h_a$ and $h_b$ can be written in terms of side lengths and angles, as shown in the "More on Height" section: $h_a = c \sin B$ and $h_b = c \sin A$. Therefore, we can substitute $c \sin B$ for $h_a$ and $c \sin A$ for $h_b$, giving us

$$\frac{1}{2} a c \sin B = \frac{1}{2} b c \sin A$$

Multiplying both sides by $2$ and dividing by $c$ gives us

$$a \sin B = b \sin A$$

Then, rearranging once more gives us our equation in its most common form,

$$\frac{a}{\sin A} = \frac{b}{\sin B}$$

Since we can orient the base differently and go through the same process with other variables, we know that $\frac{b}{\sin B} = \frac{c}{\sin C}$, so

$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$$

which is the law of sines.
Using the Definition of Sine
We know that, in a right triangle,

$$\sin \theta = \frac{\text{opposite}}{\text{hypotenuse}}$$

Drop a height $h$ from the vertex $C$ perpendicular to side $c$, splitting the triangle into two right triangles. Letting $h$ represent the height and $a$, $b$ represent the lengths of the sides opposite $A$, $B$, respectively, plug in the appropriate measures to solve for $h$. In the right triangle containing angle $A$,

$$\sin A = \frac{h}{b}$$

Clearing the fractions,

$$h = b \sin A$$

In the right triangle containing angle $B$, $\sin B = \frac{h}{a}$; clearing the fractions,

$$h = a \sin B$$

Set both equations for $h$ equal to each other to get

$$b \sin A = a \sin B$$

Divide both sides by $\sin A \sin B$ for

$$\frac{a}{\sin A} = \frac{b}{\sin B}$$

Since we can go through the same process using a different angle and different variables, we know that $\frac{b}{\sin B} = \frac{c}{\sin C}$, so

$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$$
A Geometric Extension
For every triangle, there is some circle on which all three vertices of the triangle lie. Such a triangle is known as an inscribed triangle, and the circle is known as the circumcircle or circumscribed circle.

By the extended law of sines,

$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2R$$

where $R$ is the radius of the circumcircle.
Let there be two inscribed triangles on a circle of radius $R$. Let $\triangle A'BC$ be a right triangle whose hypotenuse $A'B$ goes through the center of the circle. Let $\triangle ABC$ be an oblique triangle that shares side $BC$ with $\triangle A'BC$.

Angle $A'$ is equal to angle $A$ because they are both inscribed angles that cut the same arc $BC$. According to properties of inscribed angles, two inscribed angles that cut the same arc in circles of the same radius are equal. Since $A$ and $A'$ are the same, so are $\sin A$ and $\sin A'$.

In the right triangle, $\sin A' = \frac{a}{A'B}$. Substituting $A$ for $A'$ gives us

$$\sin A = \frac{a}{A'B}$$

Solving for $A'B$ gives us

$$A'B = \frac{a}{\sin A}$$

Since $A'B$ is the diameter,

$$2R = \frac{a}{\sin A}$$

since $a$ is the length of the side opposite $A$.

By the law of sines, we know that

$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$$

and therefore, by the transitive property,

$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2R$$
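As a quick numeric check of the extended law (an illustrative snippet, not from the original page): an equilateral triangle with side length 1 has three 60-degree angles, so its circumradius should come out to $1/\sqrt{3} \approx 0.577$.

```python
import math

# For any triangle, a / sin A equals the circumcircle's diameter 2R.
a, A = 1.0, 60.0                     # side length and opposite angle (degrees)
R = a / (2 * math.sin(math.radians(A)))
print(round(R, 3))                   # 0.577, i.e. 1/sqrt(3)
```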
The Eighteenth Amendment to the Constitution was ratified on this day in 1919, marking the end of one era and the beginning of another. Outlawing the “manufacture, sale and transportation of intoxicating liquors,” the amendment represented a victory for those in the temperance movement who believed many of society’s evils could be traced to the prevalence of alcoholic beverages.
There were more saloons than schools, hospitals, libraries, parks, or churches. Beer drinking was so popular that by 1910, the annual per capita consumption had risen to 21 gallons, with those prone to violence tending to drink more than their fair share of that average. So how was the ban on intoxicating beverages passed by Congress and ratified by the states so rapidly? World War I played a large role. The majority of brewers and a great percentage of beer drinkers were of German descent, and even though the companies might now be run by second- and third-generation Americans, prohibition supporters gained a lot of ground by arguing that it was unpatriotic to support the beer industry.
But while the old saloons closed and the word virtually disappeared from our language, inebriation did not. It just became more dangerous.
The saloons didn’t close right away, of course. The Amendment set a one year deadline before the ban would take effect, and those with stores of wine and spirits began hiding, hoarding and acquiring more. Supporters of the ban, which had temporarily been in effect during World War I, believed Americans would see the benefits of an alcohol-free society and soon there would be little need to enforce the law. They estimated enforcement costs at five million dollars a year.
They were wrong. By 1923, Congress was being asked for over $28 million for enforcement, and estimates ran as high as $300 million a few years later.
Americans did not willingly give up their intoxicating beverages. They hid them. And they changed them.
Alcohol had to be smuggled in or made in secret. Those who risked breaking the law to import the contraband wanted to maximize their profits. So they concentrated on distilled spirits, with a much greater percentage of alcohol. Profits were further extended by cutting the booze with other substances to make it go farther. Water would be obvious and noticed by patrons, so they often used something to give the drink a little kick, like industrial methanol or even embalming fluid.
Creative bartenders mixed the bad booze with fruit juice and sugary syrups to mask the taste.
But the poisonous effects were not as easy to hide. By 1926, 750 people in New York alone had been killed by toxic cocktails, and thousands more were paralyzed or blinded by the concoctions downed in speakeasies and other underground drinking establishments.
And then there was the problem of the mobs controlling the supply. Instead of reducing crime, the ban on alcohol created an ideal incubator for the birth of organized crime. The ban was repealed by the 21st Amendment in 1933. It was called “the failed experiment.”
Many people draw parallels between the 1920s prohibition on alcohol and today's ban on illegal drugs such as marijuana. There is no doubt that today's illegal drug industry supports organized crime now just as the illegal bootlegging operations supported crime bosses in the 1920s. Would repealing the ban reduce the strength of the drug lords enough to outweigh the harm that might be done from easier access to such drugs? Hard to say without a crystal ball.
I for one am glad I was born after 1933. I’d hate to live in a world without champagne and margaritas.
Newspaper image courtesy of the Anti-Saloon League Museum, http://www.wpl.lib.oh.us/AntiSaloon/
Information Processing is about how you process, understand, and make sense of new concepts, ideas, and materials, and how you draw conclusions from them. Here are a few questions to ask yourself:
- Can you imagine analogies that aid in your memory?
- Can you reason from hypotheses to form conclusions?
- Do you summarize or paraphrase class reading assignments?
- Do you try to relate class presentation material to things you already know?
- Do you take effective notes during lecture? From your books?
- Do you have trouble remembering information or recalling facts?
Success in College depends on your:
- Ability to use organizational strategies and reasoning skills to connect what is already known with what is being learned
- Knowledge acquisition
- Knowledge retention
If you have problems with information processing, you may want to:
- Find techniques to help you make information personally meaningful
- Store information in ways that heighten accessibility
- Complete the online module for information processing
Students should review these online presentations:
- Surviving Large Lectures
- How Can I Remember Everything
- Effective Note Taking Techniques
- Reading Techniques that Make Sense – SQ4R
- Preparing for Academic Success
There are four (4) areas critical to information processing:
- Active Listening
- Good Lecture Notes
- Reading Techniques
- Highlighting in Your Textbook
Good Information Processing involves – Active Listening
Because most classes involve lectures, listening skills are critical for success in college. Listening is not merely hearing a speaker; it is comprehending what is being said and absorbing its meaning. Such intentional, careful attention is called “active listening.”
Active listeners:
- Sit in front of the room
- Sit up straight
- Look at the speaker
- React to what is being said
- Ask questions and listen to the answers
- Identify the main idea (what’s the most important point of the lecture?)
- Listen for major details (what supports the main point?)
- Note the key words, especially if they are unfamiliar
- Paraphrase the information when writing it down
Active listeners do not:
- Allow themselves to become distracted
- Do rote (mindless) note-taking
- Emotionally reject the subject or speaker
Good Information Processing involves – Taking Good Lecture Notes
To help you better understand and remember the content of lectures, record a speaker’s ideas while they are being presented. There are several available note taking methods including: Cornell, Charting, Mapping, Outline and Sentence. Links to these examples are also provided below.
Whichever note taking method you decide is best for you, these 8 items are essential:
- Full-sized, three-ring notebooks are best for containing all lecture notes, handouts, and notes from the text and readings. Why? Pages can be arranged chronologically with pertinent handouts inserted into lecture notes for easy reference. If you miss a lecture, you can easily add the missing notes. Course materials are together in one notebook.
- Date and number your note pages and your handouts. It will help with continuity.
- Give yourself plenty of blank spaces in your notes, as well as plenty of room to write. This will allow you to make additional notes, sketch helpful graphics, or write textbook references. Your notes will be easier to read if you write in pen and use only one side of the paper.
- Law-ruled or summary-margin paper is helpful with its three-inch margin on the left side of the page. If you can't find this paper, draw the margin on each piece of paper. This sets you up for using the Cornell format of note taking. Write your notes on the right side of the line. After the lecture, use the left margin for key words or phrases, or sample questions when you review the notes.
- Take as many notes as you can. If you miss something, leave a space; you may be able to fill in the blanks later. Do not stop taking notes if you are confused or if you want to ponder a particular concept. You will have time for that later. Abbreviations are extremely helpful. Suggestions for abbreviations are listed in this section.
- It may be difficult to make your notes look great or to have them extremely organized as you write them. Work with your notes as soon after class as possible when your recall is at its best. You may be able to fill in some blanks. Color coding can bring some organization to your notes. For example, identify concepts and categories by highlighting items with a particular color. If you still have problems organizing your notes, begin to formulate a specific question for your professor or study groups.
- As you review your notes, look at the information as answers to questions. As these questions become clearer to you, jot down the questions in the left margin. You may also write key words or phrases in the left hand margin that cue your recall of definitions, theories, models, or examples. Now you are ready to try to recall the information in your notes. Cover the right side of your notes, leaving only these cues (whether there are questions or key words) to test yourself.
- As you begin to put the material of the course together, add a somewhat generic question - WHY? - to your answers. You need to know why any particular answer is correct. You need to know why the information is pertinent to the course. This will prepare you for essay exams as well.
Good Information Processing Means Possessing – Reading Techniques
Reading can be challenging! Like any other skill, such as playing the piano or basketball or working algebra problems, spending time developing this skill will eventually make reading easier. The more you do it, the simpler it gets, and the more enjoyable it becomes. The most common reading technique is SQ4R, which is linked below. There are also other downloadable documents related to reading.
Whichever reading technique you determine works best for you, here are some basic guidelines you will need to follow.
Give Yourself Enough Time
Because essays always propose a line of reasoning, if you stop in the middle, you run the risk of forgetting what came before. Not only do you have to read the whole essay, but you have to understand it too. A large part of that understanding involves following the process of the author’s reasoning. So, give yourself plenty of time to read completely through the assignment.
Use All Available Study Aids
If you are reading from a textbook, make good use of all the study aids the author or editor(s) offer. Read the introductions, summaries, glossaries, and indexes. Examine the study questions, take advantage of any section headings, margin notes and boxed passages if your textbook offers them. All of these are instructional features that can help you read the book more easily. Take advantage of them!
Grant All Ideas a Fair Hearing
One good rule to follow when you are reading is what’s called the “Principle of Charity.” If your instructor asked you to read the material, he/she most likely thinks that there is something valuable to be learned from the essay. Be charitable. Grant all ideas a fair hearing, even if (and especially if) you do not agree with them. People have the most trouble understanding and remembering ideas they disagree with, so this is something to work on.
Read and Reread
You can rarely read an essay just once and completely understand it. Some writings demand careful, slow and repeated reading. Reread as often as you need to, to understand what the author is saying. However, don’t spend so much time rereading the passage that you get discouraged.
Change Your Surroundings
If you are experiencing a great deal of frustration or difficulty with your reading, consider finding a new place to read. If you are tired, distracted, uncomfortable, hungry, thirsty or whatever, you may have difficulties with your reading. The better you can make the atmosphere, the better your comprehension is likely to be.
Read Actively
Always read actively; that is, you must constantly ask yourself: What is the main point? Why did the author just say that? What are the author's reasons for believing this? Do I agree or disagree with this point? Keep a pencil, a highlighter, a pad of sticky notes, or a note pad handy. Mark passages that seem important or passages that you don't understand. However, don't highlight every sentence!
Keep A Dictionary
Merriam-Webster’s Collegiate Dictionary is a good comprehensive dictionary that can often be found on sale for a reasonable price. A paperback pocket dictionary will not be adequate. Many of the authors are from a multitude of scholarly areas and tend to use large and sometimes obscure words. So using a good dictionary is critical.
Website: www.dictionary.com is a great online resource and quick to access.
Stop And Summarize What You Have Read
After you finish a section or a page, pause and see if you can restate what the author is saying in your own words. As you read, stop regularly, close your eyes, and mentally summarize the main points of what you have read. If you are ambitious, actually writing your summary down is even better, since it helps you remember what you've read.
Look for the Essay’s Main Point
On your first reading of the essay, you should be looking for the author's conclusions. Ask yourself: What is the author trying to prove? Just grasping the main point is a large enough part of the battle. If there are passages or details that you find particularly difficult even after reading them several times, skip over them for now; you will likely understand them better once you grasp the main point.
Identify the Essay’s Premise
Once you understand the point or points the author is trying to prove, you need to figure out what the author's reasons are. On your second reading, ask yourself: Why does the author think the conclusion is true? As a rule, all essays offer a chain of ideas, or premises. Premises are meant to provide reasons leading to the overall conclusion. The primary task in reading is to identify the author's premises and conclusions.
Talk to Your Instructor
If you still do not understand an essay after following all these suggestions, then you should consult your instructor. Your instructor is one of your most important resources and is more than happy to help you clarify, or just chat about, your readings.
For more information on how to use SQ4R, and other reading methods, click on the documents below:
SQ4R Reading Method
Speed Reading Techniques
Adapted from Anne Michaels Edwards “Writing to Learn.” 2000.
Highlighting In Your Textbook
Mark Your Textbooks!
Highlighting information in your textbooks is a great way to gauge your understanding of the material. Oftentimes, students get highlighter-happy and entire pages get a splash of color. This is NOT the best way to determine what is most important; when every word, paragraph, and page is marked, nothing stands out. So, to help the happy highlighters, here are nine (9) basics for proper highlighting in your textbook.
- Finish reading before marking. Never mark until you have finished reading a full paragraph or headed section and have paused to think about what you just read. The procedure will keep you from grabbing at everything that looks important at first glance.
- Be extremely selective. Don’t underline or jot down so many items that they overload your memory or cause you to try to think in several directions at once. Be stingy with your markings, but don’t be so brief that you’ll have to read through the page again when you review.
- Use your own words. The jottings in the margins should be in your own words. Since your own words represent your own thinking they will later be powerful cues to the ideas on the page.
- Be brief. Underline brief but meaningful phrases, rather than complete sentences. Make your marginal jottings short and to the point. They will make a sharper impression on your memory, and they will be easier to use when you recite and review.
- Be swift. You don’t have all day for marking. Read, go back for a mini-overview, and make your markings. Then attack the next portion of the chapter.
- Be neat. Neatness takes conscious effort, not time. Later when you review, the neat marks will encourage you and save time, since the ideas will be easily and clearly perceived.
- Organize facts and ideas under categories. Items within categories are far more easily memorized than random facts and ideas.
- Try cross-referencing. For example, if you find an idea on page 64 that has a direct bearing on an idea back on page 28, draw a little arrow pointing upward and write “28” by it. Then turn back to page 28 and alongside the idea there, draw an arrow pointing downward and write “64” by it. In this way you’ll tie the two ideas together, in your mind and in your reviewing.
- Be systematic. There are many ways to mark the text: single and double underlines; the use of asterisks, circling, boxing for important items; and the use of top and bottom margins for longer notations. If some of these ideas appeal to you, work them into your marking system, one or two at a time. But use them consistently so you will remember what they mean at review time.
Using graphic organizers is another way for students, especially visual learners, to process information.
Our heart is a two-sided pump. Its function is to maintain the blood flow between the lungs, where blood is oxygenated, and the body tissues, where the oxygen nourishes the tissues' cells.
Our heart is made up of four chambers. The chambers are compartments like small rooms whose function is to store the blood and then push it outside.
The two upper chambers, the atria, receive the blood returning into the heart through the veins. The two lower ones, the ventricles, pump the blood to the body through the arteries.
The right atrium receives all the returning blood from the upper and lower parts of the body. It then transfers this blood through the tricuspid valve to the right ventricle, which then pumps it through the pulmonary valve out to the lungs. In the lungs, carbon dioxide is exchanged for oxygen, then the blood returns to the left atrium, which transfers it though the mitral valve into the left ventricle.
The left ventricle then pumps the blood through the aortic valve out to the body through the arteries, where the blood supplies tissues with oxygen and removes carbon dioxide. The blood, now depleted of oxygen, is returned to the right atrium by the veins. The left part of the heart needs to exert a stronger force than the right part; therefore the left cardiac walls are thicker and stronger.
The atrioventricular valves separate the upper from the lower chambers; one of these is called the mitral valve, which divides the left atrium and the left ventricle, and the other is the tricuspid valve, found between the right atrium and the right ventricle. The tricuspid valve is so called because it has three leaflets, while the mitral valve is called bicuspid because it has just two leaflets. The mitral valve has a complex structure: its cusps are retained by the chordae tendineae, which are linked to the papillary muscles. These structures prevent the prolapse of the valve edge into the atrium.
The outflow valves separate the heart from the two main arteries: the aorta and the pulmonary artery. The aortic valve separates the left ventricle from the aorta, the main artery that carries blood to the body. The pulmonary valve separates the right ventricle from the pulmonary artery. The aortic and the pulmonary valves are said to be semilunar because of their leaflets' shape, which are similar to a half moon. The aortic and pulmonary valves are thin structures, without any muscles; they open and close only thanks to blood pressure gradients.
The heart valves close and open in a pulsatile way, filling and emptying the atria and the ventricles. They work as doors, letting in a precise quantity of blood flow in only one direction, and preventing blood backflow. |
History -Where we came from
The unique confluence of culture and circumstance which would become today's Seminole Tribe of Florida can be traced back at least 12,000 years, say researchers. There is ample evidence that the Seminole people of today are cultural descendants of Native Americans who were living in the southeastern United States at least that long ago. By the time the Spaniards "discovered" Florida (1513), this large territory held, perhaps, 200,000 Seminole ancestors in hundreds of tribes, all members of the Maskókî linguistic family.
The first Europeans brought with them new diseases (measles, smallpox, the common cold) that killed thousands of these indigenous people. Competition for land and resources by the warring Spanish, English, and French brought further death and displacement to the natives of the region.
The Spaniards called some of these indigenous Florida people cimarrones, or free people, because they would not allow themselves to be dominated by the Europeans. The word was taken into the Maskókî language and, by the mid 1800s, U.S. citizens referred to all Florida people as "Seminoles."
Survivors of that devastating European intrusion amalgamated in the area that is now known as Florida. Early in the 18th century, the lives and homelands of many more indigenous peoples were similarly disrupted, this time by American colonization efforts. Many were Maskókî speakers, from Indian towns across Georgia and Alabama.
Creek, Hitchiti, Apalachee, Mikisúkî, Yamassee, Yuchi, Tequesta, Apalachicola, Choctaw, and Oconee were joined by escaped slaves and others in the pursuit of better lives among the thick virgin forests, wide grass prairies and spring-fed rivers of interior Florida. They shared an instinct for survival and a commonalty of purpose: refusal to be dominated by the white man. |
Rutherford was a New Zealand-born physicist, who won the Nobel Prize for Chemistry for his pioneering work in nuclear physics.
Ernest Rutherford was born on 30 August 1871 in Nelson, New Zealand, the son of a farmer. In 1894, he won a scholarship to Cambridge University and worked as a research student under Sir Joseph Thomson. In 1898, he became professor of physics at McGill University in Montreal, Canada. There, working with chemist Frederick Soddy, he investigated the newly-discovered phenomenon of radioactivity. Rutherford and Soddy proposed that radioactivity results from the disintegration of atoms.
In 1907, Rutherford returned to England to become professor of physics at Manchester University. In 1908, he was awarded the Nobel Prize in Chemistry. In 1914, he was knighted, but the war interrupted his work. He helped to develop methods of dealing with the new menace of submarine warfare, as well as studying underwater acoustics.
In 1917, he returned to physics and a long series of experiments in which he discovered that the nuclei of certain light elements, such as nitrogen, could be 'disintegrated' by the impact of energetic alpha particles coming from some radioactive source, and that during this process fast protons were emitted. This was the first artificially induced nuclear reaction. Rutherford had virtually created a new discipline, that of nuclear physics.
In 1919, Rutherford became professor of experimental physics and director of the Cavendish Laboratory at Cambridge, succeeding Thomson. Many of his students at the Cavendish Laboratory went on to become pioneering scientists. From 1925 to 1930 he was president of the Royal Society (to which he had been elected in 1903). In 1931 he was raised to the peerage, and he died on 19 October 1937. He was buried in Westminster Abbey. In 1997, element 104 was named 'rutherfordium' in his honour.
How Does A Spring Scale Work?
- To investigate Hooke's Law (the relation between force and stretch for a spring)
- To investigate Newton's Laws and the operation of a spring scale
Everybody knows that when you apply a force to a spring or a
rubber band, it stretches. A scientist would ask, "How is the force
that you apply related to the amount of stretch?" This question was
answered by Robert Hooke, a contemporary of Newton, and the answer
has come to be called Hooke's Law.
Hooke's Law, believe it or not, is a very important and
widely-used law in physics and engineering. Its applications go far
beyond springs and rubber bands.
You can investigate Hooke's Law by measuring how much known forces
stretch a spring. A convenient way to apply a precisely-known force
is to let the weight of a known mass be the force used to stretch the
spring. The force can be calculated from W = mg. The stretch of the
spring can be measured by noting the position of the end of the
spring before and during the application of the force.
- ruler or meter stick
- set of known masses
- Be sure to keep your feet out of the area in
which the masses will fall if the spring or rubber band breaks!
- Be sure to clamp the ring stand to the lab table, or weight it
with several books, so that the mass does not pull it off the table.
- You need to hang enough mass on the end of the spring to get a
measurable stretch, but too much force will permanently
damage the spring. (An engineer would say that it has
exceeded its "elastic limit".) "You break it, you buy it!"
- Assemble the apparatus as shown in the diagram at right. Be
sure to clamp the ring stand to the lab table, or weight it with
several books.
- Construct a data table. You will need to record the mass that
you hang from the spring and the position of the end of the spring
before and after the mass is added. From this, you will calculate
the force applied to the spring and the resulting stretch of the
spring. You should allow for at least 8-10 trials. (A sample data
table is shown below.)
- For each trial, record the mass, the starting position of the
spring (before hanging the mass) and the ending position of the
spring (while it is being stretched).
- Repeat the process for a rubber band.
- Calculate the force applied to the spring/rubber band in each
trial (W = mg). Use g = 9.8 m/s².
- Calculate the stretch of the spring/rubber band in each trial
(the difference in the starting and ending positions).
- Draw graphs of force versus stretch for the spring and the
rubber band. You may be able to put both graphs on the same sheet
of graph paper, depending on the data.
Hooke's Law says that the stretch of a spring is directly
proportional to the applied force. (Engineers say "Stress is
proportional to strain".) In symbols, F = kx, where F is the force, x
is the stretch, and k is a constant of proportionality. If Hooke's
Law is correct, then, the graph of force versus stretch will be a
straight line.
Do your results confirm or contradict Hooke's Law? Please
explain.
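If you record your data on a computer, a short script can do the calculations and the straight-line check for you. The sketch below is illustrative only (the masses and positions are made-up placeholder values, not real measurements); it computes W = mg and the stretch for each trial, then fits a line whose slope estimates the spring constant k:

```python
import numpy as np

g = 9.8  # m/s^2

# Placeholder sample data: hanging masses (kg) and measured positions (m)
masses = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
start = 0.300                                # position of the spring's end, unloaded (m)
end = np.array([0.325, 0.350, 0.374, 0.401, 0.424])

force = masses * g                           # applied force, W = m g (newtons)
stretch = end - start                        # stretch of the spring (meters)

# If Hooke's Law holds, force vs. stretch is a straight line F = k x,
# so the slope of a least-squares fit estimates the spring constant k.
k, intercept = np.polyfit(stretch, force, 1)
print(f"k = {k:.1f} N/m, intercept = {intercept:.3f} N")
```

A nearly zero intercept and points that hug the fitted line support Hooke's Law; a rubber band's data will typically bend away from a straight line.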
Examine a spring scale. It is a simple device that measures force
by measuring the amount that the force stretches a spring.
- Hang an object (at rest) from the spring scale. Draw a set of
diagrams that shows all of the forces that act:
- on the object
- on the spring scale
- What is the net force on:
- the object?
- the spring scale?
- Are the forces that act on the object equal and opposite? Are
they a Newton's Third Law force pair?
- Are the forces that act on the spring scale equal and
opposite? Are they a Newton's Third Law force pair?
- For each force in your diagrams (question 1), indicate its
Newton's Third Law "partner". Be sure to indicate (a) what the
force pushes or pulls on, and (b) its direction.
adapted from Robinson, Conceptual Physics Laboratory
Manual, Addison-Wesley, Experiment 15, Tug-of-War
The Silicate Class
The Silicates are the largest, the most interesting, and the most complicated class of minerals by far. Approximately 30% of all minerals are silicates and some geologists estimate that 90% of the Earth's crust is made up of silicates. With oxygen and silicon the two most abundant elements in the earth's crust, the abundance of silicates is no real surprise.
The basic chemical unit of silicates is the (SiO4) tetrahedron-shaped anionic group, which carries a negative four (-4) charge. The central silicon ion has a charge of positive four (+4), while each oxygen has a charge of negative two (-2), and thus each silicon-oxygen bond is equal to one half (1/2) the total bond energy of oxygen. This condition leaves the oxygens with the option of bonding to another silicon ion and therefore linking one (SiO4) tetrahedron to another and another, etc.
The complicated structures that these silicate tetrahedrons form are truly amazing. They can form as single units, double units, chains, sheets, rings and framework structures. The different ways that the silicate tetrahedrons combine is what makes the Silicate Class the largest, the most interesting and the most complicated class of minerals.
The Silicates are divided into the following subclasses, not by their chemistries, but by their structures:
The simplest of all the silicate subclasses, this subclass includes all silicates where the (SiO4) tetrahedrons are unbonded to other tetrahedrons. In this respect they are similar to other mineral classes such as the sulfates and phosphates. These other classes also have tetrahedral basic ionic units (PO4 & SO4) and thus there are several groups and minerals within them that are similar to the members of the nesosilicates. Nesosilicates, which are sometimes referred to as orthosilicates, have a structure that produces stronger bonds and a closer packing of ions and therefore a higher density, index of refraction and hardness than chemically similar silicates in other subclasses. Consequently, there are more gemstones in the nesosilicates than in any other silicate subclass. Below are the more common members of the nesosilicates. See the nesosilicates' page for a more complete list.
Sorosilicates have two silicate tetrahedrons that are linked by one oxygen ion and thus the basic chemical unit is the anion group (Si2O7) with a negative six charge (-6). This structure forms an unusual hourglass-like shape and it may be due to this oddball structure that this subclass is the smallest of the silicate subclasses. It includes minerals that may also contain normal silicate tetrahedrons as well as the double tetrahedrons. The more complex members of this group, such as Epidote, contain chains of aluminum oxide tetrahedrons being held together by the individual silicate tetrahedrons and double tetrahedrons. Most members of this group are rare, but epidote is widespread in many metamorphic environments. Below are the more common members of the sorosilicates. See the sorosilicates' page for a more complete list.
This subclass contains two distinct groups: the single chain and double chain silicates. In the single chain group the tetrahedrons share two oxygens with two other tetrahedrons and form a seemingly endless chain. The ratio of silicon to oxygen is thus 1:3. The tetrahedrons alternate to the left and then to the right along the line formed by the linked oxygens although more complex chains seem to spiral. In cross section the chain forms a trapezium and this shape produces the angles between the crystal faces and cleavage directions.
In the double chain group, two single chains lie side by side so that all the right sided tetrahedrons of the left chain are linked by an oxygen to the left sided tetrahedrons of the right chain. The extra shared oxygen for every four silicons reduces the ratio of silicons to oxygen to 4:11. The double chain looks like a chain of six sided rings that might remind someone of a child's clover chain. The cross section is similar in the double chains to that of the single chains except the trapezium is longer in the double chains. This difference produces a difference in angles. The cleavage of the two groups results between chains and does not break the chains thus producing prismatic cleavage. In the single chained silicates the two directions of cleavage are at nearly right angles (close to 90 degrees) forming nearly square cross sections. In the double chain silicates the cleavage angle is close to 120 and 60 degrees forming rhombic cross sections making a convenient way to distinguish double chain silicates from single chain silicates. Below are the more common members of the inosilicates. See the Inosilicates' page for a more complete list.
Single Chain Inosilicates:
The Double Chain Inosilicates:
These silicates form chains such as in the inosilicates except that the chains link back around on themselves to form rings. The silicon to oxygen ratio is generally the same as the inosilicates (1:3). The rings can be made of the minimum three tetrahedrons forming triangular rings (such as in benitoite). Four tetrahedrons can form a rough square shape (such as in axinite). Six tetrahedrons form hexagonal shapes (such as in beryl, cordierite and the tourmalines). There are even eight-membered rings and more complicated ring structures. The symmetry of the rings usually translates directly to the symmetry of these minerals, at least in the less complex cyclosilicates. Benitoite's ring is a triangle and the symmetry is trigonal or three-fold. Beryl's rings form hexagons and its symmetry is hexagonal or six-fold. The tourmalines' six-membered rings have alternating tetrahedrons pointing up then down, producing a trigonal as opposed to a hexagonal symmetry. Axinite's almost total lack of symmetry is due to the complex arrangement of its square rings, triangle-shaped borate anions (BO3) and the position of OH groups. Cordierite is pseudo-hexagonal and is analogous to beryl's structure except that aluminums substitute for the silicons in two of the six tetrahedrons. There are several gemstone minerals represented in this group, a testament to their generally high hardness, luster and durability. Below are the more common members of the cyclosilicates. See the Cyclosilicates' page for a more complete list.
In this subclass, rings of tetrahedrons are linked by shared oxygens to other rings in a two dimensional plane that produces a sheet-like structure. The silicon to oxygen ratio is generally 1:2.5 (or 2:5) because only one oxygen is exclusively bonded to the silicon and the other three are half shared (1.5) to other silicons. The symmetry of the members of this group is controlled chiefly by the symmetry of the rings but is usually altered to a lower symmetry by other ions and other layers. The typical crystal habit of this subclass is therefore flat and platy, often book-like, with good basal cleavage. Typically, the sheets are then connected to each other by layers of cations. These cation layers are weakly bonded and often have water molecules and other neutral atoms or molecules trapped between the sheets. This explains why this subclass produces very soft minerals such as talc, which is used in talcum powder. Some members of this subclass have the sheets rolled into tubes that produce fibers, as in asbestos serpentine. Below are the more common members of the phyllosilicates. See the Phyllosilicates' page for a more complete list.
This subclass is often called the "Framework Silicates" because its structure is composed of interconnected tetrahedrons going outward in all directions forming an intricate framework analogous to the framework of a large building. In this subclass all the oxygens are shared with other tetrahedrons giving a silicon to oxygen ratio of 1:2. In the near pure state of only silicon and oxygen the mineral is quartz (SiO2). But the tectosilicates are not that simple. It turns out that the aluminum ion can easily substitute for the silicon ion in the tetrahedrons up to 50%. In other subclasses this substitution occurs to a more limited extent but in the tectosilicates it is a major basis of the varying structures. While the tetrahedron is nearly the same with an aluminum at its center, the charge is now a negative five (-5) instead of the normal negative four (-4). Since the charge in a crystal must be balanced, additional cations are needed in the structure and this is the main reason for the great variations within this subclass. Below are the more common members of the tectosilicate subclass. See the tectosilicates' page for a more complete list. |
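The Si:O ratios quoted throughout this class all follow from one piece of arithmetic: each tetrahedron contributes one silicon and four oxygens, and every oxygen shared with a neighboring tetrahedron counts only half toward that silicon. A small illustrative script (not part of the original text) reproduces the ratios from the average number of shared oxygens per tetrahedron:

```python
# Each tetrahedron contributes 1 Si and 4 O; a shared oxygen counts half.
def oxygens_per_silicon(shared):
    return 4 - 0.5 * shared

subclasses = [
    ("nesosilicates (isolated)", 0),        # 1:4, i.e. SiO4
    ("sorosilicates (pairs)", 1),           # 1:3.5, i.e. Si2O7
    ("inosilicates (single chain)", 2),     # 1:3
    ("inosilicates (double chain)", 2.5),   # 1:2.75, i.e. 4:11
    ("phyllosilicates (sheets)", 3),        # 1:2.5
    ("tectosilicates (framework)", 4),      # 1:2, i.e. SiO2
]
for name, shared in subclasses:
    print(f"{name}: Si:O = 1:{oxygens_per_silicon(shared):g}")
```

In the double chain, half of the tetrahedrons share two oxygens and half share three, so the average is 2.5 shared oxygens per tetrahedron and the ratio works out to the 4:11 quoted above.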
Hemorrhagic strokes include bleeding within the brain (intracerebral hemorrhage) and bleeding between the inner and outer layers of the tissue covering the brain (subarachnoid hemorrhage).
There are two main types of hemorrhagic strokes: intracerebral hemorrhage and subarachnoid hemorrhage. Other disorders that involve bleeding inside the skull include epidural hematomas (see Head Injuries: Epidural Hematomas) and subdural hematomas (see Head Injuries: Subdural Hematomas), which are usually caused by a head injury. These disorders cause different symptoms and are not considered strokes.
An intracerebral hemorrhage is bleeding within the brain.
Intracerebral hemorrhage accounts for about 10% of all strokes but for a much higher percentage of deaths due to stroke. Among people older than 60, intracerebral hemorrhage is more common than subarachnoid hemorrhage.
Intracerebral hemorrhage most often results when chronic high blood pressure weakens a small artery, causing it to burst. Using cocaine or amphetamines can cause temporary but very high blood pressure and hemorrhage. In some older people, an abnormal protein called amyloid accumulates in arteries of the brain. This accumulation (called amyloid angiopathy) weakens the arteries and can cause hemorrhage.
Less common causes include blood vessel abnormalities present at birth, injuries, tumors, inflammation of blood vessels (vasculitis), bleeding disorders, and use of anticoagulants in doses that are too high. Bleeding disorders and use of anticoagulants increase the risk of dying from an intracerebral hemorrhage.
An intracerebral hemorrhage begins abruptly. In about half of the people, it begins with a severe headache, often during activity. However, in older people, the headache may be mild or absent. Symptoms suggesting brain dysfunction develop and steadily worsen as the hemorrhage expands. Some symptoms, such as weakness, paralysis, loss of sensation, and numbness, often affect only one side of the body. People may be unable to speak or become confused. Vision may be impaired or lost. The eyes may point in different directions or become paralyzed. The pupils may become abnormally large or small. Nausea, vomiting, seizures, and loss of consciousness are common and may occur within seconds to minutes.
Doctors can often diagnose intracerebral hemorrhages on the basis of symptoms and results of a physical examination. However, computed tomography (CT) or magnetic resonance imaging (MRI) is also done. Both tests can help doctors distinguish a hemorrhagic stroke from an ischemic stroke. The tests can also show how much brain tissue has been damaged and whether pressure is increased in other areas of the brain. The blood sugar level is measured because a low blood sugar level can cause symptoms similar to those of stroke.
Intracerebral hemorrhage is more likely to be fatal than ischemic stroke. The hemorrhage is usually large and catastrophic, especially in people who have chronic high blood pressure. More than half of the people who have a large hemorrhage die within a few days. Those who survive usually recover consciousness and some brain function over time. However, most do not recover all lost brain function.
Treatment of intracerebral hemorrhage differs from that of an ischemic stroke. Anticoagulants (such as heparin and warfarin), thrombolytic drugs, and antiplatelet drugs (such as aspirin) are not given because they make bleeding worse. If people who are taking an anticoagulant have a hemorrhagic stroke, they may need a treatment that helps the blood clot.
Surgery to remove the accumulated blood and relieve pressure within the skull, even though it may be life-saving, is rarely done because the operation itself can damage the brain. Also, removing the accumulated blood can trigger more bleeding, further damaging the brain and leading to severe disability. However, this operation may be effective for hemorrhage in the pituitary gland or in the cerebellum. In such cases, a good recovery is possible.
A subarachnoid hemorrhage is bleeding into the space (subarachnoid space) between the inner layer (pia mater) and middle layer (arachnoid mater) of the tissue covering the brain (meninges).
A subarachnoid hemorrhage is a life-threatening disorder that can rapidly result in serious, permanent disabilities. It is the only type of stroke more common among women than among men.
Subarachnoid hemorrhage usually results from head injuries. However, hemorrhage due to a head injury causes different symptoms and is not considered a stroke.
Subarachnoid hemorrhage is considered a stroke only when it occurs spontaneously—that is, when the hemorrhage does not result from external forces, such as an accident or a fall. A spontaneous hemorrhage usually results from the sudden rupture of an aneurysm in a cerebral artery. Aneurysms are bulges in a weakened area of an artery's wall. Aneurysms typically occur where an artery branches. Aneurysms may be present at birth (congenital), or they may develop later, after years of high blood pressure weaken the walls of arteries. Most subarachnoid hemorrhages result from congenital aneurysms.
Less commonly, subarachnoid hemorrhage results from rupture of an abnormal connection between arteries and veins (arteriovenous malformation) in or around the brain. An arteriovenous malformation may be present at birth, but it is usually identified only if symptoms develop. Rarely, a blood clot forms on an infected heart valve, travels (becoming an embolus) to an artery that supplies the brain, and causes the artery to become inflamed. The artery may then weaken and rupture.
Before rupturing, an aneurysm usually causes no symptoms unless it presses on a nerve or leaks small amounts of blood, usually before a large rupture (which causes headache). Then it produces warning signs, such as the following:
The warning signs can occur minutes to weeks before the rupture. People should report any unusual headaches to a doctor immediately.
A rupture usually causes a sudden, severe headache that peaks within seconds. It is often followed by a brief loss of consciousness. Almost half of affected people die before reaching a hospital. Some people remain in a coma or unconscious. Others wake up, feeling confused and sleepy. They may also feel restless. Within hours or even minutes, people may again become sleepy and confused. They may become unresponsive and difficult to arouse. Within 24 hours, blood and cerebrospinal fluid around the brain irritate the layers of tissue covering the brain (meninges), causing a stiff neck as well as continuing headaches, often with vomiting, dizziness, and low back pain. Frequent fluctuations in the heart rate and in the breathing rate often occur, sometimes accompanied by seizures.
About 25% of people have symptoms that indicate damage to a specific part of the brain, such as the following:
Severe impairments may develop and become permanent within minutes or hours. Fever is common during the first 5 to 10 days.
A subarachnoid hemorrhage can lead to several other serious problems:
If people have a sudden, severe headache that peaks within seconds or that is accompanied by any symptoms suggesting a stroke, they should go immediately to the hospital. Computed tomography (CT) is done to check for bleeding. A spinal tap (lumbar puncture) is done if CT is inconclusive or unavailable. It can detect any blood in the cerebrospinal fluid. A spinal tap is not done if doctors suspect that pressure within the skull is increased. Cerebral angiography is done as soon as possible to confirm the diagnosis and to identify the site of the aneurysm or arteriovenous malformation causing the bleeding. Magnetic resonance angiography or CT angiography may be used instead.
About 35% of people die when they have a subarachnoid hemorrhage due to an aneurysm because it results in extensive brain damage. Another 15% die within a few weeks because of bleeding from a second rupture. People who survive for 6 months but who do not have surgery for the aneurysm have a 3% chance of another rupture each year. The outlook is better when the cause is an arteriovenous malformation. Occasionally, the hemorrhage is caused by a small defect that is not detected by cerebral angiography because the defect has already sealed itself off. In such cases, the outlook is very good.
Some people recover most or all mental and physical function after a subarachnoid hemorrhage. However, many people continue to have symptoms such as weakness, paralysis, or loss of sensation on one side of the body or aphasia.
People who may have had a subarachnoid hemorrhage are hospitalized immediately. Bed rest with no exertion is essential. Analgesics such as opioids (but not aspirin or other nonsteroidal anti-inflammatory drugs, which can worsen the bleeding) are given to control the severe headaches. Stool softeners are given to prevent straining during bowel movements. Nimodipine, a calcium channel blocker, is usually given by mouth to prevent vasospasm and subsequent ischemic stroke. Doctors take measures (such as giving drugs and adjusting the amount of intravenous fluid given) to keep blood pressure at levels low enough to avoid further hemorrhage and high enough to maintain blood flow to the damaged parts of the brain. Occasionally, a piece of plastic tubing (shunt) may be placed in the brain to drain cerebrospinal fluid away from the brain. This procedure relieves pressure and prevents hydrocephalus.
For people who have an aneurysm, a surgical procedure is done to isolate, block off, or support the walls of the weak artery and thus reduce the risk of fatal bleeding later. These procedures are difficult, and regardless of which one is used, the risk of death is high, especially for people who are in a stupor or coma. The best time for surgery is controversial and must be decided based on the person's situation. Most neurosurgeons recommend operating within 24 hours of the start of symptoms, before hydrocephalus and vasospasm develop. If surgery cannot be done this quickly, the procedure may be delayed 10 days to reduce the risks of surgery, but then bleeding is more likely to recur because the waiting period is longer.
A commonly used procedure, called neuroendovascular surgery, involves inserting coiled wires into the aneurysm. The coils are placed using a catheter that is inserted into an artery and threaded to the aneurysm. Thus, this procedure does not require that the skull be opened. By slowing blood flow through the aneurysm, the coils promote clot formation, which seals off the aneurysm and prevents it from rupturing. Neuroendovascular surgery can often be done at the same time as cerebral angiography, when the aneurysm is diagnosed.
Less commonly, a metal clip is placed across the aneurysm. This procedure prevents blood from entering the aneurysm and eliminates the risk of rupture. The clip remains in place permanently. Most clips placed 15 to 20 years ago are affected by the strong magnetic fields used in magnetic resonance imaging (MRI) and can be displaced during a scan. People who have these clips should inform their doctor if MRI is being considered. Newer clips are not affected by magnetic fields.
Last full review/revision November 2007 by Elias A. Giraldo, MD, MS |
First Mars was the setting of imaginary declining civilizations; then it was a dead, cratered, Moon-like world. Thanks to a coordinated Mars exploration program that began in 1996 and continues to the present day, we now know Mars better than any world other than our own, yet we have more questions than ever.
Geologically, Mars is quiescent, but its atmosphere breathes and changes from year to year, interacting in complex ways with the water sequestered in Mars' ice caps and permafrost. Water does not, today, flow on Mars, but it evidently has in the past, and it may flow again in the future when Mars' rotation axis tilts much more steeply. Did Mars ever look like Earth, or has it always been as cold and dry as an Antarctic desert? Has there ever been the right combination of liquid water, available energy, and time to permit life to begin on Mars?
Latest Blogs from Mars
A recently published paper proposes that much of the sedimentary rock on Mars formed during rare, brief periods of very slight wetness under melting snow. |
Basic information about memory chips and programming
We receive frequent inquiries about memory chips, and they show that there is still a great need for information in this area. For this reason, we are publishing here the basics of programming memory chips such as eproms, eeproms and Flash chips. In particular, we will discuss the various types of memory chips and compare what, for instance, the 27C, 28C or 29F series can and cannot do.
- What is a memory chip?
- Organization of a memory chip
- EPROM memory chips (27 / 27C...)
- EEPROM memory chips (28C...)
- FLASH EPROMS (28F..., 29C..., 29F...)
- Serial EEPROMS (24C..., 25C..., 93C...)
- RAM (52..., 62...)
- NVRAM (48Z..., DS12..., XS22...)
- Erasure of eproms with ultraviolet light
- Memory chip names and how to find replacement chips
A memory chip is an electronic component in which a program, data or both can be stored. In this context, a program is a series of commands (command string) for a microprocessor (= computing unit). Data could consist, for instance, of temperature values taken by a temperature measurement system, or any other data.
The program or data is stored in the memory chip as a series of numbers - zeros and ones (= bits). A bit can be either a zero or a one. Individual bits are difficult for a person to keep track of; therefore, they are gathered into groups: sixteen bits form a "word", eight bits a "byte" and four bits a "nibble".
The most commonly used unit is the byte, which contains 8 bits and can take 2 to the 8th power = 256 different values. To represent these, the hexadecimal number system is used. This is a base-16 system, using the digits 0 to 9 and, additionally, A to F. Two hexadecimal digits can therefore also represent 256 values (from 00h to FFh, where the small "h" simply marks the number as hexadecimal). Those who need more detailed information about number systems are referred to other suitable sources.
The terms kilo and mega with regard to bytes were also adapted to the binary nature (zero or one) of the digital systems. Here, kilo means 1024 (= 2 to the 10th power) and mega means 1024 * 1024 = 1048576. Therefore a kilobyte is 1024 bytes and a megabyte is 1048576 bytes.
For 8-bit memory chips (the most common type), the bits are grouped into a byte (= 8 bits) and stored under an "address". A byte is accessed via this address, and the eight bits stored there are then output on the chip's eight data ports. For example, an 8-megabit chip like the 27C801 contains altogether 1048576 bytes (= 8388608 bits). Each byte has its own address, numbered from 00000h through FFFFFh (corresponding to the decimal 0 to 1048575).
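To make these numbers concrete, here is a small illustrative sketch (the 27C801 figures are the ones from the text above; the code is only a demonstration, not part of any programming tool):

```python
# Byte values, hexadecimal notation and chip addressing, as described above.

KILO = 2 ** 10        # kilo = 1024 in this binary usage
MEGA = KILO * KILO    # mega = 1,048,576

# A byte (8 bits) can take 2**8 = 256 values, written 00h..FFh.
for value in (0, 171, 255):
    print(f"{value:3d} decimal = {value:02X}h")

# An 8-megabit chip such as the 27C801 holds 8 Mbit / 8 = 1 MByte.
total_bytes = 8 * MEGA // 8
print(f"{total_bytes} bytes, addresses 00000h..{total_bytes - 1:05X}h")
# -> 1048576 bytes, addresses 00000h..FFFFFh
```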
Aside from the 8 bit memory chips, there are also 16 bit memory chips, serial 1-bit memory chips and (rarely/old) 4 bit chips.
"Erasable" means that the data on it can be removed. With these chips, erasure is carried out by exposure to intensive ultraviolet light in the area of 254 nm wavelength. We deal with erasing eproms with UV-C light in further detail below.
"Programmable" means that a program or data can be programmed (burned) into this chip. For programming, a programming device such as the Batronix Eprommer or the Galep-4 is required.
"Read Only Memory" means that this type of memory can be read out but not programmed in the target device.
This memory type can be burned (programmed) by a programming device and then retains its data until an erasing device erases it. During the programming process, any number of bits can be changed from one to zero. Eproms can therefore also be programmed repeatedly without being erased, as long as bits are only changed from one to zero or remain zero. To change a bit from zero back to one, erasure is necessary.
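A minimal sketch of this rule (again only an illustration): a byte can be burned over an existing one without erasure only if no bit has to go from zero back to one.

```python
def can_program_without_erase(old: int, new: int) -> bool:
    """Programming can only pull bits from 1 to 0, so `new` is reachable
    from `old` only if every 1-bit in `new` was already 1 in `old`."""
    return (old | new) == old

print(can_program_without_erase(0xFF, 0xA5))  # True: a blank eprom is all ones
print(can_program_without_erase(0xA5, 0x00))  # True: only 1 -> 0 changes
print(can_program_without_erase(0x00, 0x01))  # False: UV erasure needed first
```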
Since the quartz glass window required for erasure with UV-C light accounts for a large part of the chip's production costs, these chips are available with and without the window. Without the window, the chip cannot be erased using UV-C light. Eproms with windows are also called UV eproms; those without windows are called OTP (= One Time Programmable) eproms.
After programming an eprom that is erasable with UV-C light, the glass window should be covered with a sticker so that no sunlight can enter. Sunlight also contains components of UV-C light and can eventually erase the data from the eprom.
In the name of an EPROM, the "C" after the 27 indicates that it is a CMOS EPROM (CMOS = Complementary Metal Oxide Semiconductor). These consume far less power than the old NMOS EPROMs (NMOS = N-channel Metal Oxide Semiconductor) and can work with lower programming voltages (12.5 volts). Since the two chip types are otherwise compatible, an old NMOS EPROM can be replaced with a CMOS EPROM of the same size (e.g. a 2764 can be replaced by a 27C64).
The name EEPROM stands for Electrically Erasable Programmable Read Only Memory. These are constructed like EPROMs but allow individual bytes or the entire memory to be erased electrically, without UV light. Since individual bytes can be erased without erasing everything, individual bytes can, in effect, be overwritten. However, the burning process takes distinctly longer on an EEPROM than on an EPROM - up to several milliseconds per byte. To compensate for this disadvantage, EEPROMs such as the AT28C256 were equipped with a block-programming function: 64, 128 or 256 bytes at a time are loaded into the memory chip and programmed simultaneously as a block. This shortens programming times considerably.
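A sketch of the block idea, assuming a 64-byte block size and a hypothetical `write_block` routine standing in for the actual programming hardware:

```python
BLOCK_SIZE = 64  # AT28C-style chips use 64-, 128- or 256-byte blocks

def program_blockwise(data: bytes, write_block) -> None:
    """Hand the data to the programmer one whole block at a time,
    instead of burning each byte individually."""
    for offset in range(0, len(data), BLOCK_SIZE):
        write_block(offset, data[offset:offset + BLOCK_SIZE])

# Example: a 300-byte image needs only 5 block operations.
calls = []
program_blockwise(bytes(300), lambda addr, chunk: calls.append(addr))
print(len(calls), "block writes instead of 300 single-byte writes")
```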
The additional internal cost for electrical erasure as well as the block writing function, if desired, makes the EEPROMS more expensive than the EPROMS.
These chips can be erased electrically - completely or block by block - and some, like the AT28C... EEPROMs, can be programmed block by block as well. Flash EPROMs, however, cannot always be used as a replacement for a normal eprom. One reason is that Flash eproms, even those with small memory capacities, are only available in housings with 32 or more pins. A 28F256 with 32 pins is therefore not pin-compatible with a 27C256, which has 28 pins and the same memory capacity.
With these chips, "serial" means that addressing and data transfer take place bit by bit (= serially). Only one bit can be accessed at a time, and the address must likewise be communicated bit by bit, but this has the major advantage that a serial EEPROM fits in a small 8-pin housing. These chips are therefore popular when space or wiring is to be saved and no large amounts of data or high speeds are required.
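The following sketch only illustrates what "bit by bit" means; the real 24C/25C/93C families each use a specific bus protocol (I2C, SPI or Microwire) with considerably more to it than this:

```python
def shift_out_msb_first(value: int, bits: int):
    """Yield a value one bit at a time, most significant bit first,
    the way an address is clocked into a serial EEPROM."""
    for i in reversed(range(bits)):
        yield (value >> i) & 1

# Sending the 16-bit address 01A3h costs 16 clock pulses on one pin,
# where a parallel chip would present all address lines at once.
print(list(shift_out_msb_first(0x01A3, 16)))
```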
The name RAM stands for "Random Access Memory" (= memory with selectable access). These memory devices can be written to very quickly (one generally speaks of writing rather than burning here), and each byte can be overwritten just as quickly and easily, i.e. it does not need to be erased first. The disadvantage of this technology is that the chips lose their stored data when the power supply is cut off.
The name NVRAM stands for Non Volatile Random Access Memory. These chips have the major advantages of the RAM chips (very high speed and easy overwriting of existing data) and retain their data when power is cut off.
This can be achieved in two ways. The first group removes the disadvantage of the original RAMs with a built-in battery that keeps the memory from losing its data when the power is cut off. Depending on the type, the manufacturers state that such batteries last around ten years.
The second group contains an equally large EEPROM; when the power is cut off, all data from the RAM are stored on the EEPROM. When the power is restored, the data are copied from the EEPROM back into the RAM. The advantages of fast RAM access and easy overwriting remain.
A microcontroller is a complete system on one chip, consisting of the CPU (computing unit/microprocessor), program memory (FLASH or EPROM), working memory (RAM) and input/output ports. These chips are built into many devices as "mini-PCs" and control, for instance, printers, heaters, microwaves and alarm clocks.
With these chips, erasure takes place by exposure to intense ultraviolet light at a wavelength of around 254 nm. Since UV-C light is very dangerous to the eyes and also carcinogenic, these chips are erased in special eprom erasure devices, which only allow the light to be turned on once the housing is closed. When the housing is opened, the light is immediately switched off. Erasure takes 5 to 25 minutes, varying with light intensity and other conditions.
We have often been asked whether eproms can also be erased with a face-tanning device or something similar. This is not possible, since such devices filter out the UV-C wavelengths. Erasure using daylight, on the other hand, is possible in principle, since sunlight contains the required wavelength - but it is of no practical use, since it would take a few weeks of bright sunshine.
The name of a memory chip contains an abbreviation for the manufacturer, the technology, the memory size, the fastest permitted access speed, the temperature range, the form of housing, and further internal manufacturer's data. Different manufacturers often use very different names; however, chips with similar specifications from different manufacturers are usually compatible.
It takes practice to interpret the name of a memory chip correctly, but it generally does not take long to learn, and once learned, it is normally easy to determine a replacement type. A replacement type should use the same technology (EPROM/EEPROM/FLASH/etc.), have the same memory size, the same or a shorter access time and, if applicable, the same or a better temperature range.
For an existing memory chip, one first looks for the technology designation on the housing, e.g. 27C, 28C, 29F etc. An abbreviation for the manufacturer usually comes before it (e.g. AT for Atmel). After it comes the memory size in bits, which can be given in different ways depending on the manufacturer:
Selected possible memory sizes:
16 = 16 KBit
32 = 32 KBit
64 = 64 KBit
128 = 128 KBit
256 = 256 KBit
512 = 512 KBit
1001 or 010 = 1 MBit
2001 or 020 = 2 MBit
4001 or 040 = 4 MBit
8001, 080 or 801= 8 MBit
016 = 16 MBit
It should be noted that the memory size is given in bits, not bytes. After the memory size there may be a version letter, such as "B", and then a hyphen. After the hyphen, the fastest permitted access speed is given in nanoseconds (1 ns = 1/1,000,000,000 of a second). This is the maximum delay between the application of an address and the output of the data on the memory chip's ports. This entry takes some getting used to as well, since it is given in two digits:
Selected possible access speeds:
45 = 45 ns
60 = 60 ns
70 = 70 ns
90 = 90 ns
10 = 100 ns
12 = 120 ns
15 = 150 ns
20 = 200 ns
25 = 250 ns
After the maximum access speed, there are abbreviations for the housing type and the permitted temperature range. Since these can vary, one should check the data sheet if in doubt. Data sheets can easily be located via search engines such as www.google.com, using the chip designation plus the word "datasheet" as search terms (for example, 27c256 +datasheet).
Knowing this, the label M27C1001-10F1 now tells us that it is an eprom (=27C) with 1 MBit of memory (=1001) with an access time of 100 ns (=10) in the DIP housing (=F) with a permissible temperature range of 0 to 70 degrees Celsius (=1).
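Putting the naming rules together, here is a rough, hypothetical decoder for such labels. The size and speed tables are only the ones listed above, so the decoder is far from complete, and real manufacturers' codes vary more than this:

```python
import re

SIZES = {"256": "256 KBit", "512": "512 KBit", "1001": "1 MBit",
         "010": "1 MBit", "2001": "2 MBit", "020": "2 MBit",
         "4001": "4 MBit", "040": "4 MBit", "8001": "8 MBit",
         "080": "8 MBit", "801": "8 MBit", "016": "16 MBit"}
SPEEDS = {"45": "45 ns", "60": "60 ns", "70": "70 ns", "90": "90 ns",
          "10": "100 ns", "12": "120 ns", "15": "150 ns",
          "20": "200 ns", "25": "250 ns"}

def decode_label(label: str) -> dict:
    """Split a label like 'M27C1001-10F1' into its parts."""
    m = re.fullmatch(r"([A-Z]*)(2[4-9][A-Z]?)(\d+)[A-Z]?-(\d\d)(\w*)", label)
    if m is None:
        raise ValueError(f"label not recognized: {label}")
    maker, tech, size, speed, housing = m.groups()
    return {"manufacturer": maker, "technology": tech,
            "size": SIZES.get(size, "unknown"),
            "access time": SPEEDS.get(speed, "unknown"),
            "housing/temperature": housing}

print(decode_label("M27C1001-10F1"))
# -> manufacturer M, technology 27C, 1 MBit, 100 ns, housing/temperature F1
```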
In a further line of labeling on the memory chip, one then finds the date of manufacture (the date code). This is the year (given in two digits) and the calendar week. A chip with a date code 0109 is therefore from the 9th calendar week in 2001. |
How Likely Is That?
Shelly Berman
We use numbers to tell the measures of many different things - time, length, money, and weight are examples.
We can even use numbers to describe how likely it is that something will happen. This measure is called a probability.
We use a scale from 0 to 1 to describe the probability that something will happen.
If something ALWAYS happens, it has a probability of ONE. For example, time continues to move forward. We are certain that this is the case, so we can say the probability of time continuing is 1.
If something NEVER happens, it has a probability of ZERO. For example, no one can jump over a big building (at least not without a rocket, or spring or something). Because we are certain of this, we can say the probability of someone jumping over a building without help is 0.
Reading to your child is not something you do just to help them fall asleep. Reading to your child helps enhance their ability to learn through listening, repetition and visual stimulation. What better way to teach the importance of reading than by showing them that you read yourself? Reading to them has a positive effect on a child's attitude toward reading and on their ability to read. Help your child open up the world of reading, writing and imagination.
Why is reading important?
According to Kyla Boyse, RN: “A child’s reading skills are important to their success in school and work. In addition, reading can be a fun and imaginative activity for children, which opens doors to all kinds of new worlds for them. Reading and writing are important ways we use language to communicate.”
Choose the right book using these rules:
- Pick a random page from a book.
- If your child has problems reading more than 4 words on that page, that book might be too hard for them.
- If your child doesn't have any problems reading the book, then it is too easy, and they should pick a slightly more challenging book.
The book needs to have some challenging words, but not too many. The goal is to help the child comprehend and enjoy the book, while at the same time learn new words.
Tackling new words:
- Ask your child to sound out an unknown word.
- Help them memorize irregular words.
- Use suffixes, prefixes, and root words.
Support & Encourage:
Challenge your child to sound out new words, but always supply the word before the frustration sets in. After your child has read a story, reread it aloud yourself so that they can enjoy it without interruption. Help them understand the importance of reading.
Make reading a priority:
Set aside 10 minutes to an hour every day to read to your child or have them read to you. This will get them into a good habit of reading and help them become interested in it.
Creating the right atmosphere:
Don’t turn on the TV and distract the child. Help them find a quiet place to read. Be a good role model and read a book in front of your child while they are reading their book. That will help support the value of reading.
Make reading fun & reading aloud to your child:
Reading to your child is a simple and pleasant process. Read books beyond their reading level and build their vocabulary by exposing them to new words. Reading aloud is also a good way for you to model reading smoothly and with expression. Try to choose a new book every time you read to them; it can help keep them interested and spark their imagination. And not all reading has to be done with a book. Toys and games can provide opportunities to learn new words and the satisfaction of getting a word right.
Any way you can provide a way to get your child to learn and enjoy reading will help them as they grow and develop communication skills. |
On this day: The Wannsee conference
January 20 1942: A meeting about how to exterminate Europe’s Jews
A peaceful lakeside spot in a sleepy suburb of Berlin, an uninformed visitor to Wannsee might be quite charmed by the place.
But the villa there has a chilling history – it was there, 69 years ago, that 15 Nazi leaders coined the term “Final Solution” and coordinated the genocidal campaign it would involve.
Those gathered at the conference included Reinhard Heydrich, head of the Reich Security Main Office, which included the Gestapo; Adolf Eichmann, who took the minutes; and Dr Josef Bühler, state secretary of the General Government.
Over some 40 minutes, during which food and drink were served, the fifteen discussed the "final solution of the Jewish question in Europe". They spoke of combing Europe from west to east for Jews, and addressed the status of the Mischlinge [children of mixed marriages] and the question of what to do about Jews married to Germans.
As the Wannsee Protocol, the minutes of the meeting, show, Heydrich told those assembled that the 11 million Jews of Europe would be rounded up. The numbers included Jews in countries the Nazis had not yet invaded, from the UK to Portugal.
He said: “Any final remnant that survives will consist of the most resistant elements. These will have to be dealt with appropriately, because otherwise, by natural selection, they would form the germ cell of a new Jewish revival.”
Nearly 70 years later, the house at Wannsee still stands, as a museum dedicated to the memory of this dark and appalling point in history.
What the JC said: The leading figures in Adolf Hitler's circle met not to discuss the ways and means of war, but to plot the murder of civilians across Europe - civilians set apart by one fact only. They were Jews. The "Final Solution" came terrifyingly close to succeeding. Its ultimate defeat was a tribute to many - would-be victims, and foes of Nazism who risked their lives rather than abet Hitler's evil.
See more from the JC archives here |
6th Grade Worksheets - Summarize and describe distributions.
Display numerical data in plots on a number line, including dot plots, histograms, and box plots.
Summarize numerical data sets in relation to their context, such as by:
- Reporting the number of observations.
- Describing the nature of the attribute under investigation, including how it was measured and its units of measurement.
- Giving quantitative measures of center (median and/or mean) and variability (interquartile range and/or mean absolute deviation), as well as describing any overall pattern and any striking deviations from the overall pattern, with reference to the context in which the data were gathered (a short computational sketch follows this list).
- Relating the choice of measures of center and variability to the shape of the data distribution and the context in which the data were gathered.
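For anyone preparing or checking answer keys, here is a brief sketch of these summary measures in Python (the quartile convention shown, the median of each half, is one common middle-school convention; textbooks vary):

```python
from statistics import mean, median

def quartiles(values):
    """First and third quartiles by the median-of-halves method."""
    s = sorted(values)
    half = len(s) // 2
    lower = s[:half]                                   # below the median
    upper = s[half + 1:] if len(s) % 2 else s[half:]   # above the median
    return median(lower), median(upper)

data = [3, 4, 4, 5, 5, 6, 7, 8, 9]
q1, q3 = quartiles(data)
print("number of observations:", len(data))
print("mean:", round(mean(data), 2), " median:", median(data))
print("interquartile range:", q3 - q1)
print("mean absolute deviation:",
      round(mean(abs(x - mean(data)) for x in data), 2))
```
|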
The first drum sets were put together in the late 1800s sometime after the invention of the bass drum pedal. This invention made it possible for one person to play several percussion instruments (snare drum, bass drum, and cymbals) at one time. The set developed as it was used to accompany jazz musicians in New Orleans during the 1920s.
As new instruments were introduced to the drum set (tom-toms and the high hat cymbal) in the late 1920s and 1930s, new techniques developed. Gene Krupa, one of the greatest jazz drummers of the big band era, highlighted tom-toms in his pieces and did solos using the drum set as the featured instrument.
The drum set, also commonly referred to as drum kit, is a collection of percussion instruments which is played by one musician. It usually includes a bass drum, a snare drum, several cymbals, and tom toms. Other percussion instruments such as cowbells and woodblocks are sometimes included.
This drum is the largest member of the set and is played by using a foot pedal attached to a beater which then strikes the drum head. This drum produces a low deep sound.
This shallow, cylindrical drum produces a sound that is very distinctive to the drum (higher in pitch than the bass drum). The snares, which are bands of metal wires, are pulled across the bottom head of the drum. This produces a buzzing or snapping sound when the drum is struck using a variety of techniques.
Cymbals are made of various combinations of metals and are usually six to twenty-two inches in diameter. The most important cymbals in the drum set are:
hi-hat - this horizontally mounted pair of cymbals can either be hit with a stick or closed against each other with a foot pedal.
crash cymbal and ride cymbal - two commonly used cymbals in a drum set. Both are hit with sticks and, depending on their size, produce varied sounds.
tom-toms - a drum set usually has three tom-toms. One sits on the floor, and the other two are mounted on the bass drum.
The timpani is often called a kettledrum because it is shaped like a kettle. The timpani has a large copper or fiberglass shell with a single drumhead. It also has a pedal mechanism which allows the musician to adjust the tension of the drumhead, thereby tuning the drum to different pitches. This makes the timpani the only drum which can produce definite musical notes. To produce the deep tone of the timpani, its drumheads are hit with mallets. Mallets are made of soft and hard felt or wood and will produce different tones on the timpani. Timpani are most often played in pairs or groups of four.
OTHER PERCUSSION INSTRUMENTS
There are many instruments included in the percussion family commonly known as "toys". Some examples of these would be: cymbals, triangle, gong, maracas, tambourine, and hand drums.
Cymbals, thin round concave plates (usually made from copper-tin alloy), have been known since the Middle Ages. Often used in religious ceremonies, they became part of the orchestra around the 18th century and are played by dashing two together or by being struck separately by beaters.
The triangle is another commonly used percussion instrument. The instrument is made by bending a steel rod into a triangle shape with an opening at one corner. It is suspended by a string and struck with a steel beater to produce a tone. The instrument has been used in Europe since the 14th century.
Little known facts:
Up until the 1800s, the triangle often had jingling rings strung on it.
Franz Liszt, a Hungarian composer, included a triangle solo in his first piano concerto written in 1849.
The gong is a bronze disk which, when struck by a beater, produces a rich ringing sound. Many gongs have a central dome and a turned down outside rim. The gong has obscure origins in the Middle East or South East Asia and by the 9th century had migrated to Indonesia. The gong then made its way to Europe by the 18th century.
Maracas are egg-shaped musical rattles that are played in pairs. They originated in South America and were first made from dried gourd shells that were filled with beans or beads. A handle was attached so the gourd shells could be shaken. Today maracas are made from plastic or wood. They are often used in Latin American music.
A tambourine is a single-headed frame drum that has jingling metal disks set in its frame. It can be struck, shaken, or rubbed to produce a tone.
Little known facts:
In ancient and prehistoric times and in medieval Europe, the tambourine was traditionally a woman's instrument and continues to be so today in Islamic countries.
The xylophone is a mallet percussion instrument. It consists of a set of graduated wooden bars which are hit with mallets to produce a tone. Xylophones were used in Southeast Asia during the 1300s and spread to Africa, Latin America, and Europe.
Little known facts:
The xylophone's first orchestral use was in Danse Macabre (1874) by French composer Camille Saint-Saëns.
The harp is a stringed instrument that produces sound when its strings, which run perpendicular to the body of the instrument, are plucked. The strings run between a neck and a sound box, also known as the body or resonator. There are several types of harps, classified based on their shape:
Arched Harp - the neck and body form a bow-like curve.
Angular Harp - the body and neck form a right angle.
Frame Harp - has a third piece called a forepillar which is placed opposite the neck and body creating a triangle.
The modern orchestral harp has forty-six strings and a range of six and a half octaves, with no separate strings for accidentals. To produce sharp or flat notes, pedals, each controlling all the strings of one note name, are depressed to set positions, raising the pitch by a half step at a time.
Arched harps are the most ancient harps and date back to Sumerian and Egyptian times. Frame harps did not appear until the 9th century in Europe. Almost immediately, a new version, called the Irish harp, developed with a few adjustments which made this harp unique. Medieval harps also developed and were smaller and lighter than other harps. These Gothic harps were the ancestors of the folk harps of Latin America. Later in the harp's history, a second row of strings was added, which allowed the harp to produce a wider range of notes.
Order - Psittaciformes
Parrots are the roughly 372 species in 86 genera that make up the order Psittaciformes, found in most tropical and subtropical regions. The order is subdivided into three families: the true parrots, the cockatoos and the New Zealand parrots. Parrots have a mainly tropical distribution, with several species also inhabiting temperate regions of the Southern Hemisphere. The greatest diversity of parrots is found in South America.
Characteristic features of parrots include a strong, curved bill, an upright stance, strong legs, and clawed zygodactyl feet. Many parrots are vividly colored, and some are multi-colored. The plumage of cockatoos ranges from mostly white to mostly black, with a mobile crest of feathers on the tops of their heads. Most parrots exhibit little or no sexual dimorphism. In terms of length, they are the most variably sized order of birds.
Archaea comes from the Greek word meaning “ancient.” An appropriate name, because many archaea thrive in conditions mimicking those found more than 3.5 billion years ago. Back then, the earth was still covered by oceans that regularly reached the boiling point — an extreme condition not unlike the hydrothermal vents and sulfuric waters where archaea are found today.
Some scientists consider archaea living fossils that may provide hints about what the earliest life forms on Earth were like, and how life evolved on our planet.
Archaea can be found in many "extreme environments": highly sulfurous lakes, ice, Utah's Great Salt Lake, hot geysers like the Lonestar, and undersea hydrothermal vents.
In addition to superheated waters, archaea have been found in acid-laden streams around old mines, in frigid Antarctic ice and in the super-salty waters of the Dead Sea. A number of other extreme-living bacterial species enjoy these conditions as well, such as communities of cyanobacteria and other bacteria.
Thermophiles like unusually hot temperatures. A few species have been found to survive even above 110 degrees Celsius (water boils at 100 degrees Celsius).
Psychrophiles like extremely cold temperatures (even down to -10 degrees Celsius).
Archaea that populate extreme environments (along with some of their bacterial cousins) have developed some clever tricks and tools to do so. For example, they produce special enzymes that help keep all the parts of their cells intact even in conditions that would have our human skin falling apart.
Many archaeans thrive in conditions that would kill other creatures: boiling water, super-salty pools, sulfur-spewing volcanic vents, acidic water and deep in Antarctic ice. These types of archaea are often labeled "extremophiles," meaning creatures that love extreme conditions.
Archaeans have been found that can live in temperatures above 212°F (100°C). In contrast, no known eukaryotes can survive over 140°F (60°C). Other archaeans have been found in an Antarctic lake with a surface that is permanently frozen.
How do these extremophiles do it? They make a variety of protective molecules and enzymes (en-zimes). For example, some archaeans live in highly acidic environments. If the acid got into the archaeal cells, it would destroy their DNA, so they have to keep it out. The defensive molecules on their cellular surfaces do come into contact with the acid, but they are uniquely built not to break apart in it. Archaeans that live in very salty water keep their fluid from dissolving out of their cells by producing, or pulling in from outside, solutes such as potassium chloride that balance the inside of the cells with the salty water outside. Other enzymes allow other archaeans to tolerate extreme heat or cold.
Not all the archaea are extremophiles. Many live in more ordinary temperatures and conditions. For example, scientists can find archaeans alongside bacteria and algae floating about in the open ocean. Some archaeans even live in your guts. |
Science Fair Project Encyclopedia
The Solar System consists of the Sun and all the objects that orbit around it, including asteroids, comets, moons, and planets. The Earth is the third planet of the Solar System. "Planetary system" is the more generic term for a star and the objects that orbit around it.
Solar system objects
The wide variety of objects that exist in the solar system fall into several categories. In recent years many of these categories have been found to be less clear-cut than once thought. This encyclopedia employs the following divisions:
- The Sun is a spectral class G2 star that contains 99.86% of the system's mass.
- The planets of the solar system are those nine bodies traditionally labelled as such: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.
- Sizeable objects that orbit these planets are moons. For a complete listing, see that article.
- Dust and other small particles that orbit these planets form planetary rings.
- Space debris of artificial origin that can be found in orbit around Earth.
- Planetesimals, from which the planets were originally formed, are sub-planetary bodies that accreted during the first years of the solar system and that no longer exist. The name is also sometimes used to refer to asteroids and comets in general, or to asteroids below 10 km in diameter.
- Asteroids are objects smaller than planets that lie roughly within the orbit of Jupiter and are composed in significant part of non-volatile minerals. They are subdivided into asteroid groups and families based on their specific orbital characteristics.
- Asteroid moons are asteroids that orbit larger asteroids. They are not as clearly distinguished as planetary moons, sometimes being almost as large as their partners.
- Trojan asteroids are located in either of Jupiter's L4 or L5 points, though the term is also sometimes used for asteroids in any other planetary Lagrange point as well.
- Meteoroids are asteroids that range in size from roughly boulder sized to particles as small as dust.
- Comets are composed largely of volatile ices and have highly eccentric orbits, generally having a perihelion within the orbit of the inner planets and an aphelion beyond Pluto. Short-period comets exist with apoapses closer than this, however, and old comets that have had most of their volatiles driven out by solar warming are often categorized as asteroids. Some comets with hyperbolic orbits may also originate outside the solar system.
- Centaurs are icy comet-like bodies that have less-eccentric orbits so that they remain in the region between Jupiter and Neptune.
- Trans-Neptunian objects, which are icy bodies whose semi-major axes lie beyond Neptune's. These are further subdivided:
- Kuiper belt objects have orbits lying between 30 and 50 AU (astronomical units, an AU is approximately equal to the mean distance between Earth and Sun). This is thought to be the origin for short-period comets. Pluto is sometimes classified as a Kuiper belt object in addition to being a planet, and the Kuiper belt objects with Pluto-like orbits are called Plutinos. The remaining Kuiper belt objects are classified as Cubewanos in the main belt and scattered disk objects in the outer fringes.
- Oort cloud objects, currently hypothetical, have orbits lying between 50,000 and 100,000 AU. This region is thought to be the origin of long-period comets.
- The newly discovered object 90377 Sedna, with a highly elliptical orbit extending from about 76 to 850 AU, does not obviously fit in either category, although its discoverers argue that it should be considered a part of the Oort cloud.
- Small quantities of dust are present throughout the solar system and are responsible for the phenomenon of zodiacal light. Some of the dust is likely interstellar dust from outside the solar system.
Jupiter constitutes most of the mass of the solar system outside the Sun: 0.1% of the mass of the solar system. In turn, Saturn constitutes most of the remaining mass, then Uranus and Neptune, then Earth and Venus (see also below).
Origin and evolution of the Solar System
The Solar System is believed to have formed from the Solar Nebula, the collapsing cloud of gas and dust which gave birth to the Sun. As it underwent gravitational collapse, the Solar Nebula would have collapsed into a disk, with the protosun accreting at the centre. As the protosun heated up, volatile substances were driven away from the central regions of the nebula - hence the formation of rocky planets closer to the sun and gas giants further out.
For many years, our own system was the only planetary system known, and so theories only had to explain one system to be plausible. The discovery in recent years of many external systems (see Exoplanet) has uncovered systems very different to our own, and theories of planetary system formation have had to be revised accordingly. In particular, many external systems contain a hot Jupiter - a planet comparable to or larger than Jupiter orbiting very close to the parent star, perhaps orbiting it in a matter of days. It has been hypothesised that while the giant planets in these systems formed in the same place as the gas giants in our system did, some sort of migration took place which resulted in the giant planet spiralling in towards the parent star. Any terrestrial planets which had previously existed would presumably either be destroyed or ejected from the system.
Galactic orbit of the solar system
Estimates place the solar system at between 25,000 and 28,000 light years from the galactic center. Its speed is about 220 kilometres per second, and it completes one revolution every 226 million years. At the galactic location of the solar system, the escape velocity with regard to the gravity of the Milky Way is about 1000 km/s.
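As a rough consistency check on these figures, here is a sketch assuming a circular orbit (the unit conversions are standard approximate values):

```python
import math

KM_PER_LIGHT_YEAR = 9.4607e12
SECONDS_PER_YEAR = 3.156e7

radius_ly = 26_500        # middle of the 25,000-28,000 light-year range
speed_km_s = 220

circumference_km = 2 * math.pi * radius_ly * KM_PER_LIGHT_YEAR
period_years = circumference_km / speed_km_s / SECONDS_PER_YEAR
print(f"one revolution takes about {period_years / 1e6:.0f} million years")
# -> about 227 million years, close to the 226 million quoted above
```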
The solar system appears to have a very unusual orbit. It is both extremely close to being circular, and at nearly the exact distance at which the orbital speed matches the speed of the compression waves that form the spiral arms. The solar system appears to have remained between spiral arms for most of the existence of life on Earth. The radiation from supernovae in spiral arms could theoretically sterilize planetary surfaces, preventing the formation of large animal life on land. By remaining out of the spiral arms, Earth may be unusually free to form large animal life on its surface.
Discovery and exploration of the solar system
Because of the geocentric perspective from which humans viewed the solar system, its nature and structure were long misperceived. The apparent motions of solar system objects as viewed from a moving Earth were believed to be their actual motions about a stationary Earth. In addition, many solar system objects and phenomena are not directly sensible by humans without technical aids. Thus both conceptual and technological advances were required in order for the solar system to be correctly understood.
The first and most fundamental of these advances was the Copernican Revolution, which adopted a heliocentric model for the motions of the planets. Indeed, the term "solar system" itself derives from this perspective. But the most important consequences of this new perception came not from the central position of the Sun, but from the orbital position of the Earth, which suggested that the Earth was itself a planet. This was the first indication of the true nature of the planets. Also, the lack of perceptible stellar parallax despite the Earth's orbital motion indicated the extreme remoteness of the fixed stars, which prompted the speculation that they could be objects similar to the Sun, perhaps with planets of their own.
Since the start of the space age, a great deal of exploration has been performed by unmanned space missions that have been organized and executed by various space agencies. The first probe to reach another world was the Soviet Union's Luna 2, which impacted on the Moon in 1959. Since then, increasingly distant bodies have been reached, with probes reaching Venus in 1966, landing on Mars in 1976, the asteroid 433 Eros in 2001, and Saturn's moon Titan in 2005. Spacecraft have also made close approaches to other planets: Mariner 10 passed Mercury in 1974, while the Voyager probes performed a grand tour of the solar system following their launch in 1977, with both probes passing Jupiter in 1979 and Saturn in 1980-1981. Voyager 2 then went on to make close approaches to Uranus in 1986 and Neptune in 1989. The Voyager probes are now far beyond Pluto's orbit, and astronomers anticipate that they will encounter the heliopause which defines the outer edge of the solar system in the next few years.
Through these unmanned missions, we have been able to get close-up photographs of most of the planets and, in the case of landers, perform tests of their soil and atmosphere. Manned exploration, meanwhile, has only taken human beings as far as the Moon, in the Apollo program. The last manned landing on the moon took place in 1972, but the recent discovery of ice in deep craters in the polar regions of the moon has prompted speculation that mankind may return to the moon in the next decade or so. The long-mooted manned mission to Mars does not currently look like coming to fruition in the near future.
The solar system and other planetary systems
Until recently, the solar system was the only known example of a planetary system, although it was widely believed that other comparable systems did exist. A number of such systems have now been detected, although the information available about them is very limited. See extrasolar planet for more information.
Attributes of major planets
<timeline> ImageSize = width:200 height:640 PlotArea = width:140 height:600 left:50 bottom:20 AlignBars = justify
Period = from:0 till:7500 TimeAxis = orientation:vertical ScaleMajor = unit:year increment:1000 start:0 ScaleMinor = unit:year increment:500 start:0
width:15 color:blue align:left shift:(15,-5) from:46 till:70 text:"Mercury" from:107 till:109 align:right shift:(-15,-5) text:"Venus" from:147 till:152 shift:(15,0) text:"Earth" from:207 till:249 align:right shift:(-15,0) text:"Mars" from:314 till:494 shift:(15,5) color:yellow text:"Asteroid~ belt" from:741 till:816 text:"Jupiter" from:1347 till:1507 text:"Saturn" from:2735 till:3004 text:"Uranus" from:4425 till:7375 color:brightgreen text:"Pluto" from:4456 till:4537 text:"Neptune"
All attributes below are measured relative to the Earth:
[Table of planetary attributes, including orbital period, not preserved in this copy.]
Of the other objects, Ganymede has the largest mass (0.02).
See the main article for a more comprehensive table.
Attributes of selected minor planets
Some objects are intermediate in size between planets and the lumps of rock called asteroids. These mid-sized objects are now often called 'planetoids' or minor planets: most scientists consider them too small to be "true" planets, while a few scientists point out that these minor planets exhibit the same gravitational forces which affect major planets.
Just one planetoid, Ceres, lies in the inner reaches of the Solar System. All other planetoids occur at the fringe of our planetary system.
All attributes below are measured relative to the Earth (so the orbital radius is in AU, the orbital period in Earth years, and the rotation period in Earth days):

|Name||Equatorial diameter||Mass||Orbital radius||Orbital period||Rotation period|
|1 Ceres||0.075||0.000 158||2.767||4.603||0.3781|
|90482 Orcus||0.066 - 0.148||0.000 10 - 0.001 17||39.47||248||?|
|28978 Ixion||~0.083||0.000 10 - 0.000 21||39.49||248||?|
|(55636) 2002 TX300||0.0745||?||43.102||283||?|
|20000 Varuna||0.066 - 0.097||0.000 05 - 0.000 33||43.129||283||0.132 or 0.264|
|50000 Quaoar||0.078 - 0.106||0.000 17 - 0.000 44||43.376||285||?|
|90377 Sedna||0.093 - 0.141||0.000 14 - 0.001 02||76-990||11500||20|
It has been suggested that the Sun may be part of a binary star system, with a distant companion named Nemesis, which was proposed to explain some apparent regularities in the great extinctions of life on Earth. According to the theory, Nemesis periodically perturbs the asteroids and comets of the solar system, causing a shower of large bodies, some of which hit Earth and destroy life. This theory is no longer taken seriously by most scientists.
The solar system in small scales
Scaling down the size of the Solar System makes it easier for students to grasp the relative distances. The enormous ratio of interplanetary distances to planetary diameters makes constructing a scale model of the solar system a challenging task. (For example, the distance between the Earth and the Sun is almost 12,000 times the diameter of the Earth.) Several places have built such models. See main article: Solar system model.
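As a sketch of the arithmetic involved, consider a hypothetical classroom model in which the Sun is shrunk to a ball one metre across (the real dimensions used below are standard approximate values in kilometres):

```python
SUN_DIAMETER_KM = 1_392_000
EARTH_DIAMETER_KM = 12_742
EARTH_SUN_DISTANCE_KM = 149_600_000

# Scale: model metres per real kilometre, fixed by the 1 m model Sun.
scale = 1.0 / SUN_DIAMETER_KM

print(f"model Earth: {EARTH_DIAMETER_KM * scale * 1000:.1f} mm across")
print(f"model Earth-Sun distance: {EARTH_SUN_DISTANCE_KM * scale:.0f} m")
print(f"distance / Earth diameter: "
      f"{EARTH_SUN_DISTANCE_KM / EARTH_DIAMETER_KM:,.0f}")
# The last ratio is the "almost 12,000 times" quoted above.
```

Even with the Sun reduced to one metre, the model Earth is under a centimetre across and sits more than a hundred metres away, which is why most scale models span an entire campus or town.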
- Astrological age
- Astronomical symbols
- Geological features of the Solar System
- Laws of Kepler
- Category:Lists of solar system objects
- Numerical model of solar system
- Origin of life
- Planetary system
- Planetary pairs
- Planetary nomenclature
- Solar system by size
- Table of planetary attributes
- Timeline of solar system astronomy
- Titius-Bode law
- Zodiacal light
- NASA's Solar System Simulator
- NASA/JPL Solar System main page
- Celestia Free 3D realtime space-simulation (OpenGL)
- The Nine Planets Comprehensive Solar system site by Bill Arnett
- Planetary data
- Stars and Habitable Planets
- Solar System An interactive planets animation (145 zoom steps and time effects)
- Timeline of solar system exploration
- An Atlas of the Universe |
Our development of larger brains had much to do with our evolution as a species. But the question is, what fueled the development of those larger brains?
The most popular theory among anthropologists is that we switched to an omnivorous diet. Our ancestors are believed to have evolved in a less forested, more open grassland environment than our cousins, the chimpanzees. This would have given them a clear view of large herbivores. Cut marks on animal bones suggest that around 2.5 million years ago our ancestors did indeed begin hunting. Because meat is rich in nutrients, including essential amino acids, it may have spurred the increase in brain size.
The trouble is that the big increase in brain size occurred with Homo erectus, which appeared just 1.8 million years ago. If meat was largely responsible for the change, then why are there cut marks on bones nearly a million years earlier?
Other anthropologists think it’s because meat-eating isn’t the key at all, that the real key is cooked tubers, like yams.
They say an increase in calories is what's important when it comes to the development of larger brains, and that cooked tubers would have provided just such a boost. Root vegetables are also believed to have been plentiful in the environment where humans evolved.
Cooking would have made tubers easier to digest, and therefore richer in usable calories. The cooking hypothesis is the target of major criticism, though: most archaeologists believe humans didn't begin cooking until about 250,000 years ago.
Growth of the Empire
The Early Years
The beginnings of the British Empire took place in the 16th century when efforts to discover new routes to the Far East resulted in many voyages for discovery and trade. The next two centuries saw the establishment of colonies in North America and trade established with India and the Far East. Such men as Cabot, Frobisher, Drake, Hudson, and Cook carried the English flag to far corners of the earth.
Drake’s victory over the Spanish Armada, 1588, established Great Britain’s sea power, without a serious challenger, and opened the door to world trade and empire.—Courtesy Culver Service.
First permanent British colony in North America was established at Jamestown, Virginia, in 1607. In the terrible winter of starvation of 1609–10 the settlers were reduced from 500 to 60.
Before British control was established in India, trade was carried on. This scene of English traders in India dates from about 1612.
By capturing Gibraltar from Spain in 1704, the English gained a key to control of the Mediterranean.
After the capture of Quebec in 1759, shown here, and Montreal a year later, France ceased to threaten British dominance in North America.
After the American Revolution the Canadian colonies were extended westward, settlements were made in Australia, New Zealand, and Africa; India came under British rule; and other new British colonies were established. London became the center of world commerce.
Early view of Sydney, 1809, 21 years after the founding of the first Australian colony
Cape Town shortly before 1820 when Britain strengthened the colony by assisting 4,000 emigrants to unoccupied land in South Africa
British troops put down a series of Indian mutinies in the 1850’s. Here they march before Government House, Calcutta.
Hong Kong, 1845, a few years after the Chinese war which resulted in China’s ceding Hong Kong and opening five treaty ports to the British
Wellington, New Zealand, shortly after it was settled by British colonists in 1840
West Indian slaves hear the news of their emancipation in 1833, when slavery was abolished in the Empire.
The Empire Comes of Age
Progress in trade and communication after the 1850’s caused many British emigrants to seek new homes in the colonies where discoveries of valuable minerals and rich land brought great expansion. Development of the steamship and locomotive, opening the Suez Canal, new colonial policies giving dominion status and self-government to many colonies, all contributed to strengthening the Empire.
London in 1851 had become the center of world trade. Ships from all over the globe brought raw materials there and took out finished goods.
Adventurers flocked to Australia after gold was discovered in 1851. The population of Victoria rose from 77,000 in 1851 to 333,000 in 1855.—Courtesy Museum of Modern Art.
The landing of the first permanently successful transatlantic cable on the shores of Newfoundland in 1866 was achieved after many disheartening attempts. A telegraphic link was now established with Britain.—Courtesy Western Union Telegraph Co.
The Suez Canal, completed 1869, became a vital link to the East.—Courtesy Culver Service.
The British defeat the allied Boer republics of South Africa in the Boer War, 1899–1902. The Union of South Africa was formed and granted self-government, 1909.
The completion of the Canadian Pacific Railway, 1885, opened a route across Canada and resulted in settling the prairie provinces.—Courtesy Canadian Pacific Railway. |
Early Mayan women were a powerful force
Women may have played a more important part in Mayan culture and much earlier than archaeologists once thought, a new find suggests.
Researchers working in Guatemala have unearthed a monument with the earliest-known depiction of a woman of authority in ancient Mayan culture, says the Canadian leader of the international research team.
The 2 metre high limestone monument, called a stela, has a portrait of a female who could be either a ruler or a mythical goddess, says Associate Professor Kathryn Reese-Taylor, a University of Calgary archaeologist.
The stela may date from the late 4th century AD, making it as much as 200 years older than previously discovered monuments depicting powerful Mayan women, says Reese-Taylor, whose team includes Professor Peter Mathews, from La Trobe University in Australia.
"We have images of queens, who ruled singly and with their husbands and sons, depicted on stelae later in Maya history beginning in the early 6th century AD. But this stela is completely unique in style and likely dates to the 4th century AD," Reese-Taylor says.
"It's unique in that it shows a woman in a really early period in Maya history, a period when the city states were being founded and dynasties were being instituted."
Close to Tikal
Archaeologists found the stela, which normally describe events in the lives of kings, at the site of Naachtun, a city 90 kilometres north of the more famous Mayan city of Tikal.
It was buried inside an ancient building, and some of the inscriptions had been hacked off, suggesting it had been a casualty in an invasion of the city, possibly by forces from Tikal at the end of the 5th century, she says.
"This was not unusual ... that they hack off or break stela. But one thing that was left on this stela was the name of the individual, and that is the name of a woman," Reese-Taylor says.
The name translates into Lady Partition Lord, she says.
An infant was buried with the stela, the researchers say.
Researchers do not suspect Mayan culture was matriarchal, but the newly unearthed stela shows that women played important roles in establishing the society, she says.
Next, the team will return to the site to make moulds of the monument and begin studying the imagery that accompanies the portrait, which includes a bird deity with serpentine wings.
"There's a lot of rich iconography that we need to interpret and that will give us clues of the position that she held, probably the political position of a founder of a dynasty. That would be my best guess right now," Reese-Taylor says. |
The Supreme Court
Supreme Court justices are appointed by the President of the United States and must be confirmed by a majority vote of the Senate.
Qualifications of Supreme Court Justices
The Constitution establishes no qualifications for Supreme Court justices. Instead, nomination is typically based on the nominee's legal experience and competence, ethics, and position in the political spectrum. In general, nominees share the political ideology of the presidents who appoint them.
Term of Office
Justices serve for life, barring retirement, resignation or impeachment.
Number of Justices
Since 1869, the Supreme Court has been made up of 9 justices, including the Chief Justice of the United States. When established in 1789, the Supreme Court had only 6 justices. During periods of the Civil War, 10 justices served on the Supreme Court. For more history of the Supreme Court, see: A Brief History of the Supreme Court.
Chief Justice of the United States
Often incorrectly referred to as the "Chief Justice of the Supreme Court," the Chief Justice of the United States presides over the Supreme Court and serves as the head of the judicial branch of the federal government. The other 8 justices are officially referred to as "Associate Justices of the Supreme Court." Other duties of the Chief Justice include assigning the writing of the court's opinions to the associate justices and serving as the presiding judge in impeachment trials held by the Senate.
Jurisdiction of the Supreme Court
The Supreme Court exercises jurisdiction over cases involving:
- The U.S. Constitution, federal laws, treaties and maritime affairs
- Matters concerning U.S. ambassadors, ministers or consuls
- Cases in which the U.S. government or a state government is a party
- Disputes between states and cases otherwise involving interstate relations
- Federal cases and some state cases in which the lower court's decision is appealed
The Lower Federal Courts
The very first bill considered by the U.S. Senate, the Judiciary Act of 1789, created the federal court system beneath the Supreme Court. Today the country is divided into 12 regional judicial "circuits," each served by a single court of appeals, and further into 94 geographic "districts," each with its own district court and bankruptcy court.
The lower federal courts include courts of appeals, district courts and bankruptcy courts. For more information on the lower federal courts, see: U.S. Federal Court System.
Judges of all federal courts are appointed for life by the president of the United States, with the approval of the Senate. Federal judges can be removed from office only through impeachment and conviction by Congress.
Other Quick Study Guides:
The Legislative Branch
The Legislative Process
The Executive Branch
Expanded coverage of these topics and more is available, including the concept and practice of federalism, the federal regulatory process, and our nation's historic documents. |
This Day In Founders History – 25 September
On this day in 1789, twelve amendments to the Constitution were proposed by the First Federal Congress of the United States and sent to the state legislatures for ratification. The first two amendments were not ratified, but numbers three through twelve became the first ten amendments to the Constitution, what we know as the Bill of Rights.
This day also marks one notable birthday: Nicholas Van Dyke was born on 25 September 1738. Van Dyke was a lawyer and politician from Delaware. He served as a delegate from Delaware to the Continental Congress and was a signer of the Articles of Confederation. Van Dyke also served as a state representative in Delaware, holding the Speaker position for one year, and he was the seventh President of Delaware. During his tenure as President of Delaware, he successfully carried out a plan to pay Delaware’s portion of the war debt. |
Use Fourth Grade Math Made Easy worksheets to give students practice with money and arithmetic. To start, pick a printable worksheet from below.
These workbooks have been compiled and tested by a team of math experts to increase your child's confidence, enjoyment, and success at school. Fourth Grade Math Made Easy provides practice at all the major topics for Grade 4 with emphasis on multiplication and division of larger numbers. It includes a review of Grade 3 topics and a preview of topics in Grade 5. It also includes Times Tables practice. Learn how the workbook correlates to the Common Core State Standards for mathematics. |
1. How can teachers promote positive cross-cultural attitudes and behaviors among students?
Teachers can promote positive cross-cultural attitudes by making sure that students have positive experiences with one another and interact successfully, especially in the partner language. When teachers explore student understandings of events and experiences, they ensure that learners are accurately interpreting what is going on around them. Over time, they help deepen students’ appreciation of the other culture and its speakers, and expand their understanding. Teachers should concentrate as much, if not more, on values, norms, and perspectives of the partner language culture (as well as those of other cultures, particularly if they are represented in the classroom) as they do on visible cultural practices, such as holidays, foods, music, and dance.
Teachers can also inform second language learners of the expected behaviors and norms to follow in given environments so that they behave in culturally expected ways and receive positive feedback during those experiences. Becoming bicultural is as important as becoming bilingual, and it has to be actively fostered; it doesn’t happen on its own. By having cross-cultural objectives in each lesson and unit, teachers ensure that they are paying adequate attention to this important goal of the program.
Children’s literature is another avenue for exploring cultural meanings and perspectives. Teachers help students understand each other’s lives when they choose materials that represent diverse perspectives and experiences and encourage students to discuss differences, looking not just at the story’s surface features—its events, setting, and characters—but also at its deeper values. For example, a teacher in School District 54 in Schaumburg, IL, recounted an episode that occurred while her class was reading a short story. One student questioned why the father in the story needed his son to make calls for him; she couldn’t understand why an adult would ask a child to do this. The teacher then asked other students in the class to raise their hands if they had ever made calls for their parents and to explain why. Many students shared accounts of translating calls for their family members. Using this kind of literature validates the experiences of some students while it opens the eyes of others to the lives of their classmates.
Other suggestions for promoting positive cross-cultural attitudes and behaviors follow:
- Be a good role model. Show appreciation and respect for people of differing cultural backgrounds.
- Celebrate linguistic diversity. Celebrate it within as well as across languages: Point out regional variations in vocabulary and other language features such as pronunciation.
- Invite cultural informants to come to the classroom so that students can see firsthand how members of a cultural group view certain events and experiences.
- Promote cross-cultural understanding among school staff by having open discussions about cultural issues at faculty meetings.
- Collaborate with the PTA in planning a multicultural evening where parents can exchange ideas, opinions, and food. |
About 50,000 years ago, a pile of volcanic rubble buried a conifer forest in the southern Lake District of Chile. Only an earthquake in 1960 brought the almost fossilized trees back into the light. Now, some 40 years later, researchers have studied the rings of the ancient trunks and have read from them details about Earth's climate during the Late Pleistocene when the trees were alive.
In fact, the trees belong to the species Fitzroya cupressoides, which are good climate indicators: their annual rings respond to variations in summer temperature. The scientists from Chile and other countries measured the rings of 47 cross sections from 28 trees and constructed a timescale spanning 1,229 years, the oldest tree-ring chronology to date. Their analysis of ring width, published in today's Nature, revealed a number of long- and short-term climate cycles with different periods. Some of the longer cycles are probably a result of varying solar activity. It remains unclear, however, whether the shorter ones, on a timescale of two to seven years, are a result of the El Niño Southern Oscillation, which largely determines short-term climate variability today.
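For readers curious how cycles of different periods are identified in a ring-width record, the sketch below shows the general idea with a simple periodogram. It is not the authors' method: the series is synthetic, and the 208-year period is a hypothetical stand-in for a solar-type cycle.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1229)  # matches the 1,229-year chronology length
# Synthetic ring widths: a slow cycle, a 2-7 year band component, and noise.
series = (np.sin(2 * np.pi * years / 208)        # hypothetical long cycle
          + 0.5 * np.sin(2 * np.pi * years / 5)  # short, ENSO-band cycle
          + rng.normal(0, 0.5, years.size))

# Periodogram: power at each frequency of the de-meaned series.
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)  # in cycles per year

peak = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin
print(f"Dominant period: {1 / peak:.0f} years")  # ~205, near the 208-yr input
```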
When the researchers compared the data from the ancient trees with measurements from modern, 1,000-year-old ones, they found very similar growth cycles. Thus, factors shaping the climate during the relatively warm period of the Late Pleistocene are probably doing much the same today. |
Teaching With Documents:
Letters, Telegrams, and Photographs Illustrating
Factors that Affected the Civil War
Prior to and during the Civil War, the North and South differed greatly in the resources that they could use. Documents held by the National Archives can aid in the understanding of the factors that influenced the eventual outcome of the War Between the States.
After the election of Abraham Lincoln to the presidency in 1860, the states of the southern United States broke away from the federal union that had existed since the ratification of the Constitution. Believing that Lincoln would restrict their rights to own slaves, Southerners decided that secession was a better choice than giving up their economic system and their way of life. President Lincoln and the North opposed the South's withdrawal; the president steadfastly maintained throughout the war that the secession was illegal and that the newly formed Confederate States of America was not a legitimate nation. Despite Lincoln's hopes that the secession would end without conflict, the two regions fought a war in which each side sought to exploit its advantages over the other before their differences could be resolved.
The North held many advantages over the South during the Civil War. Its population was several times that of the South, a potential source for military enlistees and civilian manpower. The South lacked the substantial number of factories and industries of the North that produced needed war materials. The North had a better transportation network, mainly highways, canals, and railroads, which could be easily used to resupply military forces in the field. At sea, the Union navy was more capable and dominant, while the army was better trained and better supplied. The rest of the world also recognized the United States as a legitimate government, allowing U.S. diplomats to obtain loans and other trade concessions.
The South had fewer advantages, but it held several that would pose great threats to attempts by their Northern neighbors to end the rebellion. The South was able to fight on its home terrain, and it could win the war simply by continuing to exist until the North gave up the fight. The South also had a military tradition that encouraged young men to serve in the armed forces or attend a military school; many had served in the U.S. military prior to the Civil War, only to resign and fight for their states and families. In addition, the South had the leadership of great commanders, including Robert E. Lee, Joseph Johnston, and "Stonewall" Jackson.
Among its disadvantages, the South had to worry about its slave population, which posed the threat of rebellion and of assistance to the Northern cause. The North stoked this fear with the Emancipation Proclamation, which declared free the slaves in the rebellious states but not those in the loyal, slave-owning border states. Had the North tried to free slaves in those areas, more aid would have been generated for the South, and the secession of slave-owning Maryland would have left the U.S. capital surrounded by Confederate territory. The North, for its part, suffered because a series of senior generals failed to exploit the weaknesses of the South or to act upon the suggestions of their commander-in-chief. President Lincoln finally got his desired general in Ulysses S. Grant, who had solidified the Union's control of the West along the Mississippi River. Grant directed the defeat of Southern forces and strongholds and held off determined Confederate advances northward on several occasions before Lee's surrender to Grant in 1865.
To defeat the South, the North had to achieve several goals. First, control of the Mississippi River had to be secured to allow unimpeded movement of needed Western goods. Second, the South had to be cut off from the international traders and smugglers that could aid its war effort. Third, the Confederate army had to be incapacitated to prevent further northward attacks such as that at Gettysburg, Pennsylvania, and to ease the battle losses of the North. Fourth, the South's ability to produce needed goods and war materials had to be curtailed. The South, in turn, had to counter these measures with its own plans: to capitalize on early victories that weakened the Northern resolve to fight, to attain international recognition as a sovereign state, and to keep Union forces from seizing Confederate territory.
The South ultimately did not achieve its goals, and after four years of fighting the North won the war. The divisive, destructive conflict cast a shadow on the successes of the United States during the 19th century, however. The country had to find ways to heal the wounds of war during Reconstruction.
- Letter from Robert E. Lee to Simon Cameron, Secretary of War, in which Lee resigned from the U.S. Army. ARC Identifier: 300383
- Message of President Abraham Lincoln nominating Ulysses S. Grant to be Lieutenant General of the Army. ARC Identifier: 306310
- Telegram from General William T. Sherman to President Abraham Lincoln announcing the surrender of Savannah, Georgia, as a Christmas present to the President. ARC Identifier: 301637
- Telegram from Abraham Lincoln to Lieutenant General Ulysses Grant at City Point, Virginia. ARC Identifier: 301640
- Emancipation Proclamation. ARC Identifier: 299998
- Photograph of the first ironclad gunboat built in America, the Saint Louis, ca. 1862. ARC Identifier: 533123
- Sound recording of an interview with John Salling, one of the last surviving Confederate veterans
Audio: On meeting famous generals. Explains he was a saltpeter digger. (159K, 0:20)
Audio: Discusses war career. Includes a long pause while he tries to remember the name of a commanding officer. (413K, 0:53)
Audio: On singing. Relates how he sang for General Bush. (336K, 0:43)
Audio: Sings the song that he sang for General Bush. (304K, 0:39)
Audio: Sings "Hang Jeff Davis from the Sour Apple Tree." (239K, 0:31)
Audio: Explains he was drafted, not enlisted. (214K, 0:27)
Audio: Sings a verse of "Yellow Rose of Texas." Interviewer recites another version. (438K, 0:56)
Audio: On seeing Teddy Roosevelt speak at Gettysburg. (398K, 0:51)
Audio: Describes meeting soldiers at Gettysburg reunion. (Part 1). (399K, 0:51)
Audio: Describes meeting soldiers at Gettysburg reunion. (Part 2). Recalls how Union soldier talked with Confederate soldier. (444K, 0:57)
Audio: Describes meeting soldiers at Gettysburg reunion. (Part 3). Recalls how Union soldier talked with Confederate soldier. (610K, 1:18)
National Archives and Records Administration
Record Group 94 - Records of the Office of the Adjutant General
Record Group 46 - Records of the U.S. Senate
Record Group 107 - Records of the Office of the Military Telegraph
Record Group 11 - General Records of the U.S. Government
Record Group 165 - Records of the War Department Library
Record Group 200 - National Archives Gift Collection
This article was written by David Traill, a teacher at South Fork High School, in Stuart, FL. |
Property taxes are the primary source of revenue for local governments in the United States. The money collected from property taxes provides funds for fire protection, law enforcement, public education, road construction, and other public services.
History shows that taxes were first levied upon real estate in ancient civilizations, including Egypt, Persia, and China. In medieval times, property taxes were assessed based on the size of the real estate parcel owned. This was later modified so that taxes were assessed based on the income-producing capacity of the property (including structures, agricultural equipment, and livestock).
Today, property tax assessment procedures vary by state, county, and city, as well as by zoning within localities. While each local government has its own procedure for assessing and taxing real estate, the general formula for the property tax levy is: Annual Budget − (Sales Tax Revenue + State Aid) = Property Tax Levy. In other words, whatever portion of the budget is not covered by other revenue must be raised through property taxes.
In the past, property tax rates have been relatively stable, with only mild fluctuations. As property appreciates in value, local taxing authorities (using the same tax rate) are able to collect more revenue based on higher assessed values. However, with the marked decline in property values and the need for local revenue (to maintain adequate public services), property tax rates are on the rise. It is also important to remember that the assessed value and the appraisal value of your property are determined by different entities and rarely coincide.
In assessing the value of real estate for property tax purposes, three standard approaches are employed:
- Cost Approach ― The estimated value of the land, without improvements, is determined. The replacement or reproduction value of improvements to the property (e.g., home, pool, or patio), less the accrued depreciation of said improvements, is then added to determine an assessed value.
- Sales Comparison Approach ― The sales prices of comparable homes in the area are averaged to determine the assessed value of the property.
- Income Approach ― For income-producing properties (such as a lease or rental property), a mathematical process called “capitalization” is used. The estimated income of the particular property is a variable in the formula, enabling the calculation of the assessed value for property tax purposes.
The simplest approach tends to be the sales comparison approach; however, it is not necessarily the most accurate. The cost approach is much more involved and takes into account a myriad of factors, because specific improvements are identified in enough detail to determine reproduction costs. These can include the materials used to construct the foundation, the structural and finish walls, the roofing, heating and/or air conditioning, the types of appliances, the septic system, whether water is supplied by a well or public utilities, the condition and age of the improvements, and more. How often property is reassessed varies from state to state. Remember, property taxes are not based on the purchase price of a home, but on the assessed value of the property.
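To make the budget formula and the valuation approaches concrete, here is a minimal Python sketch. The figures and function names are hypothetical illustrations for this article, not any jurisdiction's actual assessment method.

```python
from statistics import mean

def property_tax_levy(annual_budget: float, sales_tax_revenue: float,
                      state_aid: float) -> float:
    """Budget formula: the amount that must be raised from property taxes."""
    return annual_budget - (sales_tax_revenue + state_aid)

def value_by_sales_comparison(comparable_sale_prices: list) -> float:
    """Sales comparison approach: average the prices of comparable homes."""
    return mean(comparable_sale_prices)

def value_by_income_approach(annual_income: float,
                             capitalization_rate: float) -> float:
    """Income approach: capitalize the property's estimated annual income."""
    return annual_income / capitalization_rate

# A hypothetical town: $10M budget, $2.5M in sales tax, $1.5M in state aid.
levy = property_tax_levy(10_000_000, 2_500_000, 1_500_000)      # $6,000,000
comps = value_by_sales_comparison([290_000, 305_000, 310_000])  # ~$301,667
income = value_by_income_approach(24_000, 0.08)                 # $300,000
print(f"Levy: ${levy:,.0f}; comps: ${comps:,.0f}; income: ${income:,.0f}")
```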
Engineering a Heart Valve
The scientists at the Wake Forest Institute for Regenerative Medicine are investigating the possibility of engineering heart valves in the laboratory that will be perfect matches for patients needing valve replacement surgery.
This process begins with a pig valve, which today is commonly used to replace human heart valves. While these valves function quite well, they are not always long-lasting. Our goal is to remove all cells from the valve and replace them with a patient's own cells. Removal of the original cells leaves the basic structure, or skeleton, of the valve, which consists mainly of collagen and elastin.
We are exploring two options for placing a patient's own cells onto the scaffold. One involves obtaining cells from a blood sample and multiplying them in the lab. Once there are enough cells to coat the scaffold, the coated scaffold would be placed in a heart valve "bioreactor," equipment that pre-conditions or "exercises" it so that it develops the properties it needs to function in the body. The bioreactor has computer-controlled fluid flow, mimicking the natural flow through a human heart valve. A second strategy is to coat the scaffold with an antibody that attracts certain cell types. The scaffold would then be implanted in the body, where it would theoretically "self-seed" with the patient's own cells.
Submitted by: Bob Leslie
(Taken from Write Now! by Anne Wescott Dodd; the book is out of print.)
Unlike expressed comparisons, implied or indirect comparisons are not introduced by the words like or as. Implied comparisons can be made by connecting two unlike objects by their common quality. Such statements are meant comparatively, not literally. For example, “John is a clown” does not mean that John dresses in baggy pants and has his face painted. Rather, John’s actions draw the laughter of other people, so he brings to mind a circus clown. Many implied comparisons are tired and worn-out through overuse. If they are not trite, however, they can be very effective.
Write an implied comparison for each of the following.
Example: Old age is a summer evening.
1. a building
3. an elephant
5. a snowflake
6. a traffic light
7. a tree
8. a graveyard
9. a subway tunnel
10. a bicycle
An implied or indirect comparison is usually called a metaphor. Use metaphors to create poems and prose that are fresh and alive. At this point you may wish to look back at the metaphors you wrote in 'Abstract and Concrete'. Can you improve any of these metaphors now? |
Population density is the measure of population per unit area, according to About.com. An example would be people per square mile, which is calculated by dividing the total number of people by the land area in square miles.
Population density is often used to estimate the number of people in any given area of a country. For example, the population density of France is France's population divided by its area in square kilometers, approximately 109.8 people per square kilometer.
The Earth's population density is equal to its population divided by its total area in square miles. The population is about 7 billion, and the total area, land and water included, is about 197 million square miles. Dividing the two gives a population density of roughly 35 people per square mile.
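As a quick check of that arithmetic, here is a one-function Python sketch using the Earth figures quoted above (the exact quotient is about 35.5, which the text rounds down to 35).

```python
def population_density(population: float, area: float) -> float:
    """People per unit area; the unit follows whatever `area` is given in."""
    return population / area

# Earth: 7 billion people spread over 197 million square miles (land + water).
earth = population_density(7_000_000_000, 197_000_000)
print(f"{earth:.1f} people per square mile")  # -> 35.5
```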
City-states, microstates and dependencies tend to have a much higher population density because the areas are very small but are inhabited by a large number of people. Some of these areas are considered to be overpopulated, though this depends on factors such as housing quality and access to resources. The densest populations are generally found in countries in eastern Asia and northern Africa. |
Science: Researchers Decode the Maize Genome
An example of maize diversity. The ears of maize show natural variation in their levels of carotenoids, an organic pigment.
[Image © Science/AAAS]
Scientists have sequenced the extremely complex genome of the maize plant, one of our oldest and most important crops. This achievement should lead to new insights into plant genetics as well as major progress in breeding crops that are more environmentally sustainable or better-suited for certain climates.
The results appear in a package of articles in the 20 November issue of Science. A set of companion papers also appears in the journal PLoS Genetics this week.
In the primary Science article, a large research team called the Maize Genome Sequencing Consortium describes the genome sequences of the B73 inbred maize line. B73 is used frequently to breed new lines of feed corn, and its genome sequence should provide genetic markers that could help plant breeders or seed companies develop carefully tailored crops. These crops could have higher nutritional content, for example, or might require less fertilizer. Or, they may better withstand the environmental stresses associated with global climate change.
The genome sequence should also help biologists answer a number of long-standing questions in plant genetics, including the impact of mobile DNA sequences called transposable elements and how the modern maize genome evolved after two ancestral genomes fused together.
The B73 sequence “promises to advance basic research and to facilitate efforts to meet the world's growing needs for food, feed, energy and industrial feed stock in an era of global climate change,” the researchers write.
Other findings reported in this Science issue include:
- Popcorn Reveals Clues to Domestication: The domestication of the teosinte plant into maize may have involved genes that help the plants tolerate metal in the soil, researchers concluded after comparing the B73 genome with that of a type of popcorn grown in the Mexican highlands.
- Genetic Diversity Across Maize Lines: By analyzing 27 different maize lines, another team has developed a maize haplotype map, or “HapMap,” showing common patterns of genetic variation. This map should be a particularly useful tool for teasing apart the genetic basis for agronomically important traits, such as yield, quality and stress tolerance.
- The Male Influence: Gene copies inherited from the male parent control the expression of thousands of other maize genes, a phenomenon known as imprinting. Although the precise connection is not yet clear, according to one of the companion studies this paternal dominance may help explain why hybrid maize lines are agriculturally superior to their inbred parents.
19 November 2009 |