The ATSR Project Antarctic: B-15 Iceberg - One of the largest icebergs ever seen. ATSR-2 has been used to track a large iceberg which broke away, or calved, from the Ross Ice Shelf in the Antarctic. The iceberg, at nearly 300 km in length and 40 km in width, was one of the largest ever seen at the time of its formation in March 2000. Since then, it has broken into two main icebergs, and other smaller ones have formed, including B-17, probably caused by B-15 crashing into the ice shelf. This sequence of images shows how the various pieces have developed over the last few months. Calving of icebergs from the Antarctic ice shelf is common. As the snow accumulates it turns into ice, and a shelf of ice is pushed out into the surrounding ocean. Eventually, pieces break off the ice shelf and an iceberg is born. The formation of icebergs in the Antarctic in this manner is completely different from that in the Arctic, which leads to icebergs at either end of the world being different shapes. Antarctic icebergs tend to be relatively low and flat but can be very large, whilst Arctic icebergs are much smaller. Over 90% of the ice locked in icebergs is to be found in Antarctica. It is unlikely that this particular event is connected to global warming, as the advance of the ice sheet is a continual process. Even with the calving of this monster, the edge of the ice shelf has simply returned to where it was about 50 years ago. There is much interest in what will happen to B-15 and associated icebergs. Most scientists expect it to drift north-west then west, staying close to the Antarctic coast. Another scenario is that it will drift north and then east, passing through the Drake Passage between Antarctica and South America and into the South Atlantic.
Timeline/Diary of main events
20-March-2000: first image of the B-15 Iceberg, three days after the initial report of the calving
13-April-2000: new iceberg to the East - B-17
09-May-2000: Press Release from RAL: B-15 Iceberg
23-May-2000: B-15 iceberg broken in two - B-15A and B-15B
17-Sept-2000: Cloud Streets
29-Sept-2000: new iceberg forms - B-20, with B-15A and B-15B continuing to drift apart
24-Oct-2000: B-20 renamed C-16, since it broke away from the Ross Ice Shelf in the 'C' sector of the Antarctic
07-Dec-2000: It appears that perhaps a new iceberg, B-15F, has broken from the B-15B iceberg.
Many thanks to ESA's NRT system for the help that has been provided in tracking the Ross Iceberg.
Brownian motion is the random motion of particles in a liquid or a gas. The motion is caused by fast-moving atoms or molecules that hit the particles. Brownian motion was discovered in 1827 by the botanist Robert Brown. While looking through a microscope at particles trapped in cavities inside pollen grains in water, he noted that the particles moved through the water, but he was not able to find out what was causing this motion. Atoms and molecules had long been theorised as the main parts of matter. Albert Einstein published a paper in 1905 that explained in precise detail how the motion that Brown had observed was a result of the pollen being moved by individual water molecules. This was one of his first big contributions to science, and it convinced many scientists that atoms and molecules exist. It was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. Too many molecular impacts make up the Brownian pattern for any scientific model to account for all of them, which is why only probabilistic models of molecular populations can be used to describe it. Two such models from statistical mechanics, made by Einstein and Smoluchowski, are presented below. Another, purely probabilistic, kind of model is the stochastic process model. There exist both simpler and more complicated stochastic processes which, in the extreme ("taken to the limit"), may describe Brownian motion (see random walk and Donsker's theorem).
History
The Roman Lucretius's scientific poem "On the Nature of Things" (c. 60 BC) has a description of Brownian motion of dust particles in verses 113-140 from Book II. He uses this to argue for the existence of atoms: "Observe what happens when sunlight is let into a building and sheds light on its shadowy places. You will see a multitude of tiny particles moving in a multitude of ways..." While Jan Ingenhousz described the strange motion of coal dust particles on the surface of alcohol in 1785, credit for the discovery is often given to the botanist Robert Brown in 1827. Brown was studying pollen grains of the plant Clarkia pulchella suspended in water under a microscope when he observed minute particles, ejected by the pollen grains, executing a jittery motion. By repeating the experiment with particles of inorganic matter he was able to rule out that the motion was life-related, although its origin was not yet known. The first person to describe the mathematics behind Brownian motion was Thorvald N. Thiele in a paper on the method of least squares published in 1880. This was followed by Louis Bachelier in 1900 in his PhD thesis "The Theory of Speculation", in which he presented an analysis of the stock and option markets. The Brownian motion model of the stock market is often used, but Benoit Mandelbrot disputed its applicability to stock price movements. Albert Einstein (in one of his 1905 papers) and Marian Smoluchowski (1906) brought the solution of the problem to the attention of physicists, and presented it as a way to indirectly confirm the existence of atoms and molecules. Their equations describing Brownian motion were checked by the experimental work of Jean Baptiste Perrin in 1908.
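To illustrate the random-walk limit mentioned above, here is a short Python sketch (an illustration of ours, not part of the article): summing many small, equally likely left/right steps produces a path whose statistics approach those of Brownian motion, which is the content of Donsker's theorem.

    import random

    def brownian_path(n_steps=100_000, dt=1e-5):
        """Approximate one-dimensional Brownian motion by a random walk.

        Each step is +/- sqrt(dt), so after time t = n_steps * dt the mean
        squared displacement is close to t -- the linear-in-time growth
        that Einstein's 1905 analysis predicted.
        """
        x = 0.0
        path = [x]
        step = dt ** 0.5
        for _ in range(n_steps):
            x += step if random.random() < 0.5 else -step
            path.append(x)
        return path

    path = brownian_path()
    # Here n_steps * dt = 1, so the final position is typically of
    # order 1 (sqrt of elapsed time), not of order n_steps * step.
    print("position at t = 1:", path[-1])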
Without the writings of Edward Huggins, we would know much less about life at Fort Nisqually, a Hudson's Bay post in Pierce County in the mid-19th century. He knew this history because he had lived it. Huggins was born to Edward and Ellen Chipp Huggins on June 10, 1832, in London's Southwark borough. He grew up in poverty, attending the Queen Elizabeth Grammar School in London. While still a young teen, Huggins took a job at a ship broker's office. With few prospects, he signed up with the Hudson's Bay Company (HBC), a British fur-trading company that operated in North America. He sailed from Britain in 1849, at the age of 17, never to return home. The young Englishman arrived at Fort Victoria, an HBC post on Vancouver Island. Shortly thereafter, he was transferred to Fort Nisqually on the Puget Sound, a trading post located at what is now DuPont. Huggins worked as a clerk and bookkeeper at the Fort and also managed the trade shop. Eventually he became regarded as post commander Dr. William F. Tolmie's right-hand man. Huggins worked at Fort Nisqually during a time of great change. Under the Treaty of Oregon in 1846, the Pacific Northwest became part of the United States. As part of the settlement, the HBC agreed to sell its property and claims below the Canadian border to the federal government. These negotiations took many years. While the HBC reflected the racism of its time, it was not interested in settling the land, but worked in partnership with Native Americans, needing them as laborers and to collect furs. Many HBC workers married Native Americans. American settlers, seeking land, usually saw things differently. Washington Territorial Governor Isaac Stevens negotiated a series of treaties with Native Americans for the possession of the land. Dissatisfaction with these agreements led to the Treaty Wars of 1855 to 1856. The HBC tried to stay out of the conflict. The war brought an opportunity for Huggins to show his leadership skills. The head of Muck Station fled his post and Huggins volunteered to replace him. Muck Station was part of a series of farming outstations operated by the Puget Sound Agricultural Company, an HBC subsidiary. At Muck, Huggins oversaw other outstations between the Puyallup and Nisqually Rivers, a task that required him to ride alone between the stations each week, which he did faithfully without incident. After the war, Huggins returned to Fort Nisqually. On October 21, 1857, he married Letitia Work, the daughter of a leading HBC official. She had been a frequent visitor to the Fort to see her sister Jane, the wife of Dr. Tolmie. The Hugginses had a happy marriage and many children: William, Thomas, Edward, John, Helen, David, Henry and Joseph. After a friend died, they adopted Letitia Williamson. In 1859, Tolmie was transferred to Fort Victoria and Huggins became the commander of Fort Nisqually. As leader of the post, he faced the difficult task of dealing with increasing American pressure as settlers squatted on company land and stole livestock. In 1869, the HBC finally sold its rights south of the Canadian border. Fort Nisqually closed in 1870. Because of his long service, the HBC offered Huggins a post at Fort Kamloops in the rugged interior of British Columbia. Huggins preferred to stay, retiring after 20 years of service with the Company and becoming an American citizen. He filed a claim for the Fort Nisqually site. He kept his home in the "factor's house," and the Huggins farm eventually grew to 1,000 acres.
Huggins and his family did not live in isolation. He became active in local politics and served on the local school board. In the 1870s and 1880s, he served three terms on the Pierce County Board of Commissioners. From 1886 to 1890, he was Pierce County Auditor. During this time, he lived in Tacoma while his sons operated the farm. In 1897, Edward and Letitia retired back to "the ranch." However, keeping up the farm proved to be too much in their declining health, and they reluctantly decided to sell the land to the DuPont Company to build an explosives factory (the origin of today's city of DuPont) and move back to Tacoma. Edward died on January 24, 1907, before their new house had been completed. Letitia passed away on September 12, 1910. Edward Huggins lived an interesting life and left an inestimable legacy as a historian. He composed essays about Fort Nisqually life for newspapers and wrote many letters. In these writings he tells more of daily life than what appears in official Company records, including that workers at the Fort liked to gather to watch the sunset. Huggins also preserved the factor's house and granary from the Fort. These buildings were moved to Tacoma in the 1930s to form (along with reconstructed structures) the Fort Nisqually Living History Museum. The Museum features living history interpretation of the HBC era. The original 1843 Fort Nisqually site, owned by the Archaeological Conservancy, is located in DuPont and is sometimes open for tours. The DuPont Historical Museum organizes these tours and also has displays on Fort Nisqually and other DuPont history.
DK Science: Road Vehicles
Vehicles use engines of different kinds to move people and cargo from place to place. Most cars and motorbikes have petrol engines, but vans and trucks use larger diesel engines. A diesel engine produces more power than a petrol engine by compressing the air and fuel much more. Petrol and diesel engines produce large amounts of pollution. ELECTRIC CARS are less polluting. Trucks have big diesel engines that produce more power than a car engine, but they also use more fuel and produce more pollution. A truck is heavier and carries more momentum than a car travelling at the same speed. This is why a truck needs much more powerful brakes than a car and takes a longer distance to come to a stop. Electric cars use batteries or fuel cells instead of engines and petrol. Batteries have to be charged up every so often, from the mains or from an engine, and the car then runs until the batteries are flat. Fuel cells work in a different way. Like an engine, a fuel cell takes in a steady supply of fuel, usually hydrogen gas. Like a battery, it produces a constant stream of electricity that powers an electric motor. The world's fastest solar-powered car, Nuna II, has a top speed of 160 kph (100 mph). It is built of plastic and covered in solar panels. These convert the Sun's energy into electricity and store it in batteries, so the car can also drive in the shade. The body, solar panels, and batteries were originally developed for spacecraft. Petrol engines are good for driving at constant, higher speeds on open roads. Electric motors are good for stop-start driving in city centres. They have lower top speeds than petrol engines. Hybrid cars have both a petrol engine and an electric motor. The car automatically switches between the two to suit varying traffic conditions.
The term natural disaster usually refers to a catastrophic event resulting from a natural process, such as a storm or a volcanic eruption. Natural disasters can severely impact human society, causing extensive fatalities and injuries. Destruction of homes and businesses brings both a personal and an economic toll. In a given year there may be several hundred large-scale disasters worldwide, causing thousands of human deaths and affecting millions of people overall. The likelihood of some types of disasters can be forecast using modern technology to monitor weather and related conditions. However, the precise location and onset of most disasters cannot be predicted. Some natural disasters may result from long-term changes in environmental conditions. For example, many scientists associate global warming with extreme weather conditions; they predict an increase in prolonged droughts and severe weather events such as hurricanes and large-scale flooding. In addition to their effects on human life, natural disasters can severely impact ecosystems, causing drastic changes to soil, space, and water, and thus affecting all living things that depend on these resources. Landslides and flooding can drastically change environmental conditions, leading to increased rates of erosion and causing other dramatic changes to land and water. Earthquakes, volcanic eruptions, and wildfires can literally change landscapes, causing long-term changes to habitats with cascading effects on wildlife. Species that cannot adapt to sudden changes may need to migrate to other areas or face extinction. In some cases, natural areas affected by a disaster can rebound in time; the natural process of succession occurs in areas that have been rendered effectively barren by lava flows and similar events. Although natural disasters cannot be eliminated, in many cases there are steps that can be taken to lessen their impact. Some natural hazards are preceded by conditions that can be used to predict an imminent event. For example, improvements in storm detection and tracking allow for prediction of impending storms and hurricanes. With enough advance warning, people can prepare for these types of events by stocking up on supplies, securing windows, taking shelter, or even evacuating the area. Most communities offer guidelines on preparing for disasters that are likely to occur in their local or regional areas. However, some events, such as earthquakes, cannot be predicted reliably, though steps can be taken to minimize their impact should they occur. To learn more about specific types of natural disasters, see avalanche; drought; earthquake; flood; hurricane; landslide; storm; tornado; tropical cyclone; tsunami; typhoon; and volcano. The following articles provide information about several historic natural disasters: Galveston hurricane of 1900; Huang He floods; Hurricane Katrina; Indian Ocean tsunami of 2004; Japan earthquake and tsunami of 2011; Mississippi River flood of 1927; Pakistan floods of 2010; Super Outbreak of 2011; Superstorm Sandy; Super Typhoon Haiyan; and Tri-State Tornado of 1925.
Lesson Plans and Seeds for this unit: Lesson Plan A3: Writing numbers from 0 to 20; Lesson Seed A1: Last man standing; Lesson Seed A1: Nearby teens game; Lesson Seed A2: Pick a number; Lesson Seed A2: Put them in order.
Lesson seeds are ideas that can be used to build a lesson aligned to the CCSS. Lesson seeds are not meant to be all-inclusive, nor are they substitutes for instruction. When developing lessons from these seeds, teachers must consider the needs of all learners. It is also important to build checkpoints into the lessons where appropriate formative assessment will inform a teacher's instructional pacing and delivery.
This unit extends the work that was done in Prekindergarten with numbers up to 10. Students in Kindergarten are expected to rote count to 100 by ones and by tens. Emphasis in this unit is placed on the counting sequence. Students should also demonstrate the ability to count forward beginning from a given number within the known sequence (instead of having to begin at 1), which is a prerequisite for counting on. In addition, students in Kindergarten will be expected to write numbers from 0-20 and to represent a number of objects with a written numeral 0-20, with 0 representing a count of no objects. Students will progress from saying the counting words to counting out objects and comparing numbers. This unit builds the foundation for students' ability to count to find how many, and to model addition and subtraction with small sets of objects. It is the expectation that this unit will precede K.CC.4-7. Students should be provided multiple opportunities to connect number words and numerals to the quantities they represent, using various physical models and representations, games, and hands-on activities. It is important to note that counting should not be taught in isolation and should be reinforced daily throughout the school year.
Key vocabulary:
rote counting: reciting numbers in order from memory without aligning them to objects, pictures, etc.
verbal counting: counting while aligning each number said to an object, picture, etc. in order to solve a problem.
cardinality: the understanding that when counting a set, the last number represents the total number of objects in the set. Example: a set of 3 stars.
subitizing: the ability to recognize the total number of objects or shapes in a set without counting.
Learning statistics can be a daunting experience. There is a plethora of statistical concepts to master and many of them come with a hefty dose of mathematical notation. The goal of the present course is to develop a clear path through the conceptual forest and to explain each concept both in its narrow meaning and as a part of the larger enterprise of statistical reasoning. Mathematical skills are not taken for granted; instead, we shall review the necessary mathematical tools so that you will not get stuck on this aspect. The field of statistics is sometimes divided into descriptive and inferential statistics, with probability theory forming a bridge between the two. In this course, we start out descriptively, by considering different ways in which we can learn from data. We then delve into the subject of probability theory, to end with a discussion of statistical inference. The emphasis in this part is on learning how to draw conclusions about populations with the help of data from a sample. By the end of this course, you should have a good feeling for descriptive statistics, statistical inference, and probability theory. You should also understand the interplay of these elements in the broader enterprise of statistical reasoning. And you should feel more comfortable reading about statistics and using them in your own work.
This is a worksheet to practice story tenses: past simple, past perfect and past continuous. It suits intermediate level and includes two tasks. The first task includes 5 sentences to be completed with the correct form of the verb given in brackets. The second is a brief story with incomplete sentences, which students have to complete with a past simple, past perfect or past continuous verb.
The United States experienced major waves of immigration during the colonial era, the first part of the 19th century, and from the 1880s to 1920. Many immigrants came to America seeking greater economic opportunity, while some, such as the Pilgrims in the early 1600s, arrived in search of religious freedom.
Why did immigrants come to the United States in the late 1800s and early 1900s?
In the late 1800s, people in many parts of the world decided to leave their homes and immigrate to the United States. Fleeing crop failure, land and job shortages, rising taxes, and famine, many came to the U.S. because it was perceived as the land of economic opportunity.
Why did we have so many people coming to America between 1870 and 1920?
Immigrants during this period were motivated to immigrate due to shortages of land, cheaper transportation, and the hope of making money to send home. Between 1870 and 1920 some 11 million immigrants came to the United States. In Akron, the new ethnic groups began to make their presence known.
How did immigration to the United States change between 1865 and 1920?
After the Civil War, foreign immigration into the United States steadily began to climb. Innovations such as steam power, expansive railroads and the possibility of wealth and prosperity due to high demand for laborers caused immigration to skyrocket across the United States.
Where did immigrants come from in the period from 1870 to 1920?
Between 1870 and 1900, the largest number of immigrants continued to come from northern and western Europe, including Great Britain, Ireland, and Scandinavia. But "new" immigrants from southern and eastern Europe were becoming one of the most important forces in American life.
What was immigration like in the 1900s?
Immigration in the early 1900s: after the depression of the 1890s, immigration jumped from a low of 3.5 million in that decade to a high of 9 million in the first decade of the new century. Immigrants from Northern and Western Europe continued coming as they had for three centuries, but in decreasing numbers.
How did immigration affect America in the 20th century?
The researchers believe the late 19th and early 20th century immigrants stimulated growth because they were complementary to the needs of local economies at that time. Low-skilled newcomers supplied labor for industrialization, and higher-skilled arrivals helped spur innovations in agriculture and manufacturing.
Who are old immigrants?
The so-called "old immigration" described the group of European immigrants who "came mainly from Northern and Central Europe (Germany and England) in early 1800 particularly between 1820 and 1890 they were mostly protestant" and they came in groups of families; they were highly skilled, older in age, and had moderate …
Who was the largest immigrant group in 2019?
In fiscal 2019, a total of 30,000 refugees were resettled in the U.S. The largest origin group of refugees was the Democratic Republic of the Congo, followed by Burma (Myanmar), Ukraine, Eritrea and Afghanistan. Among all refugees admitted in fiscal year 2019, 4,900 were Muslims (16%) and 23,800 were Christians (79%).
Where did most Irish immigrants settle between 1820 and 1850?
The correct answer is cities on the East Coast. Most immigrant Irish settled on the East Coast between 1820 and 1850.
How did immigrants decide to settle where they did?
Immigrants choose to live where they do because of the economic, social and cultural factors of their lives.
Other destination countries also witness a similar desire on the part of their immigrants to concentrate.
How did immigration change after 1865?
New groups of immigrants came from southern and eastern Europe, as well as from Mexico, China, and Japan.
What were the working conditions in sweatshops?
Sweatshops were dark and crowded, work was repetitious and hazardous, and the pay was low.
What helped immigrants in the 1800s and early 1900s maintain their cultures?
Living in enclaves helped immigrants of the 1800s maintain their culture. These immigrants of the 1800s and early 1900s moved to the United States, leaving their native places. … The majority of these immigrants were from Northern Europe and Western Europe: Ireland, Scandinavia and Britain.
Where did most immigrants come from in the 1920s?
Between 1880 and 1920, more than 20 million immigrants arrive. The majority are from Southern, Eastern and Central Europe, including 4 million Italians and 2 million Jews. Many of them settle in major U.S. cities and work in factories.
What law requires immigrants to read and write?
The Immigration Act of 1917.
Which region did the fewest number of immigrants come from between 1870 and 1910?
Northern and Western Europe.
These comprehensive literature study units are available in either book or ebook format. Both are reproducible for classroom or family use. Most units cover one novel, although there are a few that address a series by a single author or two or more books on a single theme. Titles all begin with "A Guide for Using" and end with "in the Classroom"; the title of the novel(s) being studied is inserted between the two phrases. Teacher Created Resources offers units at three levels: primary, intermediate, and challenging. Twenty-nine titles are available for the primary level (grades 1-3); examples are Where the Wild Things Are, If You Give a Mouse a Cookie and If You Give a Moose a Muffin, and Strega Nona (an Italian story). Representative of the many titles available for the intermediate grades (grades 3-5) are Charlotte's Web, Island of the Blue Dolphins, The Secret Garden, The Sign of the Beaver, and Charlie and the Chocolate Factory. The challenging level targets grades 5-8 with titles such as Where the Red Fern Grows, Julie of the Wolves, Old Yeller, Tuck Everlasting, Bridge to Terabithia, The Hobbit, Book of Greek Myths (by the D'Aulaires), and Harry Potter and the Sorcerer's Stone and other Harry Potter books. You will probably want to be selective about which novels you want your children to read, so use discrimination in selecting titles from among the unit study books. In the units, novels are broken down into chapter groupings for study. Activities for each grouping range across the curriculum (whole-language approach), including writing, vocabulary, geography, art, music, math, science, and social studies. Quizzes and answer keys are included within each unit. A few of the lesson plans show a multicultural emphasis (Indians, ecology), but some of these activities could easily be skipped. Some activities would work best in a small group, and others are oriented to the traditional classroom; however, at less than $10 per book, you still get plenty of material, even if you don't cover it all. You can download sample pages from most if not all of these books from the website.
Bullying and Cyberbullying: What Can Schools Do?
Bullying and Cyberbullying: What and How?
Cyberbullying means using technology to bully others. Like regular bullying, it may involve denigrating insults, harsh judgments, threats, and lies or misrepresentations meant to embarrass another. It can also involve posing as another person and sending negative information, "outing" others by sharing messages that were intended to be private, or "tricking" people into revealing personal information (Kowalski, Limber, & Agatston, 2008). Cyberbullying may also be communicated in varying technological modes. Using the Internet, one may bully by e-mail, instant messaging, blogs, and social network sites (such as Facebook). One may also use cell phones and iPads. Most schools have policies prohibiting cell phone use in school. However, students have indicated that, despite school policies, they very frequently send text messages while at school (Kowalski et al., 2008). Can schools get involved? Schools can and should have policies regarding Internet use and cell phone use on school grounds. They usually do. However, some people may think that the school should not have any concern about cyberbullying activity that takes place outside of school. Yet what happens on the Internet can easily, and often does, influence the school experience of those who have been targeted online. When there is substantial disruption of the learning environment, school officials do have the right to take action (Willard, 2007). What can schools do? Schools have a very important part to play in preventing bullying, and in dealing with both face-to-face and cyberbullying. They should have a multi-faceted, comprehensive anti-bullying policy in place.
- Develop a whole-school anti-bullying policy. This involves planning and communication with staff, parents and students, so that all have information about what constitutes bullying and cyberbullying, have input into policies before initiation, and understand what should be done when they see an instance of bullying (Farrington & Ttofi, 2009). Communication with the community is particularly important regarding cyberbullying. Students minimize the harm electronic comments can cause, and adults may do the same. Both students and their parents may have a belief in absolute "free speech" and may devalue the seriousness of some cyberbullying behavior. A school-wide policy will provide overall assessment and clear procedures for evaluating material directed at students, staff or the school. All stakeholders should understand procedures for formal disciplinary action. Not every bullying event may have a school connection or rise to the level of "substantial and material disruption" of the learning environment. However, other action options should be available and clear.
- Use the classroom to develop the rules and empathy. Successful programs described in the research literature often had teachers and students develop, at the classroom level, a list of unwanted behaviors that could be classified as bullying or cyberbullying and understood by the children. This procedure enlists the ideas of the students and allows for the explanation of why certain behaviors are wrong. Particularly for cyberbullying, discussion of the harm bullying can cause is vital. Kowalski and her colleagues suggest we should "use students as experts," as the sources of the latest sites and technologies (Kowalski et al., 2008).
Teenagers may serve as mentors, providing information on Internet safety and cyberbullying to younger students.
- Clear and comprehensive behavior codes. Cyberbullying events that occur on school property will probably clearly fall under the anti-bullying policy. However, cyberbullying is often directed at students by other students away from school. Some schools have a student-signed behavior code that prohibits cyberbullying behaviors even if they occur outside of the school building.
- Check for crossover effects. Since electronic bullying is often accompanied by bullying at school, administrators should carefully investigate in-school behaviors that are covered by school policies. If these behaviors correspond to cyberbullying that is taking place outside of school, this helps the school to make the argument that the cyberbullying is affecting the learning environment.
- Effective classroom management and disciplinary methods. Naturally, a well-run classroom provides less opportunity for bullying; there is greater attention to learning tasks and less "open" time. Classroom discipline may play a part in the other direction as well. Overly harsh discipline creates an atmosphere in which "might equals right." The assumption is created that one should use status and power to get what one wants, and children will imitate these negative models. If a child feels powerless and disrespected in the learning situation, bullying among students is also more likely to occur. On the other hand, some children will be so used to being dominated by adults that they will show helplessness among their peers, and become targets of bullying.
- Improved supervision. Bullying decreases with more supervision of children on playgrounds, in hallways, cafeterias and other school settings. This intuitive finding should encourage the use of volunteers and other adults in those places where bullying is likely to happen.
What doesn't work: having no consequences, inappropriate consequences, or zero-tolerance policies. Students will not inform either parents or school officials if they do not believe that the adults can and will do something. Adults must take action when people are not treated respectfully. Appropriate action in dealing with minor infractions can assist in the development of an atmosphere in which bullying of any type is less likely. Firm discipline is crucial, but sanctions need to be proportional to the offense.
Farrington, D. P., & Ttofi, M. M. (2009). School-based programs to reduce bullying and victimization. Campbell Systematic Reviews, 6, 1-148.
Kowalski, R. M., Limber, S. P., & Agatston, P. W. (2008). Cyber bullying: Bullying in the digital age. Malden, MA: Blackwell.
Willard, N. E. (2007). Cyberbullying and cyberthreats. Champaign, IL: Research Press.
Gail Cabral, IHM, Ph.D., is a Professor of Psychology at Marywood University in Scranton. She is a developmental psychologist with an interest in peer relations and friendship, spirituality and aging.
The following article provides information regarding the structure and functions of various cell organelles belonging to the eukaryotic cell. Eukaryotic cells are present in complex living organisms like animals, humans, and plants. They formed as a result of evolutionary changes that took place in prokaryotic cells.
Structure and Functions
If you happen to check the structure of eukaryotic cells under the microscope, you will find that they are made up of a number of cell organelles, which help in the smooth functioning of the overall cell. Essentially a part of all plants, animals, fungi, algae, and protozoans, these cells are 5 micrometers or more in diameter, and characterized by the presence of a nucleus, which is absent in prokaryotic organisms.
Cell wall: A distinguishing part of plant cells, the cell wall is absent in animal cells. It imparts rigidity. Its material differs between plant species, with cell shapes being elongated, oval, round, rectangular, or square.
Cell membrane: The outermost part of the cell is the cell membrane, which encloses all the cell organelles. Protecting the cell, providing rigidity, and controlling the flow of nutrients within the cell are important functions of the cell membrane.
Cytoplasm: This liquid, gel-like substance is called the matrix; the cell organelles float and/or are embedded within it. It provides the right environment for carrying out all the metabolic reactions.
Nucleus: Eukaryotic cells are considered advanced and complex. The nucleus contains the genetic material, i.e., the DNA (deoxyribonucleic acid) and the chromosomes, owing to which it is considered the brain of the cell. It basically controls all the cell functions and guides them properly. The interior of the nucleus has a dark-stained area called the nucleolus, which is responsible for ribosome formation.
Nuclear membrane: Peculiar to eukaryotic cells, the main function of this membrane is to protect the nucleus by forming a protective sheath around it.
Nucleoplasm: The nucleus is filled with this dense fluid, which contains chromatin fibers, chromosomes, and the genes that carry the genetic information.
Mitochondria: These are among the largest cell organelles present in eukaryotic cells. They are characterized by their own mitochondrial DNA, RNA, and ribosomes, and hence can self-replicate. The mitochondrion is the key site for the production of energy in the form of ATP molecules, and thus powers cellular respiration.
Plastids: Other organelles peculiar to eukaryotic plant cells are the plastids. Photosynthesis is the unique process by which plants prepare their own food with the aid of these organelles. Plants generally contain chloroplasts, which are characterized by the presence of a green pigment called chlorophyll.
Ribosomes: They are essential for protein synthesis, which includes transcription and translation. All ribosomes are of the 80S type, except those of the mitochondria and plastids, which are of the 70S type.
Lysosomes: They mainly help to undertake phagocytosis and promote intracellular digestion. They are also responsible for the secretion of enzymes, which are necessary for breaking down cell debris.
Centrioles: Centrioles contained within the centrosomes are important for initiating cell division, the result being either mitosis or meiosis.
Endoplasmic Reticulum (ER): These interconnecting flattened tubular tunnels are of two types: Rough Endoplasmic Reticulum (RER) and Smooth Endoplasmic Reticulum (SER).
In combination with the ribosomes, they help in functions related to protein transport. The ER is regarded as one of the most important cell organelles after the mitochondria. It also takes part in protein processing, so that active protein chains are released whenever required.
Vacuoles: In both plants and animals, vacuoles are water-filled organelles responsible for storage.
Teacher Introduction Activities
Teachers often have to step in to help their students get to know each other. Whether it's on the first day of the school year or on a day when a new student joins your class, you want to make sure classmates feel comfortable with one another. To help create a friendly environment, have the students introduce themselves through several entertaining activities.
1 Things in Common
Begin this activity by having the students all face the front of the classroom. Pick one student out of the group to stand at the front of the room. Tell her to say one thing about herself, such as an experience like visiting the Grand Canyon or an interest like reading. If any of the students share the same experience or interest, they stand up and have to switch chairs with one another. The player at the front of the room tries to find a chair to sit in before all of the chairs are occupied. The person left standing then shares an experience or interest with the class, just as before. Continue until everyone has had a turn.
2 Sign Here Bingo
Prior to this game, create a Bingo grid of five squares by five squares, for a total of 25 squares. Within each square, write an experience or interest the students may have. Examples include "I'm the only child in my family" or "I've lived in another country." Make sure that no two Bingo cards have the same placement of these pieces of information. Give each of the students a Bingo card and a writing utensil. Tell all of them to start mingling, asking each other whether their experiences or interests match a square on their Bingo cards. When a player finds a match, the matching classmate signs his or her name on the corresponding square. The first player with five squares in a row (horizontally, diagonally or vertically) calls out "Bingo" and wins the game.
3 People through Pictures
Play this introduction activity by first handing each of the players a few pieces of paper and some drawing utensils. Tell them all to draw some of their favorite things and places, one picture per page. Once the players have drawn about three different things, pick one of the students to hold up one of her papers. The other players guess what the picture represents. When they guess correctly, they learn more about the player. After you go through all of the first player's pictures, continue on to other students until everyone has had a turn.
In the future, tiny machines may swim through our bloodstream repairing damage, attacking invaders, or taking real-time readings. We might even model such machines on biology. But biological cells are incredibly complex microscopic machines—and the truth is, we're only just now beginning to understand exactly how they work. One way to better understand how something works is to build a copy. In a recent paper in the journal Science, a team of scientists at the Technische Universität München (TUM) say they have done just that. To better understand how our own biological cells work, the team created artificial protocells in the lab that can not only change their shape but even move on their own. Quoting physicist Richard Feynman, Andreas Bausch, a cellular biophysicist at TUM, says, "What I cannot create, I do not understand." While we might be a long way from reverse engineering a heart or brain cell, the team took a back-to-basics approach. A few billion years ago, primordial cells were far simpler, consisting of little more than a membrane and a handful of molecules. TUM's artificial protocells consist of a membrane shell, or vesicle; two kinds of biomolecules, microtubules and kinesins; and fuel in the form of ATP, the same energy source used in biological cells. The kinesin molecules are like little motors running on ATP—in this case, they line up the microtubules and keep them in place. This microtubule structure, or cytoskeleton, gives the protocell its shape. "One can picture the liquid crystal layer [of microtubules] as tree logs drifting on the surface of a lake," explains Felix Keber, lead author of the study. "When it becomes too congested, they line up in parallel but can still drift alongside each other." Although the log-like microtubules are mostly lined up, there are always a few faults, a few rebels that refuse to lie flat. Because the system is always internally in motion, these faults tend to migrate, and as it turns out, they help the protocell move. The scientists discovered that when they removed water from the protocell, the microtubule faults predictably deformed its shape—the more water they removed, the more the cell deformed, eventually growing spiked extensions like the ones used by biological cells to move around in their environment. From what at first appeared to be a random process, the researchers uncovered basic principles governing their protocell's behavior. They hope these principles will underpin efforts to make predictions about other similar cellular systems. MIT chemical engineer Bradley Olsen told Popular Mechanics that such research may eventually lead to intelligent microscale machines—these machines, he says, may be used to repair tissue or deliver targeted cancer treatments. Before that happens, however, more research into cell mechanisms is needed—and to that end, he said, TUM's artificial protocells are a major contribution. Learn more about the research at TUM's news site, "Artificial cells take their first steps: Movable cytoskeleton membrane fabricated for first time." Image Credit: Technische Universität München (TUM)
The Apollo astronauts knew that moon dust was troublesome stuff. Now that dust could limit our ability to find cracks in Einstein's general theory of relativity. Many of our best tests of relativity come from lunar ranging experiments. Several times a month, teams of astronomers from three observatories blast the moon with pulses of light from a powerful laser and wait for the reflections from a network of mirrors placed on the lunar surface by the Apollo 11, 14 and 15 missions, as well as two Soviet Lunokhod landers. By timing the light's round trip, they can pinpoint the distance to the moon with an accuracy of around a millimetre – a measurement so precise that it has the potential to reveal problems with general relativity. But now Tom Murphy from the University of California, San Diego, who leads one of the teams at the Apache Point Observatory in Sunspot, New Mexico, thinks the mirrors have become coated in moon dust. "The lunar reflectors are not as good as they used to be by a factor of 10," he says.
Photons gone missing
The fainter light is a problem for lunar ranging experiments. Out of every 100 million billion (10^17) photons Murphy's team fires at the moon, only a handful make it back to Earth. Most are absorbed by Earth's atmosphere on the way to the moon and back, or miss the mirrors altogether. Murphy first suspected two years ago that the dust problem was cutting the light down even further. He was puzzled to detect far fewer photons than he expected, even when the atmospheric conditions were perfect. His team also saw a further drop when the moon was full, and used to joke about the "full moon curse". This gave Murphy some clues. He suspects that moon dust is either coating the surface of the mirrors or has scratched them. Both scenarios would increase the amount of heat the mirrors absorb, and so during a full moon, sunlight falling on the mirrors would heat them up and change their optical properties. As a result, the mirrors would not reflect light as efficiently. Even though the moon has no atmosphere, dust can be stirred up from the surface by the impact of micrometeorites.
Traces of dust
Murphy has scoured measurements stretching back to the 1970s and found that the problem first appeared between 1979 and 1984, and has been getting worse. However, he is unwilling to predict whether the mirrors will deteriorate further. The Apache Point experiment can still make measurements, but the degradation is a bigger problem for other lunar ranging experiments that use less powerful lasers. More measurements from different sites would improve the limits on general relativity. Murphy's findings also highlight problems that astronomers might face if they ever build a telescope on the moon. The results were reported at the American Physical Society meeting in Washington DC and have been submitted to the journal Icarus.
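As a back-of-the-envelope check of what millimetre accuracy demands (our arithmetic, not from the article), the distance follows from the round-trip time as d = c·t/2, so a millimetre of range corresponds to only a few picoseconds of timing:

    # Speed of light and mean Earth-Moon distance (approximate values).
    c = 299_792_458.0   # m/s
    d = 384_400e3       # m

    round_trip = 2 * d / c      # time for the laser pulse's round trip
    dt_per_mm = 2 * 1e-3 / c    # timing shift per millimetre of range
    print(f"round trip: {round_trip:.2f} s")                     # ~2.56 s
    print(f"1 mm of range = {dt_per_mm * 1e12:.1f} ps of timing")  # ~6.7 ps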
There are a number of ways to prevent unauthorized access or loss, including access controls, encryption, firewalls, antivirus programs, and backups. Once a user has been authenticated, the next step is to ensure that they can only access the information resources that are appropriate. This is done through the use of access control. Access control determines which users are authorized to read, modify, add, or delete information. Several different access control models exist. Two of the more common are the Access Control List (ACL) and Role-Based Access Control (RBAC). An information security employee can produce an ACL which identifies a list of users who have the capability to take specific actions with an information resource such as data files. Specific permissions are assigned to each user, such as read, write, delete, or add. Only users with those permissions are allowed to perform those functions. ACLs are simple to understand and maintain, but there are several drawbacks. The primary drawback is that each information resource is managed separately, so if a security administrator wanted to add or remove a user across a large set of information resources, it would be quite difficult. And as the number of users and resources increases, ACLs become harder to maintain. This has led to an improved method of access control, called role-based access control, or RBAC. With RBAC, instead of giving specific users access rights to an information resource, users are assigned to roles and then those roles are assigned the access. This allows the administrators to manage users and roles separately, simplifying administration and, by extension, improving security. For example, permissions can be granted to roles such as Reader and Editor, and each user is then assigned one or more of those roles. Many times an organization needs to transmit information over the Internet or transfer it on external media such as a flash drive. In these cases, even with proper authentication and access control, it is possible for an unauthorized person to gain access to the data. Encryption scrambles data so that it is unreadable to anyone who does not hold the key that decrypts it. When both parties share the same secret key, this is referred to as symmetric key encryption. Encryption makes information secure because an encrypted message appears to anyone without the key as a random series of letters and numbers. An alternative to symmetric key encryption is public key encryption. In public key encryption, two keys are used: a public key and a private key. To send an encrypted message, you obtain the recipient's public key, encode the message, and send it. The recipient then uses their private key to decode it. The public key can be given to anyone who wishes to send the recipient a message. Each user simply needs one private key and one public key in order to secure messages. The private key is necessary in order to decrypt a message sent with the public key. In other words, the sender creates a plaintext message and encrypts it with the recipient's public key; the ciphered text is transmitted through the communication channel, and the recipient uses their private key to decrypt the message and read the plain text.
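Before moving on, two short Python sketches may help make the ideas above concrete; the user, role, resource, and key names in them are illustrative placeholders, not anything from the text. First, access control: an ACL attaches users directly to each resource, while RBAC routes the same decision through a role.

    # ACL: permissions attached directly to each resource, per user.
    acl = {"report.xlsx": {"alice": {"read", "write"}, "bob": {"read"}}}

    def acl_allows(resource, user, action):
        return action in acl.get(resource, {}).get(user, set())

    # RBAC: users map to roles, and roles map to permissions.
    user_roles = {"alice": {"editor"}, "bob": {"reader"}}
    role_perms = {"editor": {"read", "write", "delete"}, "reader": {"read"}}

    def rbac_allows(user, action):
        return any(action in role_perms[role] for role in user_roles.get(user, set()))

    print(acl_allows("report.xlsx", "bob", "write"))  # False
    print(rbac_allows("alice", "delete"))             # True

Adding a user under RBAC means assigning a role once rather than editing every resource's list, which is exactly the maintenance advantage described above. Second, the two styles of encryption, sketched here with the third-party Python "cryptography" package (an assumption on our part; any comparable library would do): Fernet implements symmetric encryption with a single shared secret key, while RSA with OAEP padding illustrates the public/private key exchange.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Symmetric: both parties must hold the same secret key.
    shared_key = Fernet.generate_key()
    token = Fernet(shared_key).encrypt(b"quarterly payroll file")
    assert Fernet(shared_key).decrypt(token) == b"quarterly payroll file"

    # Public key: anyone may encrypt with the public key, but only
    # the holder of the matching private key can decrypt.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = private_key.public_key().encrypt(b"hello", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"hello"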
Firewalls are another method that an organization can use for increasing security on its network. A firewall can exist as hardware or software, or both. A hardware firewall is a device that is connected to the network and filters the packets based on a set of rules. One example of these rules would be preventing packets entering the local network that come from unauthorized users. A software firewall runs on the operating system and intercepts packets as they arrive at a computer. A firewall protects all company servers and computers by stopping packets from outside the organization's network that do not meet a strict set of criteria. A firewall may also be configured to restrict the flow of packets leaving the organization. This may be done to eliminate the possibility of employees watching YouTube videos or using Facebook from a company computer.
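As a toy illustration of this rule-based filtering (the addresses and rules are hypothetical), a packet filter can be reduced to a first-match scan over an ordered rule list that denies by default:

    import ipaddress

    # Ordered rules: the first match wins, as in most real packet filters,
    # which also match on ports, protocols, and connection state.
    RULES = [
        {"src": "10.0.0.0/8", "action": "allow"},   # internal traffic
        {"src": "0.0.0.0/0",  "action": "deny"},    # everything else
    ]

    def filter_packet(src_ip):
        addr = ipaddress.ip_address(src_ip)
        for rule in RULES:
            if addr in ipaddress.ip_network(rule["src"]):
                return rule["action"]
        return "deny"  # fail closed if no rule matches

    print(filter_packet("10.1.2.3"))     # allow
    print(filter_packet("203.0.113.9"))  # deny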
A VPN allows a user who is outside of a corporate network to take a detour around the firewall and access the internal network from the outside. Through a combination of software and security measures, a VPN provides off-site access to the organization's network while ensuring overall security. Antivirus programs are software that can be installed on a computer or network to detect and remove known malicious programs such as viruses and spyware. While antivirus programs provide some protection, they are a reactive defense in that they must first understand what to look for. Another essential tool for information security is a comprehensive backup plan for the entire organization. Not only should the data on the corporate servers be backed up, but individual computers used throughout the organization should also be backed up. A good backup plan should consist of several components.
- Full understanding of the organization's information resources. What information does the organization actually have? Where is it stored? Some data may be stored on the organization's servers, other data on users' hard drives, some in the cloud, and some on third-party sites. An organization should make a full inventory of all of the information that needs to be backed up and determine the best way to back it up.
- Regular backups of all data. The frequency of backups should be based on how important the data is to the company, combined with the ability of the company to replace any data that is lost. Critical data should be backed up daily, while less critical data could be backed up weekly. Most large organizations today use data redundancy so their records are always backed up.
- Offsite storage of backup data sets. If all backed-up data is being stored in the same facility as the original copies of the data, then a single event such as an earthquake, fire, or tornado would destroy both the original data and the backup. It is essential that the backup plan include storing the data in an offsite location.
- Test of data restoration. Backups should be tested on a regular basis by having test data deleted and then restored from backup. This will ensure that the process is working and will give the organization confidence in the backup plan.
Besides these considerations, organizations should also examine their operations to determine what effect downtime would have on their business. If their information technology were to be unavailable for any sustained period of time, how would it impact the business? Additional concepts related to backup include the following:
- Uninterruptible Power Supply (UPS). A UPS provides battery backup to critical components of the system, allowing them to stay online longer and/or allowing the IT staff to shut them down using proper procedures in order to prevent data loss that might occur from a power failure.
- Alternate, or "hot," sites. Some organizations choose to have an alternate site where an exact replica of their critical data is always kept up to date. When the primary site goes down, the alternate site is immediately brought online so that little or no downtime is experienced.
As information has become a strategic asset, a whole industry has sprung up around the technologies necessary for implementing a proper backup strategy. A company can contract with a service provider to back up all of its data, or it can purchase large amounts of online storage space and do it itself. Technologies such as Storage Area Networks (SAN) and archival systems are now used by most large businesses for data backup.
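The "test of data restoration" step described above lends itself to automation. Here is a minimal sketch (the directory paths are placeholders) that verifies a restored copy matches the original by comparing file checksums:

    import hashlib
    import pathlib

    def checksum(path):
        """SHA-256 digest of a file's contents."""
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    def verify_restore(original_dir, restored_dir):
        """Return True if every file in the original tree has an identical restored copy."""
        original = pathlib.Path(original_dir)
        restored = pathlib.Path(restored_dir)
        for f in original.rglob("*"):
            if f.is_file():
                copy = restored / f.relative_to(original)
                if not copy.exists() or checksum(f) != checksum(copy):
                    return False
        return True

    print(verify_restore("/srv/data", "/mnt/restore_test"))  # placeholder paths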
East vs. West: Why earthquakes are felt differently on either side of the US
While you may think quakes are a western US problem, some of the largest temblors in US history have happened in the East.
According to the U.S. Geological Survey, there are about 20,000 earthquakes each year around the world. In the U.S., a majority of the shaking happens in the western regions of the country. However, a look at history shows there has been plenty of shaking in the East as well. According to Thomas Pratt, Central and Eastern region coordinator at the USGS Geologic Hazards Science Center, four of the 10 largest earthquakes recorded in the Lower 48 happened in the East. "There were three large earthquakes in the New Madrid seismic zone in the Mississippi Valley in 1811-1812, and there was a magnitude 7 in Charleston, South Carolina, in 1886 that caused extensive damage," Pratt said. "So, we do get some large ones in the East." When those quakes happen in the East, they can be felt hundreds of miles away from the center of the shaking, whereas in the West, the tremors are usually felt much closer to the epicenter. A comparison of two quakes of similar magnitudes, based on a USGS metric called "Did You Feel It?", shows the difference in how far earthquakes are felt from the center in the East versus the West. Pratt said those differences come from the types of rocks that make up the ground on either side of the country. "Earthquakes are felt a lot more widely in the eastern U.S. because the rocks are much older, they're much harder, and they transmit energy much more efficiently than they do in the western U.S.," Pratt said. He said you can think of the ground in the East as a granite countertop in a kitchen, while the ground in the West is more akin to a sponge. "Think of the western U.S. more as a stiff soil where you can still pick at it with a shovel and things like that," Pratt said. "It just doesn't transmit the energy, and so, when you get the same sized earthquake in both places, you feel it far, far more widely." Pratt said that wider area of shaking can, in principle, equate to more damage caused by an eastern U.S. earthquake. However, the difference in ground alone doesn't mean more damage. "A lot of buildings in the eastern U.S. are not built to withstand earthquakes, whereas almost everything built in the western U.S. now is built with pretty strict earthquake standards," Pratt said. "There's a lot of older buildings in the eastern U.S. that do not have any kind of seismic strengthening at all, and so, one of the concerns we have is a large earthquake underneath something like New York or Boston or Charleston, South Carolina, would cause far more damage than something underneath Los Angeles."
Eastern earthquakes are still a mystery
Another difference in quakes on either side of the U.S. is that scientists are not quite sure why they happen in the East at all. "The most active ones tend to be at the plate boundary, specifically California, Washington, Oregon, Alaska, where you've got the Pacific plate rubbing against the North American plate," Pratt said. "In the eastern U.S. it's more of a mystery, and we really don't know why there's earthquakes in the eastern U.S., because that should be a stable continental plate without a lot of motion across the fault." Some of the most seismically active places in the eastern U.S. are Charleston, South Carolina; Central Virginia; parts of New England; and the New Madrid region along the Mississippi River.
Pratt said there has also been an increase in manmade seismicity in the past couple of decades in Kansas, Oklahoma and Texas, caused by oil and natural gas extraction. "What happens is that the rocks are very close to failure, and just a small perturbation from fluid injection is enough to trigger an event," Pratt said.
Preparing for shaking
Pratt said preparing for an earthquake is similar to preparing for other natural disasters with which people in the East are much more familiar. "If there is a large earthquake, it may be a day or two before rescue operations can get to you, so make sure you've got a couple of days' worth of food and water, and that's true for any natural hazards like hurricanes, tornadoes, etc.," Pratt said. Pratt said people should also ensure furniture like bookshelves is bolted to the wall and that large objects aren't placed high up on shelves. He said people should also make sure their homes are bolted to their foundations.
TORONTO – A research team out of Western University has helped find evidence of past microbial life in a German meteorite crater that they hope will help scientists find evidence of past life on Mars. The crater, Nordlinger Ries, is about 14 million years old, relatively young when it comes to meteorite craters. "This is like a window into what would have happened or could have happened four billion years ago on Earth," Gordon Osinski, part of the Western research team, told Global News. Osinski is the associate director at the Centre for Planetary Science and Exploration at Western University as well as the Director of the Canadian Lunar Research Network. "One of the issues when we talk about early life on Earth or Mars is that there are no rocks older than about four billion years on Earth," said Osinski. "And so, we don't know really where or when life evolved on Earth." When a meteorite impacts a planet or another body, extreme heat is released, creating glass. Together with water, the immediate area forms a hydrothermal system, with hot water rising to the surface and cooling, and the colder water then sinking. The area becomes very conducive to life. Upon studying the rocks found at Nordlinger Ries, the researchers found extremely small tubular features — about one to three millionths of a metre in diameter. They believe that the microbes which left these trace fossils lived in the newly created meteorite impact glass, eating their way into it over hundreds of thousands, if not millions, of years and leaving behind what is essentially a footprint. "The really neat thing about impact craters is that every object in the solar system with a solid surface, whether it be ice or rock, has been struck in the past," Osinski said. That, he said, means that Mars — believed to have once had flowing water — could have harboured similar hydrothermal systems, and therefore produced microbial life. Osinski believes that, when most people think about impact craters, images of mass destruction are called to mind. However, the impacts can be beneficial to microbial life. "On early Earth and Mars where they were quite inhospitable environments, these craters would have been protective little oases where life would thrive and evolve." "If we're looking for life on Mars, that's what we're looking for." Using the Mars Reconnaissance Orbiter, they know that there are areas that hold more promise than others, mainly due to the formation of clay. "There are a couple of craters…where there does look to be evidence for hydrothermal alteration," said Osinski. Though it is challenging to find the areas that may hold the most promise for signs of hydrothermal alteration, there are some prime locations. The researchers look for clays that, on Earth, need water to form. "There's morphological evidence for what looks like hot spring deposits," Richard Leveille, a research scientist at McGill University, told Global News. Leveille is also on the team of the Mars Science Laboratory, otherwise known as Curiosity. So why hasn't Curiosity found any evidence? "Curiosity is in a crater right now, so there is interest in finding possibly hydrothermal minerals," said Leveille. "But the problem is, Curiosity is smack in the middle of the crater at a low elevation.
And a lot of the hydrothermal systems that develop in a crater are located around the rim or near the rim of the crater, and Curiosity is never going to get there.” Leveille said that this research is promising, and that because it’s difficult to find these trace fossils from afar, a future mission that returns some samples to Earth may need to be considered.
The first instance of recording Scripture occurs at Moses' hand at the inauguration of the covenant between YHWH and Israel (Exod. 24:4–8). Scholars mostly agree that the Book of the Covenant mentioned there entailed chapters 21–23 of Exodus, but opinions vary.1 Moses' upbringing in Egypt explains how he became a scribe in the first place,2 because the Egyptians placed a high degree of esteem and respect on the scribe. They believed that a scribe was his own master and that his was the highest of trades to which one could aspire.3 Moses evidently received scribal training in Egypt during the first forty years of his life in the higher echelons of society, and that skill would serve him well as the leader of Israel. Even in Israel's later history, we see the scribe as one moving in royal circles (2 Chron. 24:11; Esth. 3:12). The scribal chamber was within the palace (Jer. 36:12), and scribes' work often detailed the exploits of the monarchs they served (1 Kings 11:41) as well as the reign of the monarchy itself (1 Kings 14:19, 29). They also served by writing the decrees ordered (Dan. 6:8) and taking dictation (Jer. 36:32). Some might be sent to record the military skirmishes in which the realm was engaged (Jer. 52:25), and a useful skill for the scribe to possess in later times was to be bilingual (2 Kings 18:26).4 Following Moses in the station of prophet were other prophets who recorded books or records here and there (Josh. 24:26; 1 Sam. 10:25). Later, we even read about holy people referring to what had been written (Dan. 9:2; Neh. 8:1). This process led to the formation of what we know as the Old Testament around 400 BCE,5 with some arguing that the Law, or Pentateuch (the first five books of the Bible), was itself authoritative by that time if not earlier. By 200 BCE or earlier, the prophets were canonized (cf. Is. 34:16; Jer. 36:6ff).6 Unlike our Christian Bibles, where the Old and New Testaments are major divisions, the Hebrew Bible (Tanakh) grouped its books differently. There are three groupings of books:
- The Law (Torah)
- Prophets (Nevi'im)
- Josh., Judg., Samuel and Kings (Former Prophets)
- Isaiah, Jer., Ezek., and the Twelve (Latter Prophets)
- Writings (Ketuvim)
- Song of Songs, Ruth, Lamentations, Ecclesiastes, and Esther (Five Megillot)
- Daniel, Ezra and Nehemiah, and Chronicles.
This tripartite division is reflected in Ben Sira, who was the first to refer to it in this way (180–175 BCE), though the division itself may be earlier than him. Centuries before this time, King Josiah (622 BCE) found a copy of the Law in the temple, and his subsequent reverence for it demonstrates its authority in the life of Israelite society (2 Kings 22:3–20). After the captivity, Ezra had a copy of the Law with which to lead the nation (Ezra 7:6; Neh. 8:1ff). Centuries before them, Joshua (13th century BCE) read the same (Josh. 8:34–35), and King David was to have had a personal copy (Deut. 17:18–20; cf. Deut. 31:9, 25–26). We know David consulted it after Uzzah died (2 Sam. 6:1–10; cf. 1 Chron. 15:1–13), but it is obvious that it was not central at all times. The interlude from the reading under Joshua until the next recorded reading is a noted period of silence in public readings. During that time, the united kingdom of Israel was divided; the northern kingdom followed an idolatrous path, while the southern kingdom sinned as well, though with periods of reformation. The next public reading came after the high priest Hilkiah found the Book of the Law in the temple during the reign of King Josiah of Judah.
Hilkiah took the book to the king's secretary, who then took it to the king. Upon hearing the words of the Book of the Law, King Josiah grieved and sent to inquire of the Lord, because all the curses of the book were to be rendered to the unfaithful people of Judah (2 Kings 22–23; 2 Chron. 34). When Josiah assembled the people to have the Book of the Law read in their hearing, he led a covenant renewal to which the people consented. However, after so many years of apostasy, beginning with King Solomon, and so many years of neglecting to read the Law, the trajectory of Judah went unchanged. Therefore, the land was purged of its inhabitants so that it could undergo a period of cleansing (cf. Lev. 18:28; 20:22). This points us to the authority the Law and Prophets had. What we find is that those who were well regarded adhered to the Law. We also note that its absence from the life of Israel resulted in an ignorance that permitted apostasy.
1 Robert Alter, The Five Books of Moses: A Translation with Commentary (New York: W. W. Norton & Company, Inc., 2004), 456.
2 The education of Moses is recorded in Philo, De Vita Mosis I 20–24, 32.
3 Christopher A. Rollston, Writing and Literacy in the World of Ancient Israel: Epigraphic Evidence from the Iron Age (Atlanta: Society of Biblical Literature, 2010), 87.
4 Ibid., 88–89.
5 Neil R. Lightfoot, How We Got the Bible, rev. ed. (Abilene: Abilene Christian University Press, 1986), 8.
6 Jack P. Lewis, Between the Testaments (Nashville: 21st Century Christian Publishing, 2014), 92–93.
Ages: 4 and up
Grades: PreK and up
How It Works: Individual picture cards can be used on their own or as a complete picture scene set to target many language goals including sentence formulation, use of nouns, verbs and verb tenses, pronouns, adjectives, prepositions, temporal concepts, main idea, exposure to basic concepts, and more. Below are a few different ways Spark Cards can be used, and you can create your own ideas as well!
- Circle objects with target sounds in the picture scenes. Practice the words or phrases. Have the child tell a story about the scenes.
- Increasing Vocabulary: Label basic vocabulary words and objects in pictures. Describe, categorize and expound on the concepts.
- Sentence Structure: Have the child formulate a sentence for each picture card. Give the child a word to use to formulate sentences. Work on increasing MLU by increasing the length and complexity of the sentences.
- Telling Narratives: Teach a child how to tell a story. Focus on story elements such as character, setting, plot, climax, and conflict. Have the child formulate sentences, use conjunctions and transitional words to tell the story in sequential order.
- Main Idea: Have the child identify a title for a set of story cards and/or tell a main idea for each picture illustration.
- Use of Pronouns: Circle the characters in the picture cards. Have the child practice using pronouns correctly when formulating sentences about the characters.
- Verb Usage: Line up several cards and work on past, present, future tense verbs. Work on verbs, plurals, and irregular plurals while describing the picture cards and formulating sentences about them.
- Sequence It: Have the child set the cards up in the correct order using inferencing skills to infer the correct order of the cards.
- Answering ‘wh’ Questions: Using the prompt cards, ask ‘wh’ questions such as ‘who,’ ‘what,’ ‘when,’ ‘where,’ ‘why,’ and ‘how.’
- Emotions: Use a marker to make thinking bubbles and/or speech bubbles to illustrate what the characters might be thinking, saying and feeling.
- Problem Solving: Work on cause and effect and possible solutions to solve the problems depicted in the pictures.
What’s In The Box: Spark Holiday Sequencing Cards includes 8 story sets with 6 cards in each set.
- Valentine’s Day
- Decorating Easter Eggs
- July 4th Picnic
- Halloween Costume
- Halloween Pumpkin
- Baking Christmas Cookies
- Decorating A Christmas Tree
These high-quality cards are child friendly and made of strong card stock so they won’t rip. Large 4.5″ x 4.5″ cards for easy viewing. Each box contains an answer key with a question guide for each picture.
TIP: Use a dry-erase marker to highlight picture details while working with your child. Dry-erase markings are easily erasable so cards can be used again and again.
Target Age Range: 4 and up. Adaptable for all ages! Spark Cards are a perfect speech therapy game for classroom and center-based activities, individual therapy sessions, and at home with parents and the entire family. The skill level of cards is adaptable to many levels by simply trimming or using all the cards in a set. To achieve higher level skills and critical thinking, use all 6 picture cards as a set. The sets can also be trimmed to 3-4 picture scenes by removing 2-3 picture cards for lower-level sequencing and story retelling. The complexity of the questions can also be adjusted according to the age and skill level of the child.
Value: Ethical & Social
Location: Indoor, Online, Outdoor
Recommended group size: 10-20
Recommended time / Minutes: 30-60
Overview: Generally, dialogue is a conversation between two or more people; a conversation, negotiations between two or more groups to solve a problem, to resolve disagreements, etc. In philosophy, a dialogue is conducting a conversation to show a problem and come to a solution. We encounter the dialogical form of cognition first in Socrates, then in the Sophists and Plato. Plato, through his interpretation of thought as a conversation of the soul with itself, developed the dialogue into perfection. In contemporary philosophical usage, a dialogue implies mutual communication between people that leads to a common meaning, and which cannot be reduced to any of the participants in the conversation.
Learning objectives: Analyze children’s attitudes towards animals’ needs and feelings in order to understand animal rights.
Skills developed: Critical thinking, self-reflection
Method: Socratic dialogue, philosophical dialogue
Materials: Indoors (classroom): black or white board, chalk or markers. Online: computer, cloud platform for video and audio conferencing, e.g., Zoom. Outdoors: flipchart and markers.
Before you start, make sure that children are comfortable and relaxed.
● They can sit however they want, but they need to be aware of your (the facilitator’s) presence at every moment.
● If you don’t know the children you are working with, start the workshop by presenting yourself and getting to know them.
● The questions below are an example of how to conduct a dialogue. Make sure to adjust the questions and the duration of the activity to the age of the children you work with.
● Present to the children the rules of participation in the workshop:
⮚ If you want to say something, you need to raise your hand.
⮚ You need to listen to the others very carefully, because it is very important to follow the discussion.
⮚ Think about the topic of the discussion and express your opinion.
Start the workshop by asking the questions:
● How many animal species do you know? Let them give brief answers.
● Ask children to name their favourite animals. Each child should choose just one animal (it could be a wild or domestic animal or a pet). Encourage children to choose different animals, not just their pets (e.g., a dog or cat).
● They should say, one by one, what their favourite animal is and briefly explain why.
● Every proposed animal that is not repeated needs to be written on the board.
After all answers are written, ask the questions:
● Is your favourite animal wild, domestic or a pet?
● What is the difference between wild, domestic and pet animals?
If you see that children have difficulty knowing the differences, explain them. After the explanation (if necessary) start the discussion with questions:
● Who divided animals into wild, domestic, and pets?
● Why does this division exist?
● Does this division mean anything to the animals? Why?
Encourage children to answer the questions. After they answer, continue the discussion with a new set of questions.
● What are the basic life needs of your animal? e.g., food, water, breathing, movement
● What are your basic life needs? e.g., food, water, breathing, movement
Encourage children to answer the questions. Write down the answers for each question in two columns: animal needs and human needs. Next, continue the discussion about emotions by asking questions such as:
● Do you feel fear, sadness, or happiness?
● Can you recognize fear, sadness or happiness in a person? How? (e.g., facial expressions, voice, behaviour…)
● Do animals feel fear, sadness or happiness?
● Can you recognize fear, sadness or happiness in an animal? How? (e.g., facial expressions, voice, behaviour…)
Encourage children to answer the questions. Write down the answers to the “How?” questions in two columns: animal emotions and human emotions. Next, compare the columns by asking questions.
Do you have more similarities or differences with animals in terms of living needs? Why?
Do you have more similarities or differences with animals in terms of feelings? Why?
Based on the comparison, ask the final questions for discussion.
● What’s the difference between animals and humans?
● Do you have rights? Which?
● Do animals have rights? Which?
● Are the rights of animals and people the same? Why?
● Should animals have greater rights?
● Will we be better persons if we understand and respect both human and animal rights? Why?
Having a final and precise definition is not the main purpose of the activity! The aim is that children think about animal rights.
Tips: The guidelines written by Reich (2003) for the Socratic method can be put to good use in this workshop:
● Look for a suitable space and create a welcoming environment
● Learn the children’s names and have them learn each other’s names
● Explain the ground rules
● Ask questions and be comfortable with silence. Silence is productive. If nobody replies, rephrase your question after a while.
● Create what Reich calls a “productive discomfort”. Do not remove discomfort immediately, because this is what independent learning feels like. Allow participants to gain comfort with ambiguity.
● Welcome new differences
● Do not reject “crazy ideas”, since they can offer a new perspective, but discourage ideas that are an attempt to escape engagement.
● Above all else, use follow-up questions to clarify points in the answer to a previous question
● As a facilitator, be open to learning something new.
How to apply it online? For the online version of the workshop you can create a Word document and share your screen so everyone can see it.
● Once they have completed the discussion, divide the children into groups and instruct them to create a list of rights that would protect their selected animals. Children should think of and write down at least five rights that would protect their selected animals on a separate, large piece of paper, leaving enough room on it to stick pictures or names of the animals afterwards. These rights can be very specific, and children must be encouraged to think from the perspective of the animals involved.
● One representative from each group (or groups as a whole) will then present the poster with the list of rights to everyone. After each presentation, you can involve children in a discussion. Why have they chosen these rights? How and why are they important? Are they important for every animal that their group represents? Are they important for animals in general? Are they relevant to humans too?
● Find room for the posters with the lists of rights in the classroom or in the school hallway and display them there so that others can observe them.
What to do at home? Instead of group work in the classroom, children can make posters at home and bring them to school.
Author: Marija Kragić, Ivana Kragić, Bruno Ćurko.
You’ve seen it on the news: Wildfires continue to pop up on the West Coast as drought conditions in states such as California, Oregon, and Arizona persist, and the devastating Australian bush fires are finally subsiding. Many scientists and mainstream news sources are quick to draw false conclusions that climate change is causing these wildfires or at least making them worse than they normally would be. In reality, climate change has nothing to do with these disasters. Let’s start with California as a case study. Humans cause 95% of the fires the California Department of Forestry and Fire Protection responds to. In fact, the wildfires that led to California’s recent forced blackouts were caused by missed safety upgrades to the state’s electric grid. Their severity is primarily due to the suppression of smaller fires and a decline in responsible forest management practices, not to unusually hot or dry weather. Many allege that the intensity and number of droughts will only increase in coming years due to climate change caused by greenhouse gas emissions. However, the United States as a whole has seen no discernible increase in overall extreme dry or wet conditions over the past 50 years. Overall, the lower 48 has been wetter since 1980 than it was during the first part of the 20th century. In fact, the most widespread droughts in the mainland United States occurred during the 1930s and 1950s. This period, often referred to as the Dust Bowl, occurred prior to the significant increase in the amount of greenhouse gas emissions around the world. If climate change is indeed causing droughts, why is the country not experiencing worsening drought conditions as temperatures have risen since then? Similarly, while some claim that climate change causes more frequent severe storms, extreme weather has not been on the rise in recent years. The number of hurricanes that have made landfall in the United States each year since the late 1800s has been essentially flat. The 36 costliest U.S. hurricanes since the late 1930s show a downward trend in intensity. There is also no evidence that storm intensity has increased over the years. The Accumulated Cyclone Energy (ACE) Index — which measures cyclone strength, frequency, and duration — shows that tropical cyclone activity during the 1950s and 1960s was almost the same as during the 1990s and 2000s. If climate change were really exacerbating the severity of natural disasters, why are we not seeing an increase in cyclone activity and strength? Many claims that hurricanes and other natural disasters are getting worse are calculated using the cost of damages. Although the cost to rebuild after severe storms has increased significantly over the past several decades, the magnitude of severe storms has not. As society evolves, cities develop more densely, and more people move to coastal regions (with 3.6 million new acres of coastal land developed between 1996 and 2010), it understandably becomes more expensive to repair high-end buildings and infrastructure. What matters more in this conversation is the human cost of storms — and there’s good news to celebrate. Fewer people than ever lose their lives to climate-related natural disasters, with global deaths declining 98.9% in the last century. Contrary to the popular narrative, humanity is becoming more resilient to severe weather, not less. Disaster resiliency and preparedness are serious concerns our elected leaders should prioritize.
However, recent data does not support the assertion that greenhouse gas emissions will worsen severe weather. The destruction inflicted by disastrous storms is not manmade, but rather part of natural weather cycles. Such claims are simply another misleading tactic to spread climate alarmism and promote ineffective and expensive government mandates to reduce greenhouse gas emissions.
What date does the Electoral College vote in 2020? December 14, 2020—electors vote in their states. A set of electoral votes consists of one Certificate of Ascertainment and one Certificate of Vote.
How are Electoral College votes distributed? Currently, there are 538 electors, based on 435 representatives and 100 senators from the fifty states plus three electors from Washington, D.C. The six states with the most electors are California (55), Texas (38), New York (29), Florida (29), Illinois (20), and Pennsylvania (20).
When was the first electoral college vote? It was held from Monday, December 15, 1788, to Saturday, January 10, 1789, under the new Constitution ratified in 1788. George Washington was unanimously elected for the first of his two terms as president, and John Adams became the first vice president.
Does popular vote determine electoral vote? When citizens cast their ballots for president in the popular vote, they elect a slate of electors. Electors then cast the votes that decide who becomes president of the United States. Usually, electoral votes align with the popular vote in an election.
How are electoral votes allocated per state? Electoral votes are allocated among the States based on the Census. Every State is allocated a number of votes equal to the number of senators and representatives in its U.S. Congressional delegation—two votes for its senators in the U.S. Senate plus a number of votes equal to the number of its Congressional districts.
Is the Electoral College in the original Constitution? The Founding Fathers established the Electoral College in the Constitution, in part, as a compromise between the election of the President by a vote in Congress and election of the President by a popular vote of qualified citizens. However, the term “electoral college” does not appear in the Constitution.
Has any president run unopposed? Taking place at the height of the Era of Good Feelings, the 1820 election saw incumbent Democratic-Republican President James Monroe win re-election without a major opponent. It was the third and last United States presidential election in which a presidential candidate ran effectively unopposed.
What happens if you don’t get 270 electoral votes? A candidate must receive an absolute majority of electoral votes (currently 270) to win the presidency or the vice presidency. If no candidate receives a majority in the election for president or vice president, that election is determined via a contingency procedure established by the 12th Amendment.
Do all of a state’s electoral votes go to one candidate? Most states require that all electoral votes go to the candidate who receives the most votes in that state. After state election officials certify the popular vote of each state, the winning slate of electors meet in the state capital and cast two ballots—one for Vice President and one for President.
What has been the closest presidential election?
The 1960 presidential election was the closest election since 1916, and this closeness can be explained by a number of factors.
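As a toy illustration of the allocation rule described above (two votes for a state's senators plus one per Congressional district, with three more electors for Washington, D.C.), here is a short Python sketch. The House seat counts are the 2010-census figures implied by the state totals quoted above:

```python
# Sample House seat counts; 53 + 2 = California's 55 electors, and so on.
house_seats = {"California": 53, "Texas": 36, "New York": 27, "Florida": 27}

def electoral_votes(representatives: int) -> int:
    """Senate seats (always 2) plus House seats."""
    return 2 + representatives

for state, seats in house_seats.items():
    print(f"{state}: {electoral_votes(seats)} electoral votes")

# Across all states: 435 House seats + 100 Senate seats + 3 for D.C.
print("Total electors:", 435 + 100 + 3)  # 538, so a majority is 270
```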
The spread of infectious disease through bugs and pests can be largely attributed to two pests in particular: ticks and mosquitoes. The prevalence of disease-carrying pests depends largely on the climate of the area, meaning changes in the climate can affect the chances of being infected by these pests. Here is a breakdown of which diseases are carried by which pest and how they are affected by climate change. Ticks are most commonly known for transmitting Lyme disease, but they also carry more than a dozen diseases in the United States, including spotted fever, Heartland virus, and Powassan. Lyme disease has been increasingly prevalent over the last 30 years, with the number of cases more than tripling in that time period. Even areas as far north as Maine and Minnesota have seen an increase in tick population. This increase and spread of tick-borne diseases can be partly attributed to a warming climate. A tick feeds on the blood of about three animals over the course of its two-year life cycle. However, ticks can only search for their prey when the weather is warmer, and they are forced to lie immobilized in burrows or other warmer areas during freezing winter temperatures. Several decades ago, many places in the U.S. didn’t have long enough summers to keep ticks well-nourished and healthy, and because of this, tick-borne diseases weren’t an issue in the colder northern states. However, with spring arriving even earlier in many northern areas, ticks are surviving in more places and contributing to large outbreaks of Lyme disease in the summer where there had previously been no issue. Mosquitoes are also capable of transmitting a number of diseases, including malaria, dengue, West Nile virus, Zika, and chikungunya. Like ticks, mosquitoes thrive in warm weather and become inactive at low temperatures. Particularly cold winters can actually wipe out certain species, including ones that spread dengue, yellow fever, and Zika. However, warm weather doesn’t necessarily mean an increase in mosquito-borne illnesses; hotter weather could actually decrease the number of cases in certain areas. When a mosquito bites an infected host, the pathogen must go through an incubation period within the mosquito that lasts from days to over a week before it can be retransmitted to another host. The length of that period depends on the outside temperature, with warmer weather allowing for faster incubation. However, mosquitoes have a short life span, which becomes even shorter when temperatures get hotter. This creates a small window of time in which the pathogen incubates and the mosquito remains alive to transmit it, so in certain areas increased temperatures can kill off the mosquitoes before the pathogen finishes incubating. This, however, causes concern in areas where the temperature is currently just too low for mosquitoes to thrive, as a slight increase in temperature would put them within that window, alive long enough for the pathogens to incubate. Have a tick or mosquito problem? Protect your house and property with Tick Killz, an all-natural pest killer that targets pests like ticks, mosquitoes, fleas, and other nuisance insects. Visit our website to find Tick Killz at a retailer online or near you.
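The incubation-window reasoning above can be made concrete with a toy calculation. The sketch below assumes two invented, illustrative relationships (faster incubation but lower daily mosquito survival as temperature rises); none of the numbers are field data:

```python
# Toy model of the transmission "window". All parameters are invented
# for illustration and are not measured values.

def incubation_days(temp_c: float) -> float:
    # Assumption: the pathogen incubates faster in warmer weather.
    return max(2.0, 30.0 - temp_c)

def daily_survival(temp_c: float) -> float:
    # Assumption: hotter weather shortens mosquito lifespan.
    return max(0.0, 0.95 - 0.03 * max(0.0, temp_c - 25.0))

def p_survives_incubation(temp_c: float) -> float:
    """Probability a mosquito lives through the full incubation period."""
    return daily_survival(temp_c) ** incubation_days(temp_c)

for t in (15, 20, 25, 30, 35):
    print(f"{t} C: incubation {incubation_days(t):.0f} days, "
          f"P(survive incubation) {p_survives_incubation(t):.2f}")
# In this toy model the probability peaks at a moderate temperature and
# falls off on both sides, which is the "window" described in the text.
```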
One of the unique aspects of Hudson River Park is that it is made up of 400 acres of water—the Hudson River! To protect these waters, HRPK scientists pay close attention to environmental conditions and regularly monitor the River. You can check out real-time updates on Hudson River conditions through the Hudson River Environmental Conditions Observing System (HRECOS) webpage. Due to the impacts of climate change, the Hudson River has seen shifts in the local environment. For example, when extreme weather events like Hurricane Sandy occur, the Park is one of the first areas to flood, harming the shoreline, habitat and water quality. This week’s lesson focuses on building solutions to fight climate change impacts. Explore how adaptation methods are used and try your hand at designing structures that are commonly applied in the real world to protect our shorelines!
Theme: Climate Change, Sea Level Rise, Waterfront Community, Carbon Footprint
Ages: 6th–12th grade
Prep Time: 5 minutes
Activity Time: 30-45 minutes
Resource: Climate and Our Coast Worksheet
The term “outcome” is used extensively in the worlds of business, industry, and medicine. Business and industry set outcome expectations and work to achieve them. The medical field has used an outcome-based model for many years for teaching, testing, and medical practice. The concept of outcomes and outcome-based assessment has slowly found its place in education, with institutions, accreditation agencies, and governing agencies demanding clear learning outcome statements and valid measurement of the outcomes. This article presents an overview of student learning outcomes (SLOs).
Student Learning Outcomes
Although we may not state it formally, we all deal with outcomes in our lives every day. If we exceed the road or highway speed limit, the outcome could be a heavy fine or, worse, an accident in which someone is injured. So how did we know what the “expected” outcome was? A sign was posted with the speed limit printed on it, with the expectation that it would help ensure driver safety. The speed limit sign was the outcome statement, and it was specific. A broader outcome statement, typically called a Goal, would be “Drive Safely”; it would be open to interpretation and thus would require additional specific information for clarity. When we set goals for ourselves, we are expecting and hoping for a successful outcome. When we set goals for our students, we are expecting and hoping for a successful learning outcome for them. When course instructors determine what learning outcomes they expect students to achieve, they typically express the expectation as what the student “will” or “will be able to” know and do. The goal and outcome statements typically imply that 100% achievement is expected, but we all know that students are going to have varying degrees of success at attaining the outcomes. At all levels of education, we need student learning outcomes that are specific enough to be measurable with some form of assessment. These specific learning outcomes are then what is taught and what we expect students to accomplish. When we state learning outcomes in broad terms, they typically are called Goals, and they are not specific enough for writing test items for an assessment or for designing a rubric around. Breaking the Goals down into sub-goals, or General Learning Outcomes (GLOs), as suggested by Carriveau (2016), helps to clarify the intent of the Goal statements. Typically, rubrics used to score written responses and other performance responses use GLO-level statements for the expected outcomes in the rows of the rubric. More specificity is required in order to write selected-response test items, such as multiple-choice items, so the GLO-level statements are broken down into Specific Learning Outcome (sLO) statements. The sLO statements can also be the row outcomes on a rubric when only one GLO is being measured. If the test items are already available for the course and are acceptable, then it is possible to create the sLOs to match the items and then code the sLOs to the broader GLO and Goal statements. The three-level outcomes model (Carriveau, 2016) can be constructed from the Goal level down or from the sLO level up. Measurement and Assessment in Teaching (8th ed.) by Linn & Gronlund (2000) is an excellent text on the connection between assessment and teaching. Their outcome levels are: Major Categories, General Instructional Objectives, and Specific Learning Outcomes.
Their examples of objectives and outcomes in the appendix begin with verbs rather than “the student will.” They use “ability to” in their examples of complex learning outcomes measured by essay questions (p. 240).
The Quality Matters™ (QM) design standards for certification of online and blended courses require specific-level outcome statements for writing assessment items, which they call “Objectives” in their course design model. QM distinguishes between Learning Objective and Learning Outcome as:
- Learning Objective - “a statement of the specific and measurable knowledge, skills, attributes, and habits learners are expected to achieve...”
- Learning Outcome - “a demonstration of the actual level of attainment of the knowledge, skills, attributes, and habits expected as a result of the educational experiences” (Quality Matters, p. 7).
UNT CLEAR course design specialists use a modified QM form for approving and ensuring quality UNT online courses.
Communicating and Reporting Outcome Attainment
The question you may be asking yourself is: what value are the GLO and Goal statements in the three-level model if what is taught and measured is at the sLO level? The answer is that outcome attainment values calculated from student responses at the sLO level can be averaged and mapped up to the GLO and Goal levels, whether they come from selected-response items (like multiple-choice) or from constructed-response items scored with a rubric (e.g., written responses). How well the class did as a whole on the GLO and Goal statements can then be used for reporting and communication purposes. The GLO-level values may be of interest to Chairs or Deans. The Goal-level values may be of interest to directors and administrators responsible for institutional-level reports. Of course, the teacher will be primarily concerned with how students did at the sLO level, which is also what the students are interested in. The sLO-level results, including specific items, are what should be communicated to students, particularly when the results are from formative assessments.
Carriveau, R.S. (2016). Connecting the dots: Developing student learning outcomes and outcome-based assessments (2nd ed.). Sterling, VA: Stylus Publishing.
Linn, R.L. & Gronlund, N.E. (2000). Measurement and assessment in teaching (8th ed.). Upper Saddle River, NJ: Prentice Hall.
Quality Matters (2014). Quality matters higher education rubric workbook (5th ed.). Maryland: Quality Matters.
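As a sketch of the roll-up described under "Communicating and Reporting Outcome Attainment" above: if sLOs are coded under GLOs and Goals, attainment values can be averaged upward in a few lines of Python. The dotted coding scheme and the scores here are hypothetical examples, not part of Carriveau's published model:

```python
# Attainment per specific learning outcome (proportion of items correct),
# keyed as "goal.glo.slo". All codes and values are invented examples.
from statistics import mean

slo_attainment = {
    "1.1.1": 0.82, "1.1.2": 0.74,  # sLOs under GLO 1.1
    "1.2.1": 0.91,                 # sLO under GLO 1.2
}

def rollup(prefix: str) -> float:
    """Average all sLO values whose code starts with the given prefix."""
    vals = [v for k, v in slo_attainment.items() if k.startswith(prefix)]
    return mean(vals)

print("GLO 1.1:", round(rollup("1.1"), 2))  # mean of its two sLOs -> 0.78
print("GLO 1.2:", round(rollup("1.2"), 2))  # 0.91
print("Goal 1 :", round(rollup("1."), 2))   # mean across all sLOs under Goal 1
```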
This Month in History: ULTRA's Big Break
During World War II, Nazi Germany primarily encoded its messages through the use of what was known as the "Enigma" machine. Enigma's use by the Wehrmacht stretched back to the early 1930s and originated from a design created by Hugo Alexander Koch in the Netherlands. Although heavily modified prior to the Second World War's onset, the Enigma machines used in 1939-1945 remained similar to Koch's prototypes from two decades prior. For a variety of reasons the Enigma machine was extraordinarily difficult to crack. At its most basic level, breaking Enigma required two things: tremendous amounts of closely guarded information and access to an Enigma machine with the proper settings. As to the first element, during the 1930s Poland did much of the heavy lifting in laying the groundwork for breaking the Enigma code. This included creating the "bomba", a high-speed calculating machine used to help break Nazi codes and a strong influence on Britain's later, and much more famous, ULTRA code-breaking effort and its machines. With Nazi Germany's invasion and occupation of Poland in the fall of 1939, it seemed as if the deciphering of Enigma might have been fatally delayed. However, many of Poland's world-class mathematicians and code breakers escaped and went on to aid the Allied cause. Though Allied code breakers from across Europe worked feverishly in England, the Allies still needed to acquire an intact Enigma machine if they were to be able to "hack" into (using today's vernacular) German communications. Over seventy years ago this month, on May 9, 1941, the Allies got their break when a German U-boat, U-110, attacked an Allied convoy. During the ensuing battle, the British escort destroyer HMS Bulldog skillfully drove U-110 to the surface. The German captain ordered the crew to abandon ship while he set the charges to scuttle his U-boat. The German captain had made a mistake, however, and failed to ensure that the detonators for the charges actually worked. As a result, and as the crew disembarked into captivity, U-110 remained afloat with all her codebooks, charts, and most importantly – a complete Enigma machine. Over the following four hours, operating from U-110's slippery decks and Bulldog's tiny boarding boat in the icy waters off the Greenland coast, the British sailors transferred the valuable Enigma machine, codes and other documents into Allied hands. With this treasure trove of intelligence came the first big intelligence coup for the Allies, one that would play a key role in the ultimate Allied victory.
Democracy is the government of the people, by the people and for the people (as Abraham Lincoln defined it). It was invented by the ancient Greeks and has spread widely in western nations since the English Civil War in the mid 17th century. Democracy means the people rule and can do anything they want by majority rule. In the U.S., "supermajorities" (well above 50%, such as the two-thirds Senate vote needed to ratify a treaty) are required for many decisions, such as ratifying a treaty or passing a constitutional amendment. Republicanism means there is a system of rule of law, or inalienable rights, which no majority can vote out. The origin of the term is Greek, from the words demos (people) and kratos (strength). The defining characteristic of a democracy is that the citizenry of a nation are sovereign. Constitutional monarchies become democratic (as in Britain, Japan and Scandinavia) by narrowing the power of the monarch to ceremonial roles and letting an elected government rule. The United States Constitution guarantees a republican form of government, with elected executives (governor/president) and legislatures. In parliamentary democracies, the executive and legislative branches are not separate. In them, parties form winning coalitions, and the coalition controls both the government and the parliament. In ancient Athens, male citizens (not women, children or slaves) were allowed to vote in the Assembly, which made the laws of the city-state. Citizens did not elect representatives--they acted themselves. This form survives in "town meetings" in small New England villages.
Debate over the merits of democracy
While a direct democracy in the manner of the Athenians is probably infeasible in a modern nation-state composed of millions of citizens, there continues to be debate among conservatives as to whether representative democracy is a desirable system. Jean-Jacques Rousseau, who could be considered the father of both modern democracy and totalitarianism, believed that in a democracy the government should be guided only by what he called the General Will. Other thinkers of the French Enlightenment described a republic as a system in which the law applies equally to the government and the people, a concept abbreviated in the 21st century by the phrase rule of law. Most contemporary thinkers would identify universal suffrage and the right of any citizen to argue for a change in existing law as desirable features of democracy; but there is some question as to whether the rule of law can be indefinitely sustained in a system with these features. Thomas Jefferson believed that it could, so long as the people were educated, and he thus devoted his post-presidency to founding the University of Virginia. Nonetheless, property continued to be a proxy for education throughout the antebellum period, and Jefferson's home state of Virginia only abolished property qualifications for voting in 1851, later than any other state. Another (probably apocryphal) quote, attributed to Alexander Fraser Tytler, says that a republic can only last until the people find out that they can vote themselves public funds. This would produce a welfare state, which most conservatives feel is an abuse of the U.S. Constitution's Necessary and Proper Clause. Democracy may have its problems; an uneducated populace, for example, is the bane of any democracy, since unless people know what they need, they cannot properly elect people to serve those needs.
Other problems emerge from a capricious electorate, which caused Winston Churchill to remark that "Democracy is the worst form of government... except for all the other forms that have been tried," a quote which eloquently notes that, despite its problems, democracy almost universally provides for peace and prosperity. Majoritarian systems are, however, problematic in that, in theory, they permit a majority to debase the welfare of a minority. Socialists argue that a high degree of socio-economic equality is required for real political equality, but also contend that solidarity and fraternity may be sufficient to overcome the distorting effects of unequal wealth and enact pro-labor policies. Democracy may also create the illusion that truth belongs to the people, which is not the case because truth only belongs to God. It should also be noted that democracy allows un-Christian leaders to gain power in a normally Christian nation.
Evolution of Anglo-American democracy
In the United States, the current system of representative democracy evolved as a result of six major reforms:
- The American Revolution, which severed ties with the King of Great Britain and endowed the people with the sovereign prerogatives that formerly belonged to the monarchy.
- The U.S. Constitution's creation of the House of Representatives, a Federal government entity elected directly by the citizens, and its retention of federalism, the right of individual states to govern themselves and make their own laws.
- The abolition of property qualifications for white male voters. This process, which was accomplished at the state level, took place between 1820 and 1851 and is also known as Jacksonian Democracy.
- The Fifteenth Amendment, which prevented the use of race as a restriction on voting.
- The Seventeenth Amendment, which caused Senators to be elected directly by the citizens instead of by the state legislatures.
- The Nineteenth Amendment, which extended the franchise to women.
Historically, the Fifteenth Amendment inspired the most opposition. Indeed, every state of the former Confederacy circumvented the Fifteenth Amendment between the end of Reconstruction and the passage of the Voting Rights Act of 1965, considered the ultimate guarantee of that Amendment.
In Britain, there were six major reforms in the direction of democracy:
- The 1832 Reform Act, which gave representation to the new industrial cities for the first time.
- The 1867 Reform Act.
- The 1884 Reform Act.
- The 1911 Parliament Act, which ended the ability of the House of Lords to rein in spending by the Commons.
- The 1918 Reform Act, which made suffrage universal among males and extended it to women 30 years of age or older.
- The 1928 Reform Act, which equalized the voting age for men and women at 21.
It is noteworthy that it took the British 84 years to complete the process of abolishing property qualifications for voting (the first three Acts lowered the threshold incrementally) and that they only began to do so after Jacksonian Democracy emerged in the United States. While none of these Acts has sustained a serious movement for repeal, historically the one which inspired the strongest opposition was the 1911 Parliament Act. Not surprisingly, this opposition came from the House of Lords, the last body of unelected people to retain any legislative power in the British system. Ultimately, the bill passed only after King George V promised to create an unlimited number of new peers in order to pass it.
The United States, Britain, Canada and many other countries mostly use a first-past-the-post system for elections: the highest vote-getter wins. This limits the number of political parties and helps to stabilize the system, in contrast to democracies that use proportional representation, where a party winning 5% of the vote can be guaranteed a seat. Italy, Germany, France, and Israel have at times had over 10 parties represented in their legislatures.
Further reading:
- Dahl, Robert. On Democracy (2000), by a leading theorist.
- Dunn, John. Democracy: A History (2006).
- Held, David. Models of Democracy (3rd ed., 2006), a useful textbook.
- Schumpeter, Joseph A. Capitalism, Socialism, and Democracy (1943), an influential conservative interpretation.
DEMOCRACY - "That form of government in which the sovereign power resides in and is exercised by the whole body of free citizens directly or indirectly through a system of representation, as distinguished from monarchy, aristocracy, or oligarchy." (Black's Law Dictionary, Sixth Edition, p. 432)
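To make the first-past-the-post versus proportional-representation contrast concrete, here is a toy Python sketch using one common proportional rule (the D'Hondt highest-averages method). The vote shares and district outcomes are invented, and the article does not say which allocation rule any particular country uses:

```python
def dhondt(votes: dict, seats: int) -> dict:
    """Allocate seats with the D'Hondt highest-averages rule."""
    won = {p: 0 for p in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# Invented national vote shares (percent).
votes = {"A": 46, "B": 40, "C": 9, "D": 5}

# Proportional representation: seats land close to the vote shares,
# so even the 5% party wins a handful of seats.
print("PR:", dhondt(votes, 100))

# First past the post: whichever party tops the poll in each district takes
# that seat; if A and B split the district wins, C and D get nothing.
district_winners = ["A"] * 55 + ["B"] * 45  # invented district outcomes
fptp = {p: district_winners.count(p) for p in votes}
print("FPTP:", fptp)  # {'A': 55, 'B': 45, 'C': 0, 'D': 0}
```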
Mass statelessness emerged as a global phenomenon after the First World War, which broke the surviving dynastic empires of Europe and the Middle East and replaced them with nation-states. This course investigates the history and meaning of statelessness in the period from 1914 to 1945. The course begins with the peace settlements that followed the Great War, which left millions of displaced persons unable to return home, and turned millions more into unwanted minorities in newly formed nations. It asks how statelessness emerged as a "problem", then looks at the various solutions that were attempted: national ones, such as the unprecedented population exchange between Greece and Turkey; imperial ones, imposed by Britain and France in the mandate territories of the Middle East; and international ones proposed by the newly-established League of Nations. It also looks at how and why governments actively used statelessness as a weapon against parts of their own population, from the "White Russians" who left the USSR after the Russian civil war, to the members of the Ottoman dynasty expelled by the new Republic of Turkey, to the German Jews denationalized by the Nazi regime. The course ends by examining the new crises of statelessness, and new attempts to solve them, that followed the Second World War.
(Lansing State Journal, Dec. 18, 1991)
Question submitted by P.W. Anderson of East Lansing.
Superconductivity describes a state that certain materials reach when cooled below a point called the critical temperature, Tc. Superconductivity is characterized by two main properties: the absence of electrical resistivity and the Meissner effect. Superconductivity was discovered in 1911 by Kamerlingh Onnes. Before 1986, the highest critical temperature of any material was 30 Kelvin (-406 degrees F). That year, J.G. Bednorz and K.A. Müller discovered a class of materials that can have critical temperatures as high as 125 Kelvin (-235 degrees F). Scientists are able to cool things to these temperatures easily, even though they are extremely low. Liquid nitrogen, which has a temperature of 77 Kelvin (-321 degrees F), is available and not very expensive. Cooling things below liquid nitrogen temperatures is much more expensive and not practical if the materials are to be used in everyday applications. Because the critical temperatures of these new materials are so much higher than those previously known, scientists refer to these materials as "high-Tc superconductors". The first characteristic of a superconductor is that the resistivity of the material goes to zero when the material is in the superconducting phase (i.e., cooled below Tc). In a normal material, the flow of electric current meets with resistance. The electrons which make up the electric current bounce off atoms and other electrons in the material, transforming part of the current into heat. This loss is often undesirable; the electric power companies lose a lot of money because of heat lost in the long lines which transport electricity from the generating plant to the consumer. Also, electric devices such as computer chips do not tolerate heat very well, and this intolerance places a limit on how small the chips can be made. Superconducting materials, which have zero resistance, do not generate heat and can therefore be used to solve these problems. The second characteristic of superconductors is called the Meissner effect. Ordinary (non-superconducting) magnets have two poles, which we call north and south. Two south poles (or two north poles) will repel each other, while two unlike poles will attract. A superconductor will repel both north and south poles. This effect can be taken advantage of to levitate and propel trains. Such a train is already being tested in Japan. The development of materials which become superconducting at higher Tc's promises to have a continued impact on our lives. The two main characteristics of superconductivity - the lack of resistance and the Meissner effect - allow superconductors to replace normal materials and eliminate some of the problems which limit current technology.
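Since the article leans on several Kelvin-to-Fahrenheit conversions, a quick sketch using the standard formula (F = K × 9/5 − 459.67) confirms the quoted figures:

```python
def kelvin_to_fahrenheit(k: float) -> float:
    """Standard Kelvin-to-Fahrenheit conversion."""
    return k * 9.0 / 5.0 - 459.67

for k in (30, 77, 125):
    print(f"{k} K = {kelvin_to_fahrenheit(k):.0f} F")
# 30 K = -406 F, 77 K = -321 F, 125 K = -235 F, matching the article.
```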
The exact location of Curiosity on the surface of Mars is determined using data transmitted from its antennas as well as from the space probes that orbit the red planet. It is very unlikely that these systems would fail, but in such an eventuality there would be an alternative for determining the location of the rover: 'ask it' what eclipses it sees. "Observing these events offers an independent method for determining the coordinates of Curiosity," explains Gonzalo Barderas, researcher at the Complutense University of Madrid (UCM) and coauthor of the study. For this method to be used, the robot must have a camera or sensor capable of sending data about an eclipse. "It could prove especially useful when there is no direct communication with Earth that allows for estimation of its position using radiometric tracking or images provided by orbiters," outlines the researcher. The initial objective of the UCM group was to create a mathematical tool for predicting Phobos eclipses from the surface of Mars. But their method also proved useful in pinpointing the location of any spacecraft capable of observing eclipses from there. The details have been published in the 'Monthly Notices of the Royal Astronomical Society' journal. The model predicted partial eclipses that took place on 13 and 17 September. The MastCam camera that Curiosity carries on its mast captured them without any problems. The Spanish REMS instrument, the vehicle's environmental station, also detected a reduction in ultraviolet solar radiation during the eclipses (5% in the first case). The initial simulations and the final real images coincided to a precision of one second. In order to make their calculations, the scientists considered the initial predicted landing area for Curiosity: an ellipse of 7 x 20 km. In addition, with just two minutes of observations, using the start and end times of Phobos' contact with the Sun, the error in the rover's coordinates can be reduced from the order of kilometres to the order of metres. According to the model, the next eclipses by the Martian moon will take place between 13 and 20 August 2013 and between 3 and 8 August 2014. Curiosity will have the chance to observe eclipses again, and the Spanish scientists will be able to confirm the validity of their tool. "In any case, this method can be applied to other space probes operating on the surface of Mars that have the ability to make optical observations or that have instruments that measure solar radiation," outlines Luis Vázquez, one of the authors. In fact, under the scientific management of Vázquez, this study forms part of a Spanish project associated with the joint Russian, Spanish and Finnish MetNet mission to distribute small meteorological stations across Mars. The project is called Mars Environmental Instrumentation for Ground and Atmosphere (MEIGA). Its aim is to place different sensors on the red planet, including solar radiation sensors that can detect eclipses.
G. Barderas, P. Romero, L. Vázquez, J. L. Vazquez-Poletti, I. M. Llorente. "Opportunities to observe solar eclipses by Phobos with the Mars Science Laboratory". Monthly Notices of the Royal Astronomical Society 426 (4): 3195-3200, October 2012. doi: 10.1111/j.1365-2966.2012.21939.x.
Level(s): Grades 11 - 12 Author: This unit was created by Carol Wells as part of a Media Education course taught by John Pungente at the Faculty of Education, University of Toronto, 1993. This lesson introduces students to the phenomenon of the “blockbuster” movie – its history, characteristics and influences. Students will also explore the role of audience in the creation of a “blockbuster” and analyze their own responses to current blockbuster films. Students will learn about the process involved in turning a film into a blockbuster by devising promotional campaigns for an imaginary movie. This lesson and all associated documents (handouts, overheads, backgrounders) are available in an easy-print, pdf kit version.
Reviewed by Rita Hoots Those minuscule, invisible, whirling electrons possess amazing powers that enable them to bind elements together; however, their abstract imperceptibility sometimes makes their functions difficult for students to comprehend. In this straightforward presentation on chemical bonding, the narrative and illustrations clarify and contrast the characteristics of ionic, covalent, and metallic bonding. Using the periodic table, the elements are distinguished according to their electron patterns. Repeatedly citing Gilbert Lewis’ octet rule, the different patterns of bonding in ionic compounds, metals, and covalent substances are explained and visually displayed. The topic of electronegativity is explored in the section on ionic bonds, delocalized valence electrons are dealt with in metallic bonding, and polarity is explained under covalent bonding. The physical characteristics of the three compound groups are adequately covered. Graphics continue to elucidate the energy levels, or electron shells, along with orbitals in the fourth, extra section. This is a helpful guide for the beginning student at the secondary level, or in a beginner’s chemistry class, who requires reinforcement and/or clarification of chemical bonding and interpretation of the placement of elements on the periodic table. In addition, the video is packaged with program support notes for both the student and teacher. Review posted on 1/16/2013
The Cell Project: Clay Models Building Model Cells with Clay In the Clay Cell activity, students build 3-D models of cells, organelle by organelle. Once the clay dries, the cells are sliced and the resulting cross-sections are examined. This activity was developed with Crayola's Model Magic. It works well for mixing colors and dries into a light, foam-like material. Note: Model Magic dries out quickly and cannot be re-used by adding water. Open only what you will use immediately. If you are going to mix colors ahead of time, be sure to keep it in air-tight containers. Choose which color to use for each organelle. The organelle models can be as detailed or general as you have time for. "Wrapping cytoplasm" & Assembling the Cell Wrap a thin layer of "cytoplasm" around each organelle. The cytoplasm color should not be used in any of the organelles. Assemble your cell from the organelles, with the nucleus in the center, the endoplasmic reticulum around the nucleus, and so on. If the clay is too dry to stick together, use a thin layer of "cytoplasm" between each organelle. Instead of wrapping individual free-floating ribosomes, try adding them to the cell and covering them with a thin layer of "cytoplasm". *Be sure to remind your students that real cytoplasm is a jelly-like substance that the organelles "float" in, and that we are wrapping cytoplasm this way for the sake of constructing our model. Wrapping the cell membrane The last step in building the cell is to wrap the cell membrane around the organelles. If you are building a plant cell, you will need the additional step of adding the cell wall. If you build your organelles and assemble the model in one session, it could take a week or more for the model to dry enough to get clean sections. Slice and examine sections If you slice while still wet, you can use dental floss or similar string. The resulting sections are a little smeared, but you can look at them immediately. You might want to do this as an example during class. Once dry, the clay is too tough to be sliced with string, but a sharp knife will work fine. The drier the clay, and the more carefully you slice, the cleaner the sections will be. If you have time, you can have the students draw cell diagrams from their collected cross-sections. How many sections do they need to see examples of all the organelles in a cell? How many models does the class need to make to get representations of all the organelles? The Cell Project on Flickr Have you, or your students, built model cells? We've set up a group on Flickr for sharing photos of cell models. Join us! History of the project: The Cell Project began as part of Frances Segal's internship at GalaxyGoo, and as part of her Master's thesis. Ms. Segal collaborated with middle-school teachers and GalaxyGoo developers to develop a learning tool focused on the life sciences for 7th grade students. Since then, the project has grown and developed into an expanding set of activities focused on learning about the basic unit of biology: the cell. In February of 2007, GalaxyGoo took the Clay Cell activity to the Family Science Days (a family-focused event during the 2007 national meeting of AAAS). Invitation to teachers:
Geiger and Marsden 1907 Rutherford left McGill University in Canada and moved to the University of Manchester, where he began work with Hans Geiger. Together they continued to investigate the scattering of alpha particles. The particles came from a small sample of radon-222, a radioactive gas produced when the element radium decays. These particles were directed through a vacuum and onto a foil, which caused the scattering. The positions of the scattered particles were seen as scintillations, or small flashes of light, which were detected using a microscope that could be rotated around the foil. The experiment was quite difficult. Observations could only be taken in a dark laboratory, and it took about half an hour for the observer's eyes to adjust to seeing the traces. Each observer could only count accurately for about a minute before they needed to swap over. Geiger later developed the 'Geiger counter', which could count these pulses automatically in normal light. It made similar experiments much easier! The alpha particles were usually scattered by only one or two degrees. In 1909 Geiger needed an experiment for a research student, Ernest Marsden. Rutherford suggested that Marsden look for alpha-particle scattering at large angles. He didn't think it very likely.
The Moon has a stabilizing effect on Earth. Scientists have pondered whether such a large moon is also necessary for complex life. Image credit: NASA Scientists have long believed that, without our moon, the tilt of the Earth would shift greatly over time, from zero degrees, where the Sun remains over the equator, to 85 degrees, where the Sun shines almost directly above one of the poles. A planet’s stability has an effect on the development of life. A planet see-sawing back and forth on its axis as it orbits the Sun would experience wide fluctuations in climate, which then could potentially affect the evolution of complex life. However, new simulations show that, even without a moon, the tilt of Earth's axis - known as its obliquity - would vary only about ten degrees. The influence of other planets in the solar system could have kept a moonless Earth stable. The stabilizing effect that our large moon has on Earth's rotation therefore may not be as crucial for life as previously believed, according to a paper by Jason Barnes of the University of Idaho and colleagues which was presented at a recent meeting of the American Astronomical Society. The new research also suggests that moons are not needed for other planets in the universe to be potentially habitable. As the World Turns The image shows Earth’s axial tilt (or obliquity), rotation axis, plane of orbit, celestial equator and ecliptic. Earth is shown as viewed from the Sun; the orbit direction is counter-clockwise (to the left). Image credit: Wikicommons/Dna webmaster Due to the gravitational pull of its star, the axis of a planet rotates like a child's top over tens of thousands of years. Although the tilt angle itself stays roughly constant, the direction of the tilt moves over time, or precesses. While Earth's moon does provide some stability, the new data reveals that the pull of other planets orbiting the Sun - especially Jupiter - would keep Earth from swinging too wildly, despite its chaotic evolution. Similarly, a planet's orbital plane also precesses. When the two are in synch, the combination can cause the total obliquity of the planet to swing chaotically. But the gravity of Earth's moon has been shown to provide a stabilizing effect. By speeding up Earth's rotational precession and keeping it out of synch with the precession of Earth's orbit, it minimizes fluctuations, creating a more stable system. As terrestrial moons go, Earth's moon is on the large side - only about a hundred times less massive than its parent planet. In comparison, Mars is over 60 million times more massive than its largest moon, Phobos. The difference is substantial, and with good cause - while the Martian moons appear to be captured asteroids, scientists think that Earth's moon formed when a Mars-sized body crashed into the young planet, blowing out pieces that later consolidated as the lunar satellite - a satellite which affects the planet's tilt. Scientists estimate that only one percent of any terrestrial planets will have a substantial moon. This means that most such planets are expected to experience massive changes in their obliquity. The Pull of Planets "Because Jupiter is the most massive, it really defines the average plane of the solar system," said Barnes. Barnes and his collaborators have determined that, without a moon, Earth's obliquity would vary only ten to twenty degrees over half a billion years.
That doesn't sound like much, but the changes of one to two degrees the planet presently exhibits are thought to be partly responsible for the Ice Ages. According to Barnes, the present shift is "a small effect, but in combination with Earth's present climate, it causes big changes." Still, a ten-degree change is not a huge problem when it comes to life. "(It) would have effects, but not preclude the development of large scale, intelligent life." Furthermore, if Jupiter were closer, Barnes explains, the Earth's orbit would precess faster, and the moon would actually make the planet fluctuate more wildly, rather than less. "A moon can be stabilizing or destabilizing, depending on what's going on in the rest of the system," he said. The Benefit of a Back Spin The team also determined that planets with retrograde, or backward, rotation should have smaller variations than those that spin in the same direction as their parent star, a large moon notwithstanding. Moons in our solar system, with Earth for scale. Image credit: NASA "We think the initial rotation direction should be random," Barnes said. "If it is, half the planets out there would not have problems with obliquity variations." What determines which way a planet spins? He suspects that "whatever smacks the planet last establishes its rotation rate." A 50/50 shot at retrograde rotation, combined with the likelihood of other planets in the system keeping the planet from tipping on its side, means more terrestrial planets could be potentially habitable. Barnes ventured an estimate that at least 75 percent of the rocky planets in the habitable zone may be stable enough for life to evolve, though he notes that additional studies are needed to confirm or disprove that. In comparison, the previous idea that a large moon was necessary for a constant tilt meant that only about 1 percent of terrestrial planets would have a steady climate. "A large moon can stabilize (a planet)," Barnes said, "but in most cases, it's not needed."
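The claim that one or two degrees of tilt can help drive Ice Ages is easy to make concrete. The following sketch uses the standard daily-mean insolation formula at summer solstice (when the solar declination equals the obliquity); the 65-degree-north latitude, the solar-constant value, and the pair of obliquities compared are illustrative choices of mine, not numbers from Barnes' study.

```python
import math

S0 = 1361.0  # solar constant in W/m^2 (illustrative present-day value)

def solstice_insolation(lat_deg, obliquity_deg):
    """Daily-mean insolation at summer solstice, when declination = obliquity."""
    phi = math.radians(lat_deg)
    delta = math.radians(obliquity_deg)
    # Hour angle of sunset; clamping handles polar day and polar night.
    cos_h0 = max(-1.0, min(1.0, -math.tan(phi) * math.tan(delta)))
    h0 = math.acos(cos_h0)
    return (S0 / math.pi) * (h0 * math.sin(phi) * math.sin(delta)
                             + math.cos(phi) * math.cos(delta) * math.sin(h0))

# Two degrees of obliquity shift the high-latitude summer energy budget
# by roughly 30 W/m^2 -- the scale invoked in Milankovitch Ice Age theory.
for obliquity in (22.5, 24.5):
    print(obliquity, round(solstice_insolation(65.0, obliquity)))  # ~478, ~512
```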
June 1, 2011 University of Michigan astronomers examined old galaxies and were surprised to discover that they are still making new stars. The results provide insights into how galaxies evolve with time. U-M research fellow Alyson Ford and astronomy professor Joel Bregman presented their findings May 31 at a meeting of the Canadian Astronomical Society in London, Ontario. Using the Wide Field Camera 3 on the Hubble Space Telescope, they saw individual young stars and star clusters in four galaxies that are about 40 million light years away. One light year is about 5.9 trillion miles. "Scientists thought these were dead galaxies that had finished making stars a long time ago," Ford said. "But we've shown that they are still alive and are forming stars at a fairly low level." Galaxies generally come in two types: spiral galaxies, like our own Milky Way, and elliptical galaxies. The stars in spiral galaxies lie in a disk that also contains cold, dense gas, from which new stars are regularly formed at a rate of about one sun per year. Stars in elliptical galaxies, on the other hand, are nearly all billions of years old. These galaxies contain stars that orbit every which way, like bees around a beehive. Ellipticals have little, if any, cold gas, and no star formation was known. "Astronomers previously studied star formation by looking at all of the light from an elliptical galaxy at once, because we usually can't see individual stars," Ford said. "Our trick is to make sensitive ultraviolet images with the Hubble Space Telescope, which allows us to see individual stars." The technique enabled the astronomers to observe star formation, even if it is as little as one sun every 100,000 years. Ford and Bregman are working to understand the stellar birth rate and likelihood of stars forming in groups within ellipticals. In the Milky Way, stars usually form in associations containing from tens to 100,000 stars. In elliptical galaxies, conditions are different because there is no disk of cold material to form stars. "We were confused by some of the colors of objects in our images until we realized that they must be star clusters, so most of the star formation happens in associations," Ford said. The team's breakthrough came when they observed Messier 105, a normal elliptical galaxy that is 34 million light years away, in the constellation Leo. Though there had been no previous indication of star formation in Messier 105, Ford and Bregman saw a few bright, very blue stars, resembling a single star 10 to 20 times the mass of the sun. They also saw objects that aren't blue enough to be single stars, but instead are clusters of many stars. When accounting for these clusters, stars are forming in Messier 105 at an average rate of one sun every 10,000 years, Ford and Bregman concluded. "This is not just a burst of star formation but a continuous process," Ford said. These findings raise new mysteries, such as the origin of the gas that forms the stars. "We're at the beginning of a new line of research, which is very exciting, but at times confusing," Bregman said. "We hope to follow up this discovery with new observations that will really give us insight into the process of star formation in these 'dead' galaxies."
Harriet Tubman is well known for risking her life as a “conductor” in the Underground Railroad, which led escaped slaves to freedom in the North. But did you know that the former slave also served as a spy for the Union during the Civil War and was the first woman in American history to lead a military expedition? During a time when women were usually restricted to traditional roles like cooking and nursing, she did her share of those jobs. But she also worked side-by-side with men, says writer Tom Allen, who tells her exciting story in the National Geographic book, Harriet Tubman, Secret Agent. Tubman decided to help the Union Army because she wanted freedom for all of the people who were forced into slavery, not just the few she could help by herself. And she convinced many other brave African Americans to join her as spies, even at the risk of being hanged if they were caught. In one of her most dramatic and dangerous roles, Tubman helped Colonel James Montgomery plan a raid to free slaves from plantations along the Combahee (pronounced “KUM-bee”) River in South Carolina. Early on the morning of June 1, 1863, three gunboats carrying several hundred male soldiers along with Harriet Tubman set out on their mission. Tubman had gathered key information from her scouts about the Confederate positions. She knew where they were hiding along the shore. She also found out where they had placed torpedoes, or barrels filled with gunpowder, in the water. As the early morning fog lifted on some of the South’s most important rice plantations, the Union expedition hit hard. The raiders set fire to buildings and destroyed bridges, so they couldn’t be used by the Confederate Army. They also freed about 750 slaves—men, women, children, and babies—and did not lose one soldier in the attack. Allen, who writes about this adventure and many others, got to know Tubman well through the months of research he did for the book. The historic details he shares bring Tubman and many other important figures of her time to life. To gather the facts, Allen searched libraries and the Internet, and even walked in Tubman’s footsteps. “I went on the river just south of the area where the raid took place,” he says. “You are in that kind of country she would have known, with plenty of mosquitoes and snakes, and there are still dirt roads there today—so you get a feeling of what it was like.” Allen says his most exciting moment came when a librarian led him to written accounts by people who actually saw Tubman and the raiders in action. “She was five feet two inches (157 centimeters) tall, born a slave, had a debilitating illness, and was unable to read or write. Yet here was this tough woman who could take charge and lead men. Put all that together and you get Harriet Tubman. I got to like her pretty quickly because of her strength and her spirit,” Allen says. To find out more about this courageous and adventuresome woman, read the book, Harriet Tubman, Secret Agent.
Lasers emit highly concentrated, amplified light. Usually it takes a complex array of crystals, gels or gases to amplify light particles, known as photons, as they bounce around between mirrors inside laser machines. But now scientists have found another way: using engineered living cells that can perform the feat. The project took place at the Wellman Center for Photomedicine in Massachusetts. The key to this breakthrough involved the use of the widely studied protein known as green fluorescent protein (GFP). This protein, which was first discovered in jellyfish, has (as the name implies) the property of generating light. In an article published in Nature Photonics, researchers Malte Gather and Seok Hyun Yun describe how a solution made from GFP was used in combination with a mirrored chamber to create a laser. From this preliminary test, Gather and Yun were able to determine how much GFP was required to create the laser light. Using this result, they then moved ahead to genetically engineer mammalian cells that could express GFP at the required levels. The researchers report that they were able to create bright laser pulses that lasted a few nanoseconds with a single cell. Amazingly, the cells were not damaged during the production of the laser light but were able to withstand hundreds of pulses. Furthermore, the spherical shape of the cell itself acted as a lens, “refocusing the light and inducing emission of laser light at lower energy levels than required for the solution-based device.” Although there are no immediate plans to use this technology, the erosion of the barrier between optical technologies and biology could open many doors in therapy and research. Gather tells PhysOrg.com that they “hope to be able to implant a structure equivalent to the mirrored chamber right into a cell, which would be the next milestone in this research." Credit: Nature Photonics and Malte Gather, Wellman Center for Photomedicine, Mass. General Hospital
(Phys.org)—In cold regions on earth, negative temperatures on the Fahrenheit or Celsius scale can often occur in winter; in physics, however, they have so far been impossible. On the absolute temperature scale that is used by physicists, also called the Kelvin scale, one cannot go below zero – at least not in the sense of getting colder than zero Kelvin. According to the physical meaning of temperature, the temperature of a gas is determined by the chaotic movement of its particles – the colder the gas, the slower the particles. At zero Kelvin (-460°F or -273°C) the particles stop moving and all disorder disappears. Thus, nothing can be colder than absolute zero on the Kelvin scale. Physicists of the Ludwig-Maximilians University Munich and the Max Planck Institute of Quantum Optics in Garching have now created an atomic gas in the lab that nonetheless has negative Kelvin values (Science, Jan 4, 2013). These negative absolute temperatures lead to several striking consequences: Although the atoms in the gas attract each other and give rise to a negative pressure, the gas does not collapse – a behavior that is also postulated for dark energy in cosmology. Supposedly impossible heat engines can also be realized with the help of negative absolute temperatures, such as an engine with a thermodynamic efficiency above 100%. In order to bring water to the boil, energy needs to be added to the water. During heating up, the water molecules increase their kinetic energy over time and move faster on average. Yet, the individual molecules possess different kinetic energies – from very slow to very fast. In thermal equilibrium, low-energy states are more likely than high-energy states, i.e. only a few particles move really fast. In physics, this distribution is called the Boltzmann distribution. Physicists around Ulrich Schneider and Immanuel Bloch have now realized a gas in which this distribution is exactly inverted: Many particles possess large energies and only a few have small energies. This inversion of the energy distribution means that the particles have assumed a negative absolute temperature. "The inverted Boltzmann distribution is the hallmark of negative absolute temperature; and this is what we have achieved," says Ulrich Schneider. "Yet the gas is not colder than zero Kelvin, but hotter. It is even hotter than at any positive temperature – the temperature scale simply does not end at infinity, but jumps to negative values instead." The underlying principle can best be visualized with an illustration (see Fig. 1): If one starts at positive temperatures (left figure) and increases the total energy of the balls by heating them up, the balls will also spread into regions of high energy. If one heated the balls to infinite temperature (central figure), each point in the landscape would be equally probable, irrespective of its energy. If one could add even more energy and thereby heat the balls even further, the balls would preferably gather at high-energy states (right figure) and would be even hotter than at infinite temperature. The Boltzmann distribution would be inverted, and the temperature therefore negative. At first sight it may sound strange that a negative absolute temperature is hotter than a positive one. This is, however, simply a consequence of the historic definition of absolute temperature; if it were defined differently, this apparent contradiction would not exist.
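The inverted distribution is easy to reproduce numerically. Here is a minimal sketch, with a four-level bounded spectrum standing in (as a loose analogy, not the experiment's actual band structure) for the energy-limited lattice gas: the same Boltzmann weights that favour low energies at positive temperature favour high energies once the temperature is negative.

```python
import math

def boltzmann(energies, kT):
    """Occupation probabilities p(E) proportional to exp(-E / kT)."""
    weights = [math.exp(-energy / kT) for energy in energies]
    total = sum(weights)
    return [w / total for w in weights]

levels = [0.0, 1.0, 2.0, 3.0]  # bounded spectrum in arbitrary units (assumption)
print(boltzmann(levels, kT=1.0))   # positive T: lowest level most occupied
print(boltzmann(levels, kT=-1.0))  # negative T: exactly the mirror image
```

Because this spectrum is bounded above, the negative-kT weights stay finite; in an unbounded system such as ordinary water they would diverge, which is the point made in the next paragraph.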
This inversion of the population of energy states is not possible in water or any other natural system with moving particles, as the system would need to absorb an infinite amount of energy – an impossible feat! However, if the system possesses an upper bound for the energy of the individual particles, such as the top of the hill for the potential energy in Fig. 1, the situation will be completely different. The researchers around Immanuel Bloch and Ulrich Schneider have now realized a gas of atoms possessing such an upper energy limit in their laboratory, following theoretical proposals by Allard Mosk and Achim Rosch. In their experiment, the scientists first cool around a hundred thousand atoms in a vacuum chamber to a positive temperature of a few billionths of a Kelvin and capture them in optical traps made of laser beams. The surrounding ultrahigh vacuum guarantees that the atoms are perfectly isolated from the environment. The laser beams create a so-called optical lattice, in which the atoms are trapped in a perfectly ordered array of millions of bright light spots emerging from the interference between the laser beams. In this lattice, the atoms can still move from site to site via tunneling, yet their kinetic energy is limited from above and therefore possesses the required upper energy bound. Temperature, however, relates not only to kinetic energy, but to the total energy of the particles, which in this case includes interaction and potential energy. The system of the Munich and Garching researchers also sets a limit to both of these. By suddenly turning a valley into a hill (and changing the interaction), the physicists then take the atoms to the upper boundary of the total energy – thus realizing a negative temperature, at minus a few billionths of a Kelvin. "If balls possess a positive temperature and lie in a valley at minimum potential energy, this state will apparently be stable – this is nature as we know it. If the balls are located on top of a hill at maximum potential energy, they will usually roll down and thereby convert their potential energy into kinetic energy. If the balls are, however, at negative temperature, their kinetic energy will already be so large that it cannot increase further. Therefore the balls cannot roll down and stay on top of the hill. The energy boundary therefore renders the system stable!" explains Simon Braun, PhD student in the group. The negative temperature state in their experiment is indeed just as stable as a positive temperature state. "That way, we have created the first negative absolute temperature state for moving particles," he adds. Matter at negative absolute temperature leads to a whole bunch of astounding consequences: With its help, one could create heat engines with an efficiency above 100%. This does not mean that the law of energy conservation is violated. Instead, the machine could not only absorb energy from the hotter substance, but, in contrast to the usual case, also from the colder. The work performed by the engine could therefore be larger than the energy taken from the hotter substance alone. The achievement of the Munich physicists could additionally be interesting for cosmology. Concerning its thermodynamic behavior, negative temperature states exhibit parallels to the so-called dark energy. Cosmologists postulate dark energy as the elusive force that accelerates the expansion of the universe, although the cosmos should in fact contract because of the gravitational attraction between all masses.
There is a similar phenomenon in the atomic cloud in the Munich lab: The experiment relies upon the fact that the atoms in the gas do not repel each other as in a usual gas, but instead interact attractively. This means that the atoms exert a negative instead of a positive pressure. As a consequence, the atom cloud wants to contract and should usually collapse – just as is expected for the universe under the influence of gravity. But because of its negative temperature this does not happen. The gas is saved from collapse just like the universe. More information: Braun, S. et al., Atoms at negative absolute temperature - the hottest systems in the world, Science, 4 January 2013. www.sciencemag.org/content/339/6115/52
Solve Problems Involving Measurement And Conversion Of Measurements From A Larger Unit To A Smaller Unit.
4.MD.1 Know relative sizes of measurement units within one system of units including km, m, cm; kg, g; lb, oz.; l, ml; hr, min, sec. Within a single system of measurement, express measurements in a larger unit in terms of a smaller unit. Record measurement equivalents in a two-column table. For example, know that 1 ft is 12 times as long as 1 in. Express the length of a 4 ft snake as 48 in. Generate a conversion table for feet and inches listing the number pairs (1, 12), (2, 24), (3, 36), ... (A short sketch of generating such a table appears after this list.)
4.MD.2 Use the four operations to solve word problems involving distances, intervals of time, liquid volumes, masses of objects, and money, including problems involving simple fractions or decimals, and problems that require expressing measurements given in a larger unit in terms of a smaller unit. Represent measurement quantities using diagrams such as number line diagrams that feature a measurement scale.
4.MD.3 Apply the area and perimeter formulas for rectangles in real world and mathematical problems. For example, find the width of a rectangular room given the area of the flooring and the length, by viewing the area formula as a multiplication equation with an unknown factor.
Represent And Interpret Data.
4.MD.4 Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Solve problems involving addition and subtraction of fractions by using information presented in line plots. For example, from a line plot find and interpret the difference in length between the longest and shortest specimens in an insect collection.
Geometric Measurement: Understand Concepts Of Angle And Measure Angles.
4.MD.5 Recognize angles as geometric shapes that are formed wherever two rays share a common endpoint, and understand concepts of angle measurement:
4.MD.5.a An angle is measured with reference to a circle with its center at the common endpoint of the rays, by considering the fraction of the circular arc between the points where the two rays intersect the circle. An angle that turns through 1/360 of a circle is called a “one-degree angle,” and can be used to measure angles.
4.MD.5.b An angle that turns through n one-degree angles is said to have an angle measure of n degrees.
4.MD.6 Measure angles in whole-number degrees using a protractor. Sketch angles of specified measure.
4.MD.7 Recognize angle measure as additive. When an angle is decomposed into non-overlapping parts, the angle measure of the whole is the sum of the angle measures of the parts. Solve addition and subtraction problems to find unknown angles on a diagram in real world and mathematical problems, e.g., by using an equation with a symbol for the unknown angle measure.
Major clusters will make up a majority of the assessment; supporting clusters will be assessed through their success at supporting the major clusters, and additional clusters will be assessed as well.
The assessments will strongly focus where the standards strongly focus.
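As referenced in 4.MD.1 above, generating a conversion table is a purely mechanical exercise. A minimal sketch, where the unit list is an illustrative subset of the standard's:

```python
# Factors for expressing one larger unit in terms of a smaller unit.
CONVERSIONS = {
    ("ft", "in"): 12,
    ("m", "cm"): 100,
    ("kg", "g"): 1000,
    ("hr", "min"): 60,
}

def conversion_table(larger, smaller, rows=3):
    """Two-column table of (larger, smaller) pairs, e.g. (1, 12), (2, 24), ..."""
    factor = CONVERSIONS[(larger, smaller)]
    return [(n, n * factor) for n in range(1, rows + 1)]

print(conversion_table("ft", "in"))  # [(1, 12), (2, 24), (3, 36)]
```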
Strip mining is most often used on the plains and prairies, where coal seams are fairly horizontal and close to the surface. Large equipment, such as draglines, removes the overlying soil and rock (overburden) to expose the coal seam, and the coal is then extracted in a series of rows called strips. Overburden is placed in piles behind the area being mined. When extraction is complete, the overburden is replaced and the land is returned as much as possible to its former state. In the early days of strip mining there arose a prejudice in some circles against strip-produced coal. It was felt that strip-mined coal was not as good as underground coal. There was no basis to this, however, as the only difference between these coals is the method of getting at them. Stripping techniques are actually less costly, and may produce a lower-cost fuel for homes and industries throughout Alberta and Canada.
Language and land are intricately connected. Indeed, languages and dialects tend to get their names from the regions where they are spoken. What happens, then, when a people has no land of its own? For most of Jewish history, this was the linguistic situation of the Jews. Aside from a few hundred years during the first and second millennia B.C.E. and the past half-century in modern Israel, Jews have not had a homeland, and thus instead of speaking a single language, they have spoken many. Hebrew is the language of the Bible and of traditional Jewish liturgy. As such, it is integrally connected with the Jewish religion. The rabbis attributed theological significance to the Hebrew language. Rabbinic literature refers to Hebrew as lashon ha-kodesh, the holy language. In addition, Hebrew was thought to be the language of God and the angels, as well as the original language of all humanity. It is unclear when the Israelites began using Hebrew, but the earliest Hebrew texts date from the end of the second millennium B.C.E. However, Hebrew’s primacy as a spoken language began to diminish following the destruction of the First Temple in 587 B.C.E. and the exile that followed. In the Middle Ages, Hebrew was used primarily for ritual and religious purposes. During the Jewish Enlightenment of the 18th and 19th centuries, however, Jews adopted Hebrew as a secular language. Hebrew newspapers and novels began to emerge, and a number of scholars took up the task of transforming Hebrew into a modern spoken tongue. With the rise of Zionism this endeavor gained political and practical relevance. Today Modern Hebrew is the official language of the State of Israel.
Scientists have discovered how bacteriophages - viruses that infect bacteria - manage to pierce the bacterial membrane: with an iron spike. When the researchers crystallized a smaller fragment of the spike protein, the x-rays were finally able to resolve its structure, and from this the team had the very first picture of the tip of the spike: a single iron atom held in place by six amino acids, forming a sharp needlelike tip—perfectly suited for piercing the outer membranes of bacteria. The team reports its findings this month in Structure. Scientists had always assumed that when phages drill their way through the outer membrane, they first have to soften it up a bit in some way, says Mark van Raaij, a biologist and virus expert at the Instituto de Biologia Molecular de Barcelona in Spain, who was not involved in the work. But the discovery of the sharp iron needle, he says, suggests that the phages P2 and Φ92 don't need any help. "It's like driving a nail or stake through the membrane of the bacteria."
Learning Math: Measurement This course for elementary and middle school teachers examines some of the major ideas in measurement. You will explore procedures for measuring and learn about standard units in the metric and customary systems, the relationships among units, and the approximate nature of measurement. You will also examine how measurement can illuminate mathematical concepts such as irrational numbers, properties of circles, and area and volume formulas, and discover how other mathematical concepts can inform measurement tasks such as indirect measurement.
1. What Does It Mean To Measure?—Explore what can be measured and what it means to measure. Identify measurable properties such as weight, surface area, and volume, and discuss which metric units are more appropriate for measuring these properties.
2. Measurement Fundamentals—Investigate the difference between a count and a measure, and examine essential ideas such as unit iteration, partitioning, and the compensatory principle. Learn about the many uses of ratio in measurement and how scale models help us understand relative sizes.
3. The Metric System—Learn about the relationships between units in the metric system and how to represent quantities using different units. Estimate and measure quantities of length, mass, and capacity, and solve measurement problems.
4. Angle Measurement—Review appropriate notation for angle measurement, and describe angles in terms of the amount of turn. Use reasoning to determine the measures of angles in polygons based on the idea that there are 360 degrees in a complete turn. Learn about the relationships among angles within shapes, and generalize a formula for finding the sum of the angles in any n-gon. (A short sketch of this formula follows the course list.)
5. Indirect Measurement and Trigonometry—Learn how to use the concept of similarity to measure distance indirectly, using methods involving similar triangles, shadows, and transits. Apply basic right-angle trigonometry to learn about the relationships among steepness, angle of elevation, and height-to-distance ratio.
6. Area—Learn that area is a measure of how much surface is covered. Explore the relationship between the size of the unit used and the resulting measurement. Find the area of irregular shapes by counting squares or subdividing the figure into sections.
7. Circles and Pi—Investigate the circumference and area of a circle. Examine what underlies the formulas for these measures, and learn how the features of the irrational number pi affect both of these measures.
8. Volume—Explore several methods for finding the volume of objects, using both standard cubic units and non-standard measures. Explore how volume formulas for solid objects such as spheres, cylinders, and cones are derived and related.
9. Measurement Relationships—Examine the relationships between area and perimeter when one measure is fixed. Determine which shapes maximize area while minimizing perimeter, and vice versa. Explore the proportional relationship between surface area and volume.
10. Classroom Case Studies, K–2—Watch this program in the 10th session for K–2 teachers. Explore how the concepts developed in this course can be applied through case studies of K–2 teachers (former course participants who have adapted their new knowledge to their classrooms), as well as a set of typical measurement problems for K–2 students.
11. Classroom Case Studies, 3–5—Watch this program in the 10th session for grade 3–5 teachers.
Explore how the concepts developed in this course can be applied through case studies of grade 3–5 teachers (former course participants who have adapted their new knowledge to their classrooms), as well as a set of typical measurement problems for grade 3–5 students.
12. Classroom Case Studies, 6–8—Watch this program in the 10th session for grade 6–8 teachers. Explore how the concepts developed in this course can be applied through case studies of grade 6–8 teachers (former course participants who have adapted their new knowledge to their classrooms), as well as a set of typical measurement problems for grade 6–8 students.
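As noted in the session 4 description, the angle-sum formula follows from the fact that a full turn is 360 degrees: walking once around any n-gon turns you through 360 degrees of exterior angle, leaving (n - 2) x 180 degrees of interior angle. A minimal sketch:

```python
def interior_angle_sum(n_sides):
    """Sum of the interior angles of an n-gon, in degrees."""
    # Exterior angles of any polygon total one full turn (360 degrees),
    # and each interior/exterior pair sums to 180 degrees.
    return n_sides * 180 - 360  # equivalently (n_sides - 2) * 180

for n in (3, 4, 5, 6):
    print(n, interior_angle_sum(n))  # 180, 360, 540, 720
```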
Level A Student Workbook The aim of this series is to teach the student the fundamental principles of phonics in a logical sequence, along with consistent drill and repetition of these concepts to ensure comprehension. These courses cover short and long vowels, consonants, and consonant blends, as well as essential word-recognition skills. The general plan of this workbook includes the introduction of phonetic principles in a logical sequence, along with a consistent dose of drill and repetition of these concepts to ensure comprehension. Students are often directed to demonstrate their comprehension of lesson materials by way of written exercises. Ideally, students should be encouraged to complete most of these exercises by themselves. However, some students may become unnecessarily frustrated with the quantity of written work which appears throughout the book. Therefore, instructors should feel free to allow their students to complete some of their workbook lessons orally. Perhaps some of the written exercises can be completed orally by the student while the teacher fills in the student's answers. Instructors are encouraged to be sensitive to the individual capabilities of each of their students, especially in the area of handwriting development. - Type: Paperback - Category: Home Schooling - ISBN / UPC: 9781930092747 / 1930092741 - Publish Date: 1/1/1993 - Item No: 144922 - Vendor: Christian Liberty Press
Student's Corner - Nuclear Reactors
There Are Two Types of Reactors in the United States
The Pressurized Water Reactor (PWR): Pressurized Water Reactors are known as "PWRs." They keep water under pressure so that it heats but does not boil. Water from the reactor and the water that is turned into steam are in separate pipes and never mix.
The Boiling Water Reactor (BWR): Boiling Water Reactors are known as "BWRs." In BWRs, the water heated by fission actually boils and turns into steam to turn the generator.
In both types of plants, the steam is turned back into water and can be used again in the process. Radioactivity must be carefully managed because it can be dangerous if not handled properly. It can damage human cells or cause cancer over time. Since the fission process creates radioactivity, all nuclear power plants have many safety systems that protect workers, the public and the environment. For example, systems allow the fission process to be stopped and the reactor to be shut down quickly. Other systems cool the reactor and carry heat away from it. Barriers keep the radioactivity from escaping into the environment. In reactors, radioactive material is contained inside small ceramic pellets about the size of an adult's finger. They are placed in long metal rods inside a reactor vessel, which is enclosed in a concrete and steel containment building. These buildings have walls three to six feet thick!
Featured Image - (04/15/2008) Peek Crater: Potential Future Lunar Landing Site Peek crater is a small (diameter = 12 km) crater in the northern part of Mare Smythii within Smythii basin. Smythii basin is an ancient basin on the eastern limb of the Moon. The basalts filling this basin are thought to be very young, being only 1-2 billion years old. For this reason, among others, Peek crater is a high-priority target for direct human exploration when human lunar landing missions resume in the next decade. Figure 1: Contrast-enhanced segment of Apollo 15 metric mapping frame AS15-M-0346, showing the proposed human lunar landing site near Peek crater in northern Mare Smythii. (Apollo Image AS15-M-0346 [NASA/JSC/Arizona State University]) Why do we care about young mare basalts? The composition of mare basalts is very important to lunar scientists because, like on Earth, the magmas that produce lunar basalts form deep within the Moon. Studying the chemical composition of mare basalts therefore gives us a window to the composition of the lunar interior. It turns out that while most of the lunar basalt samples collected by the Apollo explorers (and most lunar basalts in general) are about 3-4 billion years old, recent studies performed using Clementine and Lunar Orbiter data show that mare volcanism continued until about 1-2 billion years ago in some parts of the Moon, like Mare Smythii. Since the American Apollo and Soviet Luna landings only directly sampled a handful of places on the lunar surface, there are a lot of basalt types on the lunar surface that we haven't investigated yet! One of the reasons why lunar scientists are so keen to send human explorers and their robotic scouts to some of the younger lunar basalts is to learn how the younger basalts are chemically different from the older basalts sampled during the Apollo missions. This will tell us about how the composition of the lunar interior changed over time and also help us to understand more about the thermal history of the Moon. We think that the Moon has now completely cooled and no longer has any volcanic activity. But when did this volcanic activity stop, and why? By studying young mare basalts like the ones near Peek crater, we can answer these questions and learn more about how terrestrial planets work! What led NASA to identify the Peek crater region as being a possible landing site for one of the next human lunar landings? There are two main reasons. First, Peek crater itself acts like a natural drill core into Mare Smythii. In principle, this will let future human explorers sample lava flows at different depths within Mare Smythii, which will greatly enhance the science that can be done at the site. Second, a vital goal of the current lunar exploration program is to take advantage of local lunar resources. This will make lunar explorers more self-sufficient and save money! NASA is interested in the Peek crater region because of the resources found there. The lunar regolith (the broken-up rocks and impact products that make up the first 10 m or so of the lunar surface) in this region is derived in part from the local iron-rich Smythii basin basalts. This iron-rich regolith material could be used for a variety of vital purposes, including the construction of human habitats, radiation shielding, or as feedstock for local resource utilization. Iron is an important industrial material on Earth, and it will be no less important for indigenous lunar industrial development.
Many potential sites for future human lunar exploration have been identified by NASA, and these are high priority targets for the Lunar Reconnaissance Orbiter Camera (LROC). After the Lunar Reconnaissance Orbiter mission launches later this year, LROC will provide 0.5 m/pixel images of Peek crater and other potential lunar landing sites in order to help mission planners determine the safest places for the next generation of human lunar explorers to land.
MISSISSIPPI MOUND BUILDERS ANCIENT INDIAN CIVILIZATION Mississippi Mound Builders Mississippi Mound Builders were not limited to just the Mississippi River Valley. Ancient civilizations built mounds in a large area from the Great Lakes to the Gulf of Mexico and from the Mississippi River to the Appalachian Mountains, but the greatest concentrations of mounds are found in the Mississippi and Ohio River valleys. These included societies of the Archaic, Woodland, and Mississippian periods. These Pre-Columbian mounds have been dated from roughly 3000 BCE to the 1500s, and most of these cultures lived in the Great Lakes region, the Ohio River region, and the Mississippi River region. However, there were also mound building cultures as far away as Florida. Once it was thought all the mounds were built by one great ancient civilization. We now know that many different cultures contributed to the ancient mounds found on the North American continent. Archaeological research indicates the mounds of North America were built over a long period of time by very different types of societies, ranging from mobile hunter-gatherers to sedentary farmers. These prehistoric mounds had a wide variety of forms and fulfilled a range of functions. Many served as burial mounds. Others were temple mounds, built as platforms to hold religious structures. Burial mounds were especially common during the Middle Woodland period (c.100 B.C.–A.D. 400), while temple mounds predominated during the Mississippian period (after A.D. 1000). The earliest mounds in the United States have been found at Watson Brake near Monroe, Louisiana. They were built about 6,000 years ago. The purpose of these eleven mounds is unclear. The Archaic mound-building tradition culminated at the Poverty Point Site, in West Carroll Parish, Louisiana, between 1800 B.C. and 500 B.C. Six concentric ridges surround two large mounds, one of which reaches 65 ft (20 meters) high. During the Woodland period (c.500 B.C.–A.D. 1000), domestic crops of sunflowers, goosefoot, erect knotweed, and maygrass were cultivated, allowing people to develop a greater degree of sedentism throughout the Ohio and Mississippi valleys. During the Middle Woodland period (c.200 B.C.–A.D. 400) elaborate earthworks were constructed from the Great Lakes to the Gulf Coast. Large, mainly dome-shaped mounds appeared throughout the Ohio and Tennessee river valleys, some in the form of animal effigies. In the Hopewell culture, centered in Southern Ohio and Illinois, earthen geometric enclosures defined areas ranging from 2.5 to 120 acres (1 to 50 hectares), and some mounds reached 65 ft (20 meters) in height. Mica, ceramic, shell, pipestone, and other material were traded over a vast area, indicating the growth of a system of widely shared religious beliefs but not overall political unity. Analysis of mortuary remains suggests Middle and Late Woodland communities were characterized by a system of social rank. Some kin groups are believed to have had high social prestige with preferential access to rare commodities, and control over positions of political leadership. Towards the end of the Late Woodland period (c.A.D. 400–1000), burial mounds decreased in frequency, and the elaborate burial goods of the Hopewell culture became very rare. In the Mississippian period (after A.D. 1000), maize was cultivated throughout the East. Populations expanded and became increasingly sedentary. At Cahokia Mounds (near East St.
Louis, Illinois) the largest earthwork in North America was built, a temple mound measuring nearly 100 ft high (30 meters) and 975 ft long (300 meters). Many large ceremonial centers with temple mounds appeared throughout the South, especially in the Mississippi Valley. After 1200, a set of distinctive motifs spread throughout the Southeast, from Oklahoma to Northern Georgia, on a variety of media, including shells, ceramics, and pipestone. Elaborate ceremonial copper axes, gorgets and sheet copper plumes have also been found in this area. This complex of distinct motifs is called the Southern Cult. It is thought these items, along with the many temple mounds in existence in this area, indicate a regional religion shared by a large number of local cultures. Mississippian societies are thought to have been complex chiefdoms, the most hierarchical form of political organization to emerge in aboriginal North America. The Toltec Mounds is a group of earthworks, in the lower Mississippi Valley, constructed by the Indians that lived in the region during the Middle Ages. Identification of the site with the Toltecs was a mistake. Hundreds, or perhaps thousands of mounds were built in the Mississippi Delta. Crews of workers labored over generations, sometimes a century or more, before an earthwork reached its final dimensions. Radiocarbon dating has shown that the decline in the Moundbuilder population began more than a century before Europeans arrived in the region. The decline of the Mississippian Indians' population is still a mystery.
The Rapid Ascent of Basalt Magmas by Andrew A. Snelling, Ph.D. It is now well established that the earth's upper mantle is the source of the basalt magmas erupted by many volcanoes as lava flows1--for example, Kilauea Volcano, Hawaii. The earth's crust is predominantly of a granitic composition, whereas the mantle is closer to a basaltic composition. Pieces of mantle rock are often brought to the earth's surface in basalt lava flows. Other evidence also confirms that the basalt magmas are generated by partial melting of the upper mantle rock. Explosive Eruptions and Mantle Water Where such volcanic eruptions occur on the continents, the basalt magmas typically have to ascend some 60-80 kilometers (35-50 miles) from the upper mantle to the surface. Furthermore, the mechanisms by which magmas ascend and the rates of magma ascent are known to play a critical role in the dynamics of volcanic eruptions, but these phenomena have until now been poorly constrained. Critics of catastrophic Flood geology have thus used the presumed long, slow ascent of basalt magmas, and their uniformitarian extrapolation back into the past of the small volumes of basalt magmas delivered to the earth's surface today, to insist that many millions of years of eruptions would have been needed to produce the basalt lavas found in the geologic record. However, the ascent rate of the gas-rich, explosively erupted kimberlite magmas that host diamonds has previously been determined as four meters per second (about 787 feet per minute or nine miles per hour)!2 Such a rapid ascent rate is crucial to survival of the diamonds carried by these unusual magmas from 200-400 kilometers (125-250 miles) down in the mantle up to the earth's surface. A slower ascent rate would result in the diamonds turning to graphite. To put this ascent rate into perspective, it only takes between 12 and 30 hours for the diamond-carrying kimberlite magma to travel from 200-400 kilometers depth in the mantle up to erupt at the earth's surface (Figure 1). Small amounts of water have been found dissolved as hydrogen and hydroxyl ions (the dissociated components of water) in the minerals within fragments of mantle rocks (xenoliths) brought to the earth's surface in basalt magmas.3 Even those small amounts have major effects on physical and chemical processes in the mantle, also being critical to plate tectonics.4 Furthermore, experimental studies have shown that this water dissolved in mantle minerals would likely be partially lost during transport to the earth's surface, being partitioned into the ascending magma.5 Consequently, measuring the water still dissolved in such minerals within xenoliths in erupted basalts could provide clues to quantifying magma ascent rates prior to eruption. Patagonian Basalt Study Such a study has now been undertaken.6 Olivine crystals were separated from garnet-bearing mantle xenoliths within the Quaternary (post-Flood) alkali olivine basalt flows of Pali-Aike, Chile,7 for Fourier Transform Infrared (FTIR) analysis. The temperature and pressure conditions recorded at the centers of the mineral crystals in these xenoliths indicate that these pieces of mantle rock originally resided at a depth of 60-80 kilometers.8 Furthermore, there is no geophysical evidence below these lava flows of a magma chamber in which the xenoliths could have been stored for an extended period and become equilibrated with their host magma during transport from the mantle to the earth's surface.
The alkali basalts hosting the mantle xenoliths erupted at a temperature estimated to have been between 1200ºC and 1290ºC.9 Furthermore, FTIR measurements of the visible clinopyroxene crystals (phenocrysts) in the basalts show no evidence of hydroxyl (OH) incorporated in them. This, together with the absence of amphibole, indicates that the basalts were undersaturated in water, making the basaltic magma which transported the mantle xenoliths an effective "sink" (or potential receiver) for hydrogen. An environment thus existed in which the mantle xenoliths could have become progressively dehydrated during magma ascent, and in proportion to the rate of ascent. Profiles of FTIR measurements across individual grains in the mantle xenoliths revealed that the water distribution in the pyroxene grains was homogeneous, in contrast to the olivine grains, whose rims were hydroxyl depleted.10 In total, thirty olivine grains were studied, and all olivine grains larger than 0.8 mm across had hydroxyl-depleted rims. Additionally, profile measurements were repeated on two of the olivine grains while crystallographically oriented, because it is known that hydrogen diffusion in olivine is related to its crystal structure. These measurements confirmed that the rims of the olivine grains in the mantle xenoliths were hydroxyl-depleted. This indicates that this olivine was dehydrating in the water-undersaturated host basalt magma as the mantle xenoliths were engulfed by it and transported up from 60-80 kilometers depth to the earth's surface. Calculating a Rapid Ascent Rate Using experimentally-determined diffusion coefficients for hydration of olivine,11 water diffusion profiles were calculated for all three crystallographic axes of an olivine grain at a temperature of 1245±45ºC for various durations, with an initial water content of ~312 weight parts per million (wt ppm) and a final water content of 0 wt ppm at its rim. Thus it was possible to approximate the ascent rate of the mantle xenoliths and, by extension, their host basalt. The calculated ascent times ranged from 1.9 hours at 1290ºC to 3.4 hours at 1245ºC and 6.3 hours at 1200ºC. Furthermore, FTIR analyses across cracks in the olivine grains did not exhibit any perturbations of the hydrogen profiles, so hydrogen diffusion from the grain rims occurred predominantly prior to the cracking of the grains near the earth's surface or after the eruption of the host basalt. Therefore, these mantle xenoliths must have reached the earth's surface in a matter of only several hours. Assuming a depth of origin for the xenoliths of 60-80 kilometers, the corresponding ascent rate is 6±3 meters per second (13.5±6.5 miles per hour). Because these xenoliths are denser than the host magma, this estimate gives a minimum ascent rate for the host alkali basalt magma. That equates to this basalt magma only taking between two and eight hours to travel from the upper mantle to erupt at the earth's surface. Such a rapid ascent to the earth's surface is consistent with the freshness of these xenoliths and is similar to the ascent rate of four meters per second determined for volatile-rich kimberlite magmas containing diamonds. Any claim that the eruptions of basalt lava flows are a timescale problem for the Genesis Flood on a young earth can now be easily dismissed.
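The final arithmetic step is simple enough to check directly. This sketch combines the article's quoted source depths (60-80 km) and diffusion-derived transit times (1.9-6.3 hours); pairing the extremes is my own bracketing, not the paper's formal error analysis, which quotes 6±3 meters per second.

```python
def ascent_rate_m_per_s(depth_km, hours):
    """Average ascent rate for magma rising depth_km in the given time."""
    return depth_km * 1000.0 / (hours * 3600.0)

depths_km = (60.0, 80.0)  # inferred xenolith source depths
times_hr = (1.9, 6.3)     # transit times from olivine dehydration profiles

rates = [ascent_rate_m_per_s(d, t) for d in depths_km for t in times_hr]
print(f"{min(rates):.1f} to {max(rates):.1f} m/s")  # roughly 2.6 to 11.7 m/s
```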
If it only takes basalt magmas between two and eight hours to travel from their upper mantle sources to erupt through volcanoes at the earth's surface, then many basalt volcanic eruptions could have easily occurred during the Flood year. Furthermore, the volume and scale of the basalt lavas found in the geologic record, such as the so-called flood basalts of the Deccan and Siberian Traps,12 testify to the global catastrophism operating in the Flood year, in contrast to today's occasional, small, and relatively insignificant basalt eruptions. The bigger question is how so much of the upper mantle rock partially melted quickly enough to generate those enormous volumes of flood basalts. However, during the Flood year the pre-Flood ocean floor ruptured into plates that sank into the mantle via thermal runaway subduction, the resulting mantle-wide convective flow generating huge mantle plumes and rapid melting of enormous volumes of upper mantle rock beneath the mid-ocean rift zones.13 Thus catastrophic plate tectonics during the Flood is the only viable explanation for the many basalt flows found in the earth's rock record. And this new experimental evidence confirms the rapid ascent and eruption of basalt lavas, consistent with the Biblical framework of earth history. - Hall, A. 1996. Igneous petrology, 2nd ed. Harlow, England: Addison Wesley Longman Ltd. - Kelley, S. P., and J.-A. Wartho. 2000. Rapid kimberlite ascent and significance of Ar-Ar ages in xenolith phlogopites. Science 289:609-611. - Bell, D. et al. 2003. Hydroxide in olivine: A quantitative determination of the absolute amount and calibration of the IR spectrum. Journal of Geophysical Research 108, doi: 10.1029/2001JB000679. - Hirth, G., and D. L. Kohlstedt. 1996. Water in the oceanic upper mantle: implications for rheology, melt extraction and the evolution of the lithosphere. Earth and Planetary Science Letters 144:93-108. Regenauer-Lieb, K., et al. 2001. The initiation of subduction: Criticality by addition of water? Science 294:578-580. - Ingrin, J., and H. Skogby. 2000. Hydrogen in nominally anhydrous upper-mantle minerals: Concentration levels and implications. European Journal of Mineralogy 12:543-570. - Demouchy, S. et al. 2006. Rapid magma ascent recorded by water diffusion profiles in mantle olivine. Geology 34:429-432. - Skewes, M. A., and C. R. Stern. 1979. Petrology and geochemistry of alkali basalts and ultramafic inclusions from the Pali-Aike Volcanic Field in southern Chile and the origin of the Patagonian Plateau lavas. Journal of Volcanology and Geothermal Research 6:3-25. - Stern, C. R. et al. 1999. Evidence from mantle xenoliths for relatively thin (<100 km) continental lithosphere below the Phanerozoic crust of southernmost South America. Lithos 48:217-235. - D'Orazio, M. et al. 2000. The Pali-Aike Volcanic Field, Patagonia: Slab-window magmatism near the tip of South America. Tectonophysics 321:407-427. - Demouchy et al., ref. 6. - Kohlstedt, D. L., and S. J. Mackwell. 1999. Solubility and diffusion of "water" in silicate minerals. In Microscopic properties and processes in minerals, ed. K. Wright and R. Catlow, 539-559. Dordrecht, The Netherlands: Kluwer Academic Publishers. - Jerram, D. A., and M. Widdowson. 2005. The anatomy of continental flood basalt provinces: Geological constraints on the processes and products of flood volcanism. Lithos 79:385-405. - Austin, S. A., J. R. Baumgardner, D. R. Humphreys, A. A. Snelling, L. Vardiman, and K. P. Wise. 1994.
Catastrophic plate tectonics: A global Flood model of earth history. In Proceedings of the Third International Conference on Creationism, ed. R. E. Walsh, 609-621. Pittsburgh, PA: Creation Science Fellowship. Cite this article: Snelling, A. 2007. The Rapid Ascent of Basalt Magmas. Acts & Facts. 36 (8): 10.
How did millions of mammoth fossils form? Q: ‘It has been guesstimated that there are the remains of some six million woolly mammoths in the Arctic Circle alone. My neighbour states that it would be impossible for such a number to have multiplied in the approx. 1700 years from Creation to the Flood.’ — J.E. A: First, six million mammoths is hugely exaggerated. There are fewer than 50 known woolly mammoth carcasses, only about a half-dozen of which were complete. An estimated 50,000 tusks have been found, although there may have been a million mammoths living at one time. Second, modern creationists think that the mammoths were not fossilized by the Flood. Rather, they were fossilized about 700 years later by catastrophes towards the end of the Ice Age, which was an aftermath of the Flood. This is shown by the fossil locations — always in deposits near the surface throughout the mid and high latitudes, mostly in river valleys, occasionally in ice wedges.1 Third, the large numbers are a problem for the sceptic only because he has not performed the simple calculations required. Consider that the African elephant reaches breeding age at about 14, and its gestation period averages 670 days, while the Indian elephant matures even earlier and has a shorter gestation time.2 Thus it would not be unrealistic to assume that a single mammoth pair could have four offspring by the age of 25. So it is actually generous to the sceptic to assume that the population could double four times per century (even if the parents in each generation died soon after their offspring were weaned). The mammoths would probably have multiplied quite quickly after the single mammoth pair3 disembarked from the Ark, since there was less competition around. It takes only 22 population doublings to exceed eight million, and this number could be reached in only 550 years. - This is explained in the children’s book Life in the Great Ice Age by Michael and Beverly Oard, and Michael’s more technical book An Ice Age Caused by the Genesis Flood. Also, Creation 19(1):42–43, 1996, has an interview with Oard. - ‘Elephant’, Encyclopædia Britannica, 4:441–442, 15th Ed. 1992. - This assumes that representatives of every genus of land vertebrate were on board the Ark — see Woodmorappe, J., Noah’s Ark: a Feasibility Study, Institute for Creation Research, El Cajon, CA, USA, 1996. But it’s likely that mammoths, mastodons, stegodons and modern elephants descended from a created elephantine kind.
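The doubling arithmetic in the answer above is easy to check in a few lines; here is a minimal sketch using the article's own assumptions (one founding pair, four doublings per century):

```python
# Start from one pair and double the population four times per century
# (one doubling every 25 years), as assumed in the answer above.

population = 2
doublings = 0
while population <= 8_000_000:
    population *= 2
    doublings += 1

years = doublings * 25
print(doublings, population, years)
# 22 doublings -> 8,388,608 mammoths after 550 years, matching the
# figures in the text.
```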
Only 250 miles separates the island of Madagascar from the southeast coast of Africa. The short distance between the two land masses traditionally led the outside world to assume that the native inhabitants of Madagascar – known as the Malagasy – originally came from the west, probably from the present-day southeast African nation of Mozambique. Yet upon closer examination of the Malagasy's language and their physical features, many scholars began to question this notion. The Malagasy of the central plateau of Madagascar, known as the Highlanders, had light skin and facial features more akin to those of Southeast Asia or Indonesia. They also practiced a rice culture not unlike the rice cultures of Asia. And yet the coastal Malagasy, known as the Côtiers, seemed just the opposite. They had darker skin and curly hair, more similar to modern-day Africans. But both the Highlanders and the Côtiers speak the same language, which shares 90% of its vocabulary with a language spoken today in southeast Borneo, and which has been officially classified as a branch of the Austronesian language family called West Malayo-Polynesian. So how could a significant portion of the Malagasy seem to share more in common with a region 5,000 miles away than they do with mainland Africa? Trying to find the answers to these questions has vexed archaeologists, historians and linguists for generations. Over the past several years, geneticists have entered the fray to try to unravel the mysterious origins of the Malagasy. Their most recent effort appears this week in Molecular Biology and Evolution. This study, led by Sergio Tofanelli of the University of Pisa, built upon a 2005 study by Matt Hurles and colleagues that was the first genetic exploration of the Malagasy people. But Tofanelli and his colleagues wanted to dig even deeper into the genetic history of the Malagasy. So they took the data analyzed by Hurles in addition to new DNA samples collected from people across the island of Madagascar. They focused on two regions of the human genome often used in genetic ancestry studies: the mitochondrial DNA (mtDNA) and the Y chromosome. Because the mtDNA is used to trace maternal ancestry, and the Y chromosome to trace paternal ancestry, analyzing both in the same study can give a more complete picture of a group's genetic history. Tofanelli and his research team examined the mtDNA and Y chromosomes of Malagasy individuals scattered across the island, from both the Highlander and Côtier groups. They were searching for any clues that would give an exhaustive understanding of how and when the island of Madagascar was first settled, and by whom. The researchers' analysis revealed a mixture of both African and Asian genetic ancestry in both the Highlanders and the Côtiers, which is perhaps contrary to the two groups' physical appearance. So what does this mean? That even the Côtiers, who often look more African in appearance, have an ancestry that traces back to Asia, specifically Borneo. These results fit well with Hurles' study and with what linguists have been saying for years: that the Malagasy language – while clearly tracing back to Borneo – also has some significant African elements. The results from these analyses then raised the next question — how and when did the earliest inhabitants of Madagascar arrive on the island?
Was it in two separate migrations – one from the east and one from the west – or did the Asian/African genetic make-up of the Malagasy exist prior to their first steps on Madagascar? It is easy to assume that any intermarriage between Africans and Southeast Asians happened after each arrived on the island. In fact, Tofanelli describes the genetic make-up of the Malagasy as a consequence of “the encounter of people surfing the extreme edges of two of the broadest historical waves of expansion” in human history. He is referring to the sub-Saharan African Bantu expansions that began 5,000 years ago and swept across Africa from Cameroon to Mozambique and southern Africa, and the Austronesian expansions about 4,000 years ago, when seafarers journeyed from Taiwan to Borneo and beyond. But Tofanelli proposes an alternative hypothesis as well. He argues for a long history of contact between Bantu-speaking Africans and seafarers from Borneo dating back thousands of years. As evidence he cites banana cultivation in Cameroon and Uganda that can be traced back to Southeast Asia, as well as the introduction of humped cattle into Africa from Asia. If the Southeast Asians and eastern Africans shared farming techniques, it stands to reason that they may have shared genes as well. Thus the people of Madagascar may not simply have been Africans and Southeast Asians arriving on the island from opposite directions; rather, they may represent a more complex genetic history of proto-Malagasy arriving on Madagascar about 2,300 years ago, already carrying a mixture of Asian and African ancestry. This hypothesis most certainly needs additional evidence and data before it can be confirmed, but it brings a new level of understanding to the mysterious origins of the Malagasy.
The parallel rules are used to plot courses, bearings, and celestial lines of position. By “walking” the rules across the chart, the navigator transfers the desired angle from the compass rose on the chart to the part of the chart where the ship is, or vice versa. A pair of triangles can do the same thing, while course plotters are designed to minimize the effort of laying out angles. Lobsterman getting traps ready in the spring. Traps have to be cleaned, dried, and repaired. This was especially hard work in the days of wooden traps, which rotted over time. Traps are being loaded onto a lobster boat that is just visible. A peapod is pulled out and set upside down on the nearby dock. Berry probably took this photograph in New Harbor or Round Pond. Each warp, or line, coming up from a pot or two on the bottom of the sea ends at a lobster pot buoy on the surface. Originally these buoys were cut with a hatchet from a small spruce trunk. Once laths became available, fishermen could turn buoys from larger pieces of wood. Small wood shops could make them in quantity for sale. Now buoys are hard foam. Lobster measure, for measuring the carapace, or body shell, of the lobster. The shorter measure is 3 1/4", and that is the minimum allowable size of the carapace; the maximum is 5". If the lobster carapace is between these two lengths, and the lobster is not an egg-bearing female, it may be kept. Otherwise, the lobster must be thrown back. This measure has a float attached to it, so that it won't sink if dropped overboard. Small wide rubber bands are used to hold the large claws of the lobster closed, in order to keep lobsters from hurting each other when stored or shipped together. Before rubber bands, lobster fishermen whittled plugs that could be inserted into a lobster's claw to prevent it from opening. This diagram shows important tools of the navigator for dead reckoning and piloting, including the lead line, chip log, a mechanical log, and a compass. For communications, speaking tubes are shown on the left. This illustration is from H. Paasch's Illustrated Marine Encyclopedia, 1890, Plate 98. For a 17th century navigator only the mechanical log would have been new. Triangles provide a simple and inexpensive alternative to the parallel rule for transferring a bearing or course from the compass rose to the ship’s position on a chart. These triangles are made of wood.
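The keep-or-release rule in the lobster-measure caption amounts to a small decision function. Here is a sketch of it; the function name is our own, the inch thresholds mirror the caption, and the rule is simplified to the two conditions stated there:

```python
# Legal-size check as described in the caption: carapace between
# 3 1/4" and 5", and not an egg-bearing female.

def may_keep(carapace_in: float, egg_bearing: bool) -> bool:
    """True if the lobster may be kept under the rule above."""
    return 3.25 <= carapace_in <= 5.0 and not egg_bearing

print(may_keep(4.0, egg_bearing=False))  # True: legal size, no eggs
print(may_keep(3.0, egg_bearing=False))  # False: undersized
print(may_keep(4.0, egg_bearing=True))   # False: egg-bearing female
```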
Deforestation accounts for about a fifth of all global carbon emissions, second only to the burning of fossil fuels, according to the United Nations Framework Convention on Climate Change (UNFCCC).1 But ever since the Kyoto Protocol was drafted in 1997, countries have been divided over how to incorporate forest protection into global emission-reduction plans. At the UNFCCC meeting in Montreal in 2005, the Coalition for Rainforest Nations, a group of nine countries led by Papua New Guinea, proposed that there be incentives for countries to control emissions by reducing deforestation. Their proposal evolved into REDD: Reducing Emissions From Deforestation and Forest Degradation, the United Nations’ model for curbing deforestation. Under REDD, polluters could offset their emissions by purchasing carbon credits generated by governments, companies, and communities like Cocomasur that protect forests. But as countries began hammering out a post-Kyoto climate plan in subsequent UNFCCC meetings, the chorus of REDD critics grew, particularly with regard to carbon trading. Opponents say measuring the amount of carbon stored in forests is complex and prone to errors, and that protecting one part of a forest may have the unintended consequence of shifting deforestation to adjacent, unprotected areas. Critics point out that offsets could allow companies to continue polluting with no net reduction in carbon dioxide emissions. Indigenous peoples and other communities that depend on forests for their survival are also concerned that REDD may not provide sufficient safeguards to prevent land grabs. In countries where land tenure and rights to forest carbon are not well defined, they fear the financial incentives of REDD could lead to displacement and other abuses. For now, a formal UN framework for REDD is on hold but likely to be developed in the coming years. Delays in the UN process have not, however, prevented the emergence of hundreds of REDD pilot projects around the world, many in developing countries with funds provided by individual countries, the World Bank, private donors, and other sources. Nations with weak governance and limited resources often struggle to uphold the rights of forest peoples. But certification programs through groups such as the Climate, Community, and Biodiversity Alliance and Verified Carbon Standard are encouraging pilot projects that adhere to higher environmental standards and community protections, creating a blueprint for future models that aim to be more equitable and inclusive. Autumn Spanne is an independent journalist who writes about climate change, biodiversity, and environmental justice. She has an M.S. from Columbia University’s Graduate School of Journalism. 1. United Nations Framework Convention on Climate Change, “Fact Sheet: The Need for Mitigation,” November 2009, available at unfccc.int.
Researchers have determined that the earth's recovery from its greatest mass extinction took about 10 million years. Due to a great cataclysm that occurred 250 million years ago, our planet experienced a mass extinction in which only 10 percent of plants and animals survived. According to a new article published in Nature Geoscience, the planet's recovery from this massive destruction and its consequences was delayed by about 10 million years. There were two reasons behind the delay: the sheer intensity of the crisis, and the continuation of grim conditions on Earth after the first wave of extinction, suggest Dr Zhong-Qiang Chen from the China University of Geosciences in Wuhan and Professor Michael Benton from the University of Bristol. Life on earth was severely threatened by the ancient crisis, which induced a series of physical environmental shocks: global warming, acid rain, ocean acidification, and ocean anoxia. The study indicated that harsh conditions continued for some five to six million years after the initial crisis, which had killed 90 percent of living things on land and in the sea. "It is hard to imagine how so much of life could have been killed, but there is no doubt from some of the fantastic rock sections in China and elsewhere round the world that this was the biggest crisis ever faced by life," Dr Chen said. The researchers believe that though some groups of animals in the sea and on land did recover quickly and began to rebuild their ecosystems, they suffered further setbacks. Once the environmental crises eased, more complex ecosystems emerged; for instance, ancestral crabs and lobsters came on the scene and formed the basis of future modern-style ecosystems.
Source: National Institute of Standards and Technology Physics Laboratory Not until somewhat recently (that is, in terms of human history) did people find a need for knowing the time of day. As best we know, 5000 to 6000 years ago great civilizations in the Middle East and North Africa initiated clock-making. With their bureaucracies and formal religions, these cultures found a need to organize their time more efficiently. After the Sumerian culture was lost without passing on its knowledge, the Egyptians were the next to formally divide their day into parts something like our hours. Obelisks (slender, tapering, four-sided monuments) were built as early as 3500 B.C. Their moving shadows formed a kind of sundial, enabling citizens to partition the day into two parts by indicating noon. They also showed the year's longest and shortest days when the shadow at noon was the shortest or longest of the year. Later, markers added around the base of the monument would indicate further time subdivisions. Another Egyptian shadow clock or sundial, possibly the first portable timepiece, came into use around 1500 B.C. to measure the passage of “hours.” This device divided a sunlit day into 10 parts plus two “twilight hours” in the morning and evening. When the long stem with 5 variably spaced marks was oriented east and west in the morning, an elevated crossbar on the east end cast a moving shadow over the marks. At noon, the device was turned in the opposite direction to measure the afternoon “hours.” The merkhet, the oldest known astronomical tool, was an Egyptian development of around 600 B.C. A pair of merkhets were used to establish a north-south line by lining them up with the Pole Star. They could then be used to mark off nighttime hours by determining when certain other stars crossed the meridian. In the quest for more year-round accuracy, sundials evolved from flat horizontal or vertical plates to more elaborate forms. One version was the hemispherical dial, a bowl-shaped depression cut into a block of stone, carrying a central vertical gnomon (pointer) and scribed with sets of hour lines for different seasons. The hemicycle, said to have been invented about 300 B.C., removed the useless half of the hemisphere to give an appearance of a half-bowl cut into the edge of a squared block. By 30 B.C., Vitruvius could describe 13 different sundial styles in use in Greece, Asia Minor, and Italy.
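The obelisk principle lends itself to a one-formula illustration: a vertical gnomon's shadow length is its height divided by the tangent of the sun's elevation, which is why the shortest shadow of the day marks noon. The 20-meter height and the sample elevations below are arbitrary example values, not measurements of any particular monument.

```python
import math

# Shadow cast by a vertical gnomon of a given height at a given solar
# elevation. Height and elevations are illustrative choices.

def shadow_length(height_m: float, solar_elevation_deg: float) -> float:
    """Length of the shadow cast by a vertical gnomon."""
    return height_m / math.tan(math.radians(solar_elevation_deg))

for elev in (15, 30, 45, 60, 75):
    print(f"sun at {elev:2d} deg -> shadow {shadow_length(20, elev):6.1f} m")
# The shadow shortens as the sun climbs, reaching its daily minimum at
# local noon; the subdivision markers around an obelisk's base worked
# off this same geometry.
```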
Nov. 22, 2010 Jump ropes are used by kids for fun and by athletes for training. But what about the underlying physics? How do jump ropes work? Can important engineering principles be studied? Jeff Aristoff and Howard Stone of Princeton University have built themselves a robotic jump rope device that controls all the rope parameters -- rope rotation rate, rope density, diameter, length, and the distance between "hands." They capture the motion of the ropes by high-speed cameras, one to the side and one at the end. Then they compare the observed behavior with predictions made by their equations -- work they presented November 21 at the American Physical Society Division of Fluid Dynamics (DFD) meeting in Long Beach, CA. "Our main discovery is how the air-induced drag affects the shape of the rope and the work necessary to rotate it," says Princeton researcher Jeff Aristoff. "Aerodynamic forces cause the rope to bend in such a way that the total drag is reduced." (Leaves do this too when they bend out of the wind.) This deflection or twisting is most important in the middle of the rope and the least at the ends. If the rope is too light it might not clear the body of the jumper. "Implications for successful skipping will be discussed, and a demonstration is possible," said Aristoff about his presentation at the meeting. "Fluid dynamic effects on long flexible filaments occur in both engineered structures and many natural systems, so insights from the jump rope will hopefully inform other common situations," he added.
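To get a feel for the size of the effect being measured, a rigid-rod baseline is easy to compute: the aerodynamic power needed to spin a straight rod about one end follows from integrating the drag on each element. All the numbers below are illustrative assumptions; a real rope bows into a curve that reduces this figure, which is exactly the drag-reducing deflection the researchers describe.

```python
import math

# Aerodynamic power to rotate a rigid straight rod about one end.
# Drag per unit length at radius r is 0.5*rho*Cd*d*(omega*r)^2, and
# integrating torque times omega along the rod gives
#   P = 0.5 * rho * Cd * d * omega^3 * L^4 / 4.
# Every parameter below is an assumed, illustrative value.

rho = 1.2      # kg/m^3, air density
Cd = 1.0       # drag coefficient of a cylinder (assumed)
d = 0.008      # m, rope diameter
L = 1.2        # m, rotating length
omega = 2 * 2 * math.pi   # rad/s, two revolutions per second

power = 0.5 * rho * Cd * d * omega**3 * L**4 / 4
print(f"aerodynamic power ~ {power:.1f} W")   # a few watts
```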
February 5th is Digital Learning Day – a day that celebrates effectively using technology to strengthen a student’s learning experience and provide opportunities for individualized instruction. We asked one of our expert instructional math coaches, Ed L., to share a fun lesson idea to celebrate! Check out his suggestions to teach abstract concepts using a free resource that empowers students to create and share simulations. You can also check out Ed’s Game Based Learning series for more ways to engage students. Students often have difficulty with theoretical or abstract concepts. Many benefit from being able to play with the concepts in a simulated environment. NetLogo is a free resource with a supportive community that allows creation and sharing of learning simulations that teach by experimentation and play. You can use simulation environments in many ways. NetLogo offers wonderful opportunities to either create your own simulation or use an existing one. Existing simulations on NetLogo are listed by academic concentration. Each model offers suggested uses with students, including instructions, a list of inquiry questions to guide student learning, and suggestions on how to extend learning. Models are designed to be used by individuals, small groups, or full classrooms. Teachers I work with have found success using the following sequence: - Demonstrate how the model works. - Propose an inquiry question and have students record their predictions. - Students share predictions with peers and then modify as desired. - Run the simulation to test the hypothesis. - Students individually reflect on their prediction and the observed results. - Repeat steps 2-5, with each question or challenge increasing the depth of learning and exploration. Even if you have no programming background, it is easy and fun to create a simulation with NetLogo. Just follow these steps: - Clarify the field of exploration to specifically highlight the objectives of the simulation. - List key variables and the range of values that should be allowed. - Create the sliders and interface for those values. - Set up the programming flow for the simulation (this step may need to be broken into a planning stage and then an actual programming stage for some learners). - Test the simulation. - Document resources for how others use the model, questions or challenges, and extensions or variations. Whether teachers choose to use existing models or encourage the creation of new ones, the depth of learning available in NetLogo is extensive. Student motivation increases while the playful environments retain a structured purpose. Share your experiences with others and join the community of those learning with simulations through NetLogo!
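NetLogo is the tool named above; for readers who want the flavor of the predict-run-observe cycle without it, here is a minimal agent-based sketch in Python. The scenario (random walkers on a line) and the parameter names are our own illustrative choices, not a NetLogo model.

```python
import random

# Tiny agent-based simulation: walkers step left or right at random.
# num_walkers and num_steps play the role of the interface "sliders".

def run_simulation(num_walkers: int = 50, num_steps: int = 100) -> float:
    """Average distance of the walkers from the origin at the end."""
    positions = [0] * num_walkers
    for _ in range(num_steps):
        positions = [p + random.choice((-1, 1)) for p in positions]
    return sum(abs(p) for p in positions) / num_walkers

# Inquiry question: if we quadruple the steps, does the average
# distance quadruple? Students predict, then run the test:
for steps in (100, 400):
    print(steps, round(run_simulation(num_steps=steps), 1))
# Typical result: the distance roughly doubles rather than quadruples;
# it grows like the square root of the step count.
```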
The change from ancient to medieval costume began (c.400) with the disintegration of the Roman Empire. Roman attire, which had previously assimilated the elaborate features of Byzantine dress, was gradually affected by the austere costume of the barbaric invader. Both men and women wore a double tunic; the under tunic, or chemise, had long tight sleeves (a feature that remained until the 17th cent.) and a high neck; the girded wool overtunic, or robe, often had loose sleeves. A mantle, or indoor cloak, was also worn. After 1200 a great variety of fine fabrics from the East were available as a result of the Crusades, and the elegant dress of feudal Europe was evolved. With the introduction of various ways of cutting the basic garment, fashion, or style, began. A long, girded tunic, then called the cote or cotte, continued to be worn over the chemise by both men and women; a surcote (sleeveless and with wide armholes) was often worn over it. At this time family crests, or coats of arms (see blazonry; heraldry; crest), became popular, and particolored garments came into vogue. Proper fit was increasingly emphasized, and by 1300 tailoring had become important and buttons had become useful as well as ornamental. The belted cote-hardie, with a close-fitting body and short skirt, was worn over a tighter, long-sleeved doublet and a chemise. And, as men's legs were now exposed, hose were emphasized. The introduction (c.1350) of the houppelande, or overcoat, marked the first real appearance of the collar. Over a chemise and corset women wore a gown with a V neck and a long, flowing train; the front of the skirt was often tucked into the high-waisted belt. In its extreme, the style of the period was typified by profuse dagging (scalloped edges), exaggerated, hanging sleeves, pointed slippers, and fantastic headdresses (see headdress and veil).
Every culture includes one or more holidays that are rooted in its past. In the United States, for example, one of the most celebrated holidays is Thanksgiving. Annual traditions typically include: dinners with roasted turkey, potatoes, and pumpkin pie; decorations with multi-colored Indian cob corn; and school plays re-enacting the First Thanksgiving. But do these traditions accurately reflect what happened? Well, maybe. In 1621, the pumpkin pie was more likely just boiled pumpkin and there were no potatoes. Turkey may have been on the menu, but for certain the feast also included eel, clams, and venison. During this week's lesson, you will investigate what is fact and fiction about the First Thanksgiving. Along the way, you will gain skills in tracking down the past through history's clues. Investigating the First Thanksgiving Begin your journey at Plimoth Plantation's You are the Historian: Investigating the First Thanksgiving. Click to open and watch the Intro. Meet Dancing Hawk and Sarah, who will help guide you, and then Enter to begin your investigation. Start at the top with Fact or Myth? Read the choices and see if the true answer matches yours, and then, Enter. Read Sarah's explanation, and then uncover the myths by dragging each myth to its matching illustration. When you are done, visit the expert, Historian Kathleen Curtin, to hear her explain the difference between "history" and the "past." Discuss the difference. Return Home to select your next task, reviewing The Evidence. Reveal the answer to the question, and then Enter. Read and write down the definition of primary source. Listen to the letter being read or use the magic lens to interpret the letter. What does the letter reveal? What mysteries does it not answer? How do historians fill in the gaps? Click the highlighted phrases to read the historian's best guesses. When you have read all of the notes, click on Dancing Hawk to visit the expert. Jolyon Rollins explains further how to interpret primary sources. Return Home to visit The Wampanoag People and The English Colonists. What do these primary sources tell us about life in 1621? In what ways did the native and English peoples look at the world differently? How do their viewpoints and basic needs compare to your own? Next, follow the Path to 1621—along the way, you will learn about the events leading up to the shared harvest feast. What do the sources reveal about each group's viewpoints? How do these histories contribute to our understanding of past events? Lastly, at the Plantation, share what you've discovered about the 1621 event by creating and printing one of the exhibit projects. Investigating the First Thanksgiving is just one example of how historians use, analyze and interpret primary sources. From here, let's broaden your lens and build your historian skills further by visiting George Mason University's World History Sources site. Begin by Finding World History. Read the framing essay to learn about how to evaluate historical resources on the Web. Why is the context of where or how you find historical information important? Why is it important to cross-reference information? Return to the section's main menu and explore one or more of the Regions or Time Periods. Read the summary about each of the linked sources and click to visit a few of their Web sites. In one sentence, describe the types of primary sources and the information's context at each site.
In what ways does the information provided by each site contribute to historical understanding of that region? Now, get some skills at Unpacking the Evidence. Thoroughly explore each of the guides, and create a reference sheet for later use that summarizes each type of primary source, how to analyze it, how to consider context, and how to best interpret the information it provides. Review the Analyzing Documents section to follow some examples of how history scholars use the evidence they find in each of these types of sources to interpret the past. Read through current issues of The Cincinnati Enquirer and identify an event for which three or more primary sources may be generated—maybe there was an election, a parade, a music concert, a severe storm, etc. Select one event, and then think about and write down possible historical sources related to that event. From an election, for example, the newspaper's text and photos serve as one type of source, but also the winning and losing candidates may each keep a personal diary, or maybe there are banners or flyers promoting the candidate that someone may hold onto as keepsakes, or perhaps a blog may provide a variety of viewpoints about the candidates or Election Day overall. Based on your list, collect information from three or more sources. If you cannot collect directly from a source, then create something that could serve that purpose. For example, based on what you know about the candidate from other sources, you could write a personal account from the viewpoint of the winning or losing candidate. From an historian's perspective, create an exhibit that interprets the sources and describes the event. Present your exhibit to your classmates and discuss any gaps. If desired, repeat the exercise by finding events in the newspaper's archives from five or more years ago.
The aging of star clusters is linked more with their lifestyle than with how old they actually are, according to a new NASA/ESA Hubble Space Telescope study coauthored by Steinn Sigurdsson, professor of astronomy and astrophysics at Penn State. "Our observations of star clusters have shown us that, although they all formed over ten billion years ago, some of them are still young at heart," Sigurdsson said. "We now can see how fast the clusters are racing toward their final collapse. It is as if each cluster has its own internal clock, some of which are ticking slower than others." Sigurdsson is a Penn State theorist working in collaboration with the European Research Council's Cosmic-Lab project. The study is published in the current issue of the journal Nature. Globular clusters are spherical collections of stars, tightly bound to each other by their mutual gravity. The roughly 150 globular clusters in the Milky Way contain many of our galaxy's oldest stars. These 12-to-13-billion-year-old relics of the early universe are nearly as old as the universe itself. "Although these clusters all formed billions of years ago, we wondered whether some clusters might be aging faster or slower than others," said Francesco Ferraro of the University of Bologna in Italy, the leader of the team that made the discovery. "By studying the distribution of a type of blue star that exists in the clusters, we found that some clusters had indeed evolved much faster over their lifetimes, and we developed a way to measure the rate of their aging." Star clusters form in a short period of time, so all the stars within them tend to have roughly the same age. Because bright, high-mass stars burn up their fuel quite quickly, and globular clusters are very old, the clusters should contain only low-mass stars. But Sigurdsson and his colleagues discovered that, in certain circumstances, stars can be given a new burst of life. "Stars can receive extra fuel that bulks them up and substantially brightens them if one star pulls matter off a neighbor, if two neighboring stars merge together, or if two stars collide," Sigurdsson said. These reinvigorated stars have a large mass and high brightness. They are called blue stragglers because they are blue in color and their evolution lags behind that of their neighbors. Blue stragglers are the only stars that combine high mass and high brightness within clusters. Heavier stars sink like sediment toward the center of a cluster as the cluster ages. The high-mass blue stragglers are strongly affected by this process, and their brightness makes them relatively easy for astronomers to observe. To better understand cluster aging, the team mapped the location of blue-straggler stars in 21 globular clusters, as seen in images from the Hubble Space Telescope, the European Southern Observatory's MPG/ESO 2.2-meter telescope, the Canada-France-Hawaii telescope, and the Subaru Telescope of the National Astronomical Observatory of Japan. Hubble provided high-resolution imagery of the crowded centers of 20 of the clusters, while images from ground-based telescopes gave a wider view of their less-busy outer regions. Analyzing the observational data, the team found that a few clusters appeared young, with blue-straggler stars distributed throughout, while a larger group appeared old, with the blue stragglers clumped in the center.
A third group was in the process of aging, with the stars closest to the core migrating inwards first, then stars ever further out progressively sinking towards the center. "Since these clusters all formed at roughly the same time, this study reveals big differences in the speed of evolution from cluster to cluster," said Barbara Lanzoni at the University of Bologna, a co-author of the study. "In the case of fast-aging clusters, we think that the sedimentation process can be complete within a few hundred million years, while for the slowest it would take several times the current age of the universe." As a cluster's heaviest stars sink into the center, it eventually experiences a phenomenon called core collapse, where the center of the cluster bunches together extremely densely. The processes leading toward core collapse are rather well understood, and revolve around the number, density and speed of movement of the stars. However, the rate at which they happen was not known until now. "This study provides the first evidence, based totally on data from observations, of how quickly different globular clusters age," Sigurdsson said.
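The "internal clock" language can be made concrete with the standard back-of-envelope quantity governing mass segregation, the half-mass relaxation time. The sketch below uses the textbook formula; the star count, mean stellar mass, and half-mass radii are assumed example values, not measurements of any of the 21 clusters in the study.

```python
import math

# Half-mass relaxation time,
#   t ~ 0.138 * N^(1/2) * r^(3/2) / (m^(1/2) * G^(1/2) * ln(0.4 N)),
# the timescale on which heavy stars sink to the cluster centre.
# All inputs below are assumed, illustrative values.

G = 4.301e-3  # gravitational constant in pc * (km/s)^2 / Msun

def relaxation_time_myr(n_stars: float, r_half_pc: float,
                        m_star_msun: float = 0.5) -> float:
    """Half-mass relaxation time in Myr."""
    t = (0.138 * math.sqrt(n_stars) * r_half_pc**1.5
         / (math.sqrt(m_star_msun * G) * math.log(0.4 * n_stars)))
    return t * 0.978  # 1 pc / (1 km/s) is about 0.978 Myr

# Two clusters with the same star count but different sizes "age" at
# very different rates:
print(f"compact (r=1 pc):  {relaxation_time_myr(5e5, 1.0):,.0f} Myr")
print(f"diffuse (r=10 pc): {relaxation_time_myr(5e5, 10.0):,.0f} Myr")
```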
The History of Martin Luther King Day Who originated the idea of a national holiday in honor of MLK? by Shmuel Ross and David Johnson It took 15 years to create the federal Martin Luther King, Jr., holiday. Congressman John Conyers, Democrat from Michigan, first introduced legislation for a commemorative holiday four days after King was assassinated in 1968. After the bill stalled, petitions endorsing the holiday, containing six million names, were submitted to Congress. Conyers and Rep. Shirley Chisholm, Democrat of New York, resubmitted King holiday legislation each subsequent legislative session. Public pressure for the holiday mounted during the 1982 and 1983 civil rights marches in Washington. Congress passed the holiday legislation in 1983, and it was signed into law by President Ronald Reagan. A compromise moving the holiday from Jan. 15, King's birthday, which was considered too close to Christmas and New Year's, to the third Monday in January helped overcome opposition to the law. National Consensus on the Holiday A number of states resisted celebrating the holiday. Some opponents said King did not deserve his own holiday—contending that the entire civil rights movement rather than one individual, however instrumental, should be honored. Several southern states include celebrations for various Confederate generals on that day. Arizona voters approved the holiday in 1992 after a tourist boycott. In 1999, New Hampshire changed the name of Civil Rights Day to Martin Luther King, Jr., Day.
Since the retreat of the great glaciers about 10,000 years ago, Aboriginal populations have inhabited the BC landscape. BC's first people may have journeyed to the region from Asia via a land bridge across the Bering Sea. As the ice receded, forests advanced and fluctuating sea levels exposed the temporary land passage linking Asia to the New World. It is thought that BC's coastal region became one of the most densely populated areas in North America. Prior to European contact, BC's First Nations populations may have numbered some 300,000. The Aboriginal way of life would continue undisturbed for thousands of years, until the arrival of the British in 1778. When British naval explorer Captain James Cook reached the west coast of Vancouver Island in 1778, he was eager to trade with the Nuu-chah-nulth (Nootka) people. In his wake, waves of European settlers arrived, carrying smallpox and other diseases that decimated Aboriginal populations in the late 1700s. Nearly a century later, British agent James Douglas was searching the Pacific Coast for a new Hudson's Bay Company headquarters. He was welcomed by the Lekwammen, whose villages dotted the shores of what is now Greater Victoria. Douglas selected a site called Camosack. A year later, in 1843, Fort Victoria was built in the area now known as Old Town, the heart of Victoria's downtown. Gold Rush in BC The discovery of gold in the Fraser River and the Cariboo brought a rapid influx of prospectors, merchants, pioneers and other colourful figures to BC in the 1860s. They came from around the world, arriving from as far away as China. It was a time of rapid economic expansion; sleepy hamlets became bustling cities, and new roads, railways and steamships were constructed to carry the extra load. Boomtowns were born and legends made, but not all experienced good fortune. The Aboriginal peoples lost most of their ancestral lands and, in 1876, First Nations populations were made subject to the federal Indian Act, which regulated every aspect of their lives. Rapid Expansion in BC Transportation and development marked another period of rapid economic expansion during the 1950s and 60s. Massive building projects changed the shape of the BC landscape. Expansive damming projects turned rivers into lakes; giant turbines powered dozens of new pulp mills and smelters; and the Trans Canada Highway was completed, while new bridges, railways, and BC Ferries linked land, people and technological progress. BC's Cultural Diversity Today, BC's population is wonderfully diverse. More than 40 major Aboriginal cultural groups are represented in the region. The province's large Asian communities have made Chinese and Punjabi the most spoken languages after English. There are also sizeable German, Italian, Japanese and Russian communities – all creating a vibrant cultural mosaic in which distinct cuisine, architecture, language and arts thrive. In 1986 the City of Vancouver celebrated its centennial, hosting the Expo '86 World Exposition. That same year, the Sechelt Indian Band became the first Aboriginal group in BC to gain a municipal style of self-government. In 2000, the Nisga'a Treaty came into being. The Nisga'a Nation, which has lived in the Nass area since time immemorial, negotiated with the provincial and federal governments to achieve BC's first modern-day, constitutionally protected self-governance agreement. This marked a momentous achievement in the history of the relationship among British Columbia, Canada and First Nations.
In February and March 2010, Vancouver was the host city for the 2010 Olympic and Paralympic Winter Games.
TenMarks teaches you how to find and evaluate roots. Read the full transcript » Learn to Evaluate Roots In this lesson, let's learn how we find roots. Roots are the following: if we take a number and multiply it by itself, we get a product; 7x7 is 49. So when a number multiplied by itself gives a product, that number is a square root of the product. Seven is called a square root of 49. So if a number is multiplied by itself to get the product, the number is the square root of the product. So that’s the thing that we need to remember when we’re trying to find roots. So let's try and do a few problems. We need to find in Part A the square root of 49. Now this is the symbol for square root or root. If there is no small number beside it, that means it's the square root. So if we are trying to find the square root of 49, we need to find a number which when multiplied by itself will give us 49. Well, what’s that number? 7x7 gives me 49. So the square root of 49 is seven. All we need to do is find the number which when multiplied by itself will give you the product. The second thing we need to remember is if the square root of 49 equals seven, then 7x7=49. So, square root of 49 equals seven, 7x7=49. They are reversible operations. So, let's do Part B. Part B is we need to find the cube root of -125. The number three before the root sign signifies that instead of the number being multiplied by itself twice, it needs to be multiplied by itself three times. So which number when multiplied by itself three times will give us -125? Do we know what it is? It's actually -5: -5x-5 is +25, x -5 is -125. So -5 multiplied by itself three times gives us -125. So the cube root of -125=-5. That’s the answer we’re looking for. Let's do the third one right here, which is we need the square root of 1/4, which means what fraction when multiplied by itself gives us 1/4? Well, I just gave it to you: 1/2x1/2 gives us 1/4. Why? 1x1 is 1, 2x2 is 4. So 1/2 x 1/2 is 1/4, which means the square root of 1/4=1/2. It's the same exact principle: what number multiplied by itself will give us the value within the root? Well, the number is 1/2. Let's try a slightly more difficult problem which says evaluate the expression when A=9 and B=7. We need to take the square root of A+B. Well, what is A+B? Let's substitute. Instead of A, let's put in nine. Instead of B, let's put in seven. What do I get? 9+7 is 16, so we need the square root of 16. To find the square root of 16, we need to find a number which when multiplied by itself equals 16. Well, that’s four, right? 4x4 gives me 16. So the square root of 16 is 4. We simply find the number that when multiplied by itself gives you the number underneath the root sign.
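The lesson's examples are easy to verify in code. A minimal sketch follows; note that Python's ** operator returns a complex number when a negative base is raised to a fractional power, so the cube root of a negative number is taken by pulling the sign out first.

```python
import math

# The root examples from the lesson, checked in code.

def cbrt(x: float) -> float:
    """Real cube root, valid for negative inputs."""
    return math.copysign(abs(x) ** (1 / 3), x)

print(math.sqrt(49))     # 7.0, since 7 x 7 = 49
print(cbrt(-125))        # -5.0 (within floating-point rounding)
print(math.sqrt(1 / 4))  # 0.5, since 1/2 x 1/2 = 1/4

# The last example: evaluate sqrt(A + B) when A = 9 and B = 7.
A, B = 9, 7
print(math.sqrt(A + B))  # 4.0, since 9 + 7 = 16 and 4 x 4 = 16
```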
What is an ependymoma? An ependymoma is a tumor that arises from the ependymal cells, which line the fluid-filled spaces of the brain and spinal cord. Over 90 percent of ependymomas originate in the brain and 10 percent in the spinal cord. Ependymomas rarely spread (metastasize) beyond their site of origin. Ependymomas are classified as either supratentorial (in the cerebral hemispheres) or infratentorial (in the back of the brain). Variations of this tumor type include subependymoma, subependymal giant-cell astrocytoma, and malignant ependymoma. Ependymomas are the third most common type of central nervous system (CNS) tumor in children (following astrocytoma and medulloblastoma). Approximately 200 cases of childhood ependymoma are diagnosed in the U.S. each year. Ependymomas account for between six percent and 12 percent of brain tumors in children less than 18 years of age, but 30 percent of brain tumors in children less than three years of age. The average age at diagnosis is four to six years. What are the symptoms of an ependymoma? Symptoms of an ependymoma are determined by where it is located within the brain or spine. Symptoms of ependymoma in the brain typically include headache, weakness, visual field cut, double vision, or seizures. Ependymomas of the brain often cause hydrocephalus (too much cerebrospinal fluid), which can lead to headache, vomiting, poor balance, and decreased level of consciousness. Symptoms of spinal cord ependymoma may include weakness and/or loss of sensation, bowel or bladder dysfunction, and pain, possibly severe, in the lower back and legs. How is an ependymoma diagnosed? Patients suspected of having a brain or spinal cord tumor undergo a magnetic resonance imaging (MRI) scan of the brain and spine to further define the location of the tumor and to assess if there is any metastasis (spread) of the tumor. A biopsy of the tumor is required to make the final diagnosis of an ependymoma and to subtype the ependymoma according to World Health Organization guidelines. Patients with ependymoma of the brain require a spinal tap (lumbar puncture) to assess for any spread of the tumor through the spinal fluid. How is an ependymoma treated? The initial treatment for ependymoma is surgery. In general, the neurosurgeon will attempt to remove as much of the tumor as possible without causing damage to the normal brain. A complete resection, confirmed by post-operative MRI or computed tomography (CT) scan, often yields a favorable prognosis. Although total resection is optimal, it is not always possible because vital structures can be involved by the tumor. Children who have tumor spread within the cerebrospinal fluid (metastatic disease) benefit from craniospinal radiation therapy. Chemotherapy is reserved for patients with residual tumor following surgery (incomplete surgical resection), with the goal of shrinking the tumor and making it more amenable to a second surgery. Chemotherapy is also used in children younger than three years of age in an attempt to delay radiation therapy until they are older. About treatment for ependymoma at Children’s Hospitals and Clinics of Minnesota Our cancer and blood disorders program consistently achieves excellent results, ranking it among the top ten programs in the United States. Children’s Hospitals and Clinics of Minnesota treats the majority of children with cancer and blood disorders in Minnesota and provides patients access to a variety of clinical trials using ground-breaking new treatments.
Through our renowned program, patients experience unparalleled family support, a nationally recognized pain management team, and compassionate, coordinated care. If you are a family member looking for a Children’s neuro-oncologist, please call our clinic at (612) 813-5940. If you are a health professional looking for consultation or referral information, please call Children's Physician Access at 1-866-755-2121 (toll-free) and ask for the on-call hematologist/oncologist.
Dec. 29, 2000 A tiger's intimidating roar has the power to paralyze the animal that hears it, and that even includes experienced human trainers. Elizabeth von Muggenthaler, a bioacoustician from the Fauna Communications Research Institute in North Carolina, presented her research at the Acoustical Society of America meeting in Newport Beach, California on December 7. Bioacoustics is the study of the frequency or pitch, loudness, and duration of animal sounds to learn about an animal's behavior. At the meeting, von Muggenthaler discussed her work analyzing the frequency of tiger sounds to better understand the part of a tiger's roar that we can feel but can't hear. Why study something that we can't hear? "Humans can only hear some of the sounds that tigers use to communicate," says von Muggenthaler. "Humans can hear frequencies from 20 hertz to 20,000 hertz, but whales, elephants, rhinos, and tigers can produce sounds below 20 hertz." This low-pitched sound, called "infrasound," can travel long distances, permeating buildings, cutting through dense forests, and even passing through mountains. The lower the frequency, the farther the sound can travel. Scientists believe that infrasound is the missing link in studying tiger communication. In the first study of its kind, von Muggenthaler and her colleagues recorded every growl, hiss, chuff, and roar of twenty-four tigers at the Carnivore Preservation Trust in Pittsboro, North Carolina, and the Riverbanks Zoological Park in Columbia, South Carolina. The bioacousticians found that tigers can create sounds at about 18 hertz, and when they roar they can produce frequencies significantly below this. "When a tiger roars, the sound will rattle and paralyze you," says von Muggenthaler. "Although untested, we suspect that this is caused by the low frequencies and loudness of the sound." When the researchers played back a tape of recorded tiger sounds, including audible sounds and infrasounds, the tigers appeared to react: sometimes they would roar and leap towards the speakers, and sometimes sneak away. The next step for von Muggenthaler is to take the recorded infrasounds to scientists who can determine whether or not tigers can hear them. Von Muggenthaler hopes to learn more about tigers, protect them from extinction, and understand the unheard, paralyzing power in their roar. The above story is reprinted from materials provided by American Institute of Physics -- Inside Science News Service.
Illinois Learning Standards Stage D - Math Students who meet the standard can demonstrate knowledge and use of numbers and their many representations in a broad range of theoretical and practical settings. (Representations) - Represent, order, and compare decimals to demonstrate understanding of the place-value structure in the base-ten number system ** - Identify prime numbers through 100. - Recognize equivalent representations for decimals and generate them by composing and decomposing numbers (e.g., 0.15 = 0.1 + 0.05) - Represent fractions as parts of unit wholes, as parts of a set, as locations on a number line, and as divisions of whole numbers ** - Explore numbers less than zero by extending a number line and through familiar applications. * Students who meet the standard can investigate, represent and solve problems using number facts, operations, and their properties, algorithms, and relationships. (Operations and properties) - Describe classes of numbers according to characteristics such as factors and multiples. - Solve addition or subtraction number sentences and word problems using fractions with like denominators. - Solve multi-step number sentences and word problems using whole numbers and the four basic operations. - Select and use one of various algorithms to multiply and divide. Students who meet the standard can compute and estimate using mental mathematics, paper-and-pencil methods, calculators, and computers. (Choice of method) - Develop and use strategies (e.g., compatible numbers, front-end estimation) to estimate the results of whole-number computations and to judge the reasonableness of such results. ** - Estimate the sum or difference of a number sentence containing decimals using a variety of strategies. Students who meet the standard can solve problems using comparison of quantities, ratios, proportions, and percents. - Determine 50% and 100% of a given group in context. Students who meet the standard can measure and compare quantities using appropriate units, instruments, and methods. (Performance and conversion of measurements) - Measure angles using a protractor or angle ruler. - Measure with a greater degree of accuracy. - Convert U.S. customary measurements into larger or smaller units with the help of conversion charts. - Convert linear metric measurements into larger or smaller units with the help of a conversion chart. Students who meet the standard can estimate measurements and determine acceptable levels of accuracy. (Estimation) - Develop and discuss strategies for estimating the perimeters, areas, and volumes of regular and nonregular shapes. ** - Develop and use common referents for volume, weight/mass, capacity, area, and angle measures to make comparisons and estimates. Students who meet the standard can select and use appropriate technology, instruments, and formulas to solve problems, interpret results, and communicate findings. (Progression from selection of appropriate tools and methods to application of measurements to solve problems) - Select and apply appropriate standard units and tools to measure the size of angles.** - Determine the volume of a cube or rectangular prism using concrete materials. - Create an accurate representation of a polygon with a given perimeter or area. Students who meet the standard can describe numerical relationships using variables and patterns. (Representations and algebraic manipulations) - Identify a number pattern, both increasing and decreasing, and extend the number sequence.
- Determine the missing number(s) in a complex repeating pattern. - Construct and solve simple number sentences using a symbol for a variable. - Make generalizations given a specific pattern. - Create, describe, and extend patterns. - Describe a pattern with one operation, verbally and symbolically, given a table of input/output numbers. Students who meet the standard can interpret and describe numerical relationships using tables, graphs, and symbols. (Connections of representations including the rate of change) - Create a table that describes a function rule for a single operation. - Demonstrate, in simple situations, how a change in one quantity results in a change in another quantity (e.g., increase the measure of the side of a square and the perimeter increases). - Identify situations with varying rates of change using words, tables, and graphs (e.g., growth of a plant). ** Students who meet the standard can solve problems using systems of numbers and their properties. (Problem solving; number systems, systems of equations, inequalities, algebraic functions) - Solve problems with whole numbers using appropriate field properties. Students who meet the standard can use algebraic concepts and procedures to represent and solve problems. (Connection of 8A, 8B, and 8C to solve problems) - Solve one-step linear equations with one missing value in isolation and in problem solving situations. Students who meet the standard can demonstrate and apply geometric concepts involving points, lines, planes, and space. (Properties of single figures, coordinate geometry and constructions) - Identify, draw, and label lines, line segments, rays, parallel lines, intersecting lines, perpendicular lines, acute angles, obtuse angles, right angles, and acute, obtuse, right, scalene, isosceles, and equilateral triangles. - Identify, draw, and build regular, irregular, convex, and concave polygons. - Read and plot ordered pairs of numbers in the positive quadrant of the Cartesian plane. - Describe paths and movement using coordinate systems. - Differentiate between polygons and non-polygons. - Identify and label radius, diameter, chord, and circumference of a circle. - Explore and describe rotational symmetry of two- and three-dimensional shapes.** - Construct a circle with a specified radius or diameter using a compass. Students who meet the standard can identify, describe, classify and compare relationships using points, lines, planes, and solids. (Connections between and among multiple geometric figures) - Determine congruence and similarity of given shapes. ** - Explore polyhedra using concrete models. Students who meet the standard can construct convincing arguments and proofs to solve problems. (Justifications of conjectures and conclusions) - Make and test conjectures about mathematical properties and relationships and justify the conclusions. ** 9D is Not Applicable for Stages A - F. Students who meet the standard can organize, describe and make predictions from existing data. (Data analysis) - Represent data using tables and graphs such as line plots and line graphs. ** - Describe the shape and important features of a set of data and compare related data sets. ** - Arrange given data in order, least to greatest or greatest to least, and determine minimum value, maximum value, range, mode, and median for an odd number of data points. - Compare different representations of the same data and evaluate how well each representation shows important aspects of the data. *
- Propose and justify conclusions and predictions that are based on data. **

Students who meet the standard can formulate questions, design data collection methods, gather and analyze data and communicate findings. (Data Collection)
- Collect data using observations and experiments. **
- Propose a further investigation to verify or refute a prediction. **

Students who meet the standard can determine, describe and apply the probabilities of events. (Probability including counting techniques)
- List all possible outcomes of a single event and tell whether an outcome is certain, impossible, likely, or unlikely.
- Describe the probability of an event using terminology such as "5 chances out of 8."

* National Council of Teachers of Mathematics. Principles and Standards for School Mathematics. Reston, Va.: National Council of Teachers of Mathematics, 2000.
** Adapted from: National Council of Teachers of Mathematics. Principles and Standards for School Mathematics. Reston, Va.: National Council of Teachers of Mathematics, 2000.
- What do we learn?
- Schooling America and Current Events
- Lego Pulley Activity: One of the lifts just broke down in a critical area at the Big Dig. It will take a week to repair the lift, and construction must go on. Bags of concrete that weigh 75 pounds each must be lifted from the ground to a platform 100 feet in the air. One of the engineers on the project suggests that they use pulleys to lift the bags onto the platform. Unfortunately, the only suitable counterweight available for the task weighs 50 pounds. Sketch out a pulley system that will allow them to lift the 75-pound bags of concrete with 50-pound weights. (One way to check a candidate design against the ideal-pulley equations is sketched just after this list.)
- Instructions for Assessment: Be fair, and consider how the scale (1–7) translates into a grade out of 10 points (which is 1/12 of the final grade). Consider the original assignment...Consider the pulley and gear designs that you worked on over the last week. How did you and the other group members attempt to come up with a solution to the problem? If you were not able to come up with a solution, what prevented you from doing so? If you did come up with a solution, how did you arrive at it? What were the intermediate steps, and how did you get from one step to another? Present evidence about what your partners were thinking during this process. You should include your own explanations of your understanding of these systems, and your understanding at the end of the experiment. Reference chapter 2 of How People Learn.
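One way to sanity-check a candidate design is an ideal-pulley force balance. The sketch below is my own illustration, not part of the original activity: it assumes frictionless, massless pulleys and one particular arrangement (a single movable pulley attached to the load), which is one valid solution among several.

```latex
% Ideal pulley check (frictionless, massless pulleys assumed).
% A single movable pulley attached to the load gives mechanical advantage MA = 2,
% so the effort needed to hold a 75 lb bag is
\[
F_{\text{effort}} = \frac{W_{\text{load}}}{\mathrm{MA}} = \frac{75~\text{lb}}{2} = 37.5~\text{lb} \leq 50~\text{lb}
\]
% The trade-off is distance: the 50 lb counterweight must pay out 200 ft of rope
% for the bag to rise 100 ft, consistent with the work balance
\[
50~\text{lb} \times 200~\text{ft} = 10{,}000~\text{ft·lb} \;\geq\; 75~\text{lb} \times 100~\text{ft} = 7{,}500~\text{ft·lb}.
\]
```

A design fails this check whenever the required effort exceeds 50 pounds, which is why a single fixed pulley (MA = 1, effort 75 lb) cannot do the job on its own.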
Growing Independence and Fluency

Rationale: It is important for children to learn to read fluently and expressively. When children learn to read with expression, they become more confident readers. This lesson will help children learn to read with expression through using whole texts.

Materials: chalk, chalkboard, one copy of The Gingerbread Man retold by Jim Aylesworth and published by Scholastic, 1998, enough age-appropriate books for each child in the class (must be books that can be read with expression), and one sheet of paper for each child

1) Tell the children that there are many things that we can do to become better readers. "One of these things is to read with expression. Can anyone tell me what expression means? That's right! It means making the way we read more interesting for the people who are listening to us. We can do this by changing how loud or soft our voice is, by changing how fast we read, or by changing the pitch of our voice. Today we are going to practice these different ways of expressing our reading."

2) Ask the children: Has anyone ever heard someone read a story that was exciting because of the way they used expression? Explain that stories are more exciting when expression is used.

3) Then take out The Gingerbread Man, and model reading to the children without using expression. Was the story exciting? How can I make it sound better? Let's make a list on the board of some expressions I could use to make the story more exciting.

4) After completing the list, reread The Gingerbread Man, modeling the expressions on the board. Ask the children: Which story was more exciting?

5) Next, divide the children into pairs. Give each group a different age-appropriate book. Say: I want each person in the group to read the book I handed you to your partner without using expression. When you are finished reading, make a list of different expressions you should use while reading the book. Then, both of you need to reread the book using the expressions on the list. Remind them that if they are having trouble decoding a word, they can cover up part of it and sound it out, then cover up the other part of it and sound it out.

6) When the groups are all finished, I will have each one come up to the front of the class to read their book using expressions. Then they should show the class the list of expressions they made. "Everyone did a great job reading with expressions."

7) For assessment, each child will choose an age-appropriate book that they want to read. They will read it and then make a list of expressions (like we did before). When they finish, each child will come up to the teacher's desk at different times to read their book with expression. The teacher can then determine whether each child understands the reading-with-expression lesson by using a checklist for assessment that specifies: No Expression / Some Expression / Great Expression.

Adams, Beginning to Read (1990), pp. 90-92.
Curriculum Terms and Concepts

Curriculum Terms and Concepts: Elements of a complete Teaching Guide or Curriculum Plan. Familiarize yourself with these elements. Each element is linked to an example of that element in the Sample Teaching Guide. The Sample Teaching Guide will open in a new window; you might have to switch to it manually. Open the Teaching Guide now, switch back and forth between the examples and these descriptions, and then use the template to create your own guide.

- Aim: one-sentence (more or less) description of the overall purpose of the curriculum, including audience and the topic.
- Rationale: paragraph describing why the aim is worth achieving. This section would include an assessment of needs.
- Goals and objectives: list of the learning outcomes expected from participation in the curriculum. This section includes a discussion of how the curriculum supports national, state, and local standards.
- Audience and pre-requisites: describes who the curriculum is for and the prior knowledge, skills, and attitudes of those learners likely to be successful with the curriculum.
- Description of subject-matter: designation of what area of content, facts, or arena of endeavor the curriculum deals with. (This is a further elaboration of the "topic" description in the Aim.)
- Instructional plan: describes the activities the learners are going to engage in, and the sequence of those activities. Also describes what the TEACHER is to do in order to facilitate those activities. (This is like the traditional "lesson plan," except that for a curriculum it may include more than one lesson.)
- Materials: lists materials necessary for successful teaching of the curriculum, including a list of web pages. Often, the web site will NOT be the only materials needed by the students. They may need books, tables, paper, chalkboards, calculators, and other tools. You should spell these additional materials out in your teaching guide. Also includes the actual materials (worksheets and web pages) prepared by the curriculum developer, any special requirements for classroom setup and supplies, and a list of any specific hardware and software.
- Plans for assessment and evaluation: includes the plan for assessing learning and evaluating the curriculum as a whole. May include a description of a model project, sample exam questions, or other elements of assessment. Also should include a plan for evaluating the curriculum as a whole, including feedback from learners.
Note to teachers

The overall goal in this exhibit is to give students and teachers a glimpse into a moment of discovery. The discovery of optical pulsars may be the only example of a significant discovery documented by a tape recorder left running for other purposes. The listener is privileged to hear an event as it happened, not the staging of an event.

This exhibit allows students and teachers to recognize scientists as people. The thrill of discovery provides a human element to which everyone can relate. As Cocke and Disney check their results and share their excitement, they are people engrossed in science, not humorless, unfeeling machines recording data.

This exhibit also provides a compact view of the scientific process in operation. Experimental science is presented in a true context. The astrophysicists are constantly observing, manipulating equipment, and then repeating the observation in order to be certain of their results. Mathematics, data and instruments are all checked. Only after they have tried every way they can think of to make their discovery "go away," and still find it staring at them, are the scientists satisfied.

This exhibit can also be an opportunity for professional development. Science teachers can strengthen their background in pulsars and neutron stars -- one of the most fascinating new fields of astronomy -- through self-study of the module and linked Web sites. Teachers can better understand how scientists struggle to make sense of nature as they listen to the scientists themselves describe their involvement.

- In an astronomy course: The unit can be used during a discussion of stellar evolution or at any other point where the topic of pulsars is specifically addressed.
- In a physics course: The unit can be used in connection with discussions of gravity, density of matter, conservation of angular momentum, or especially when discussing the different forms of electromagnetic radiation. It may also be used as a brief excursion into scientific discovery or scientific method.
- In a history or philosophy of science course: This unit can be used as a true account of one type of discovery in science.

We need your feedback so we can do more exhibits like this! Both our funding and our enthusiasm could falter if we don't hear from users. Please e-mail us or use the online form to tell us how useful this was to you (a brief word is great, comments and suggestions better still).
When it comes to geography, we can describe any place on a map using the following five themes:

Location: Where are things located? A location can be specific (for example, stated as coordinates of longitude and latitude, or as a distance from one place to another) or general (it's in the Northeast). Lines of latitude are imaginary lines running parallel to the Equator, and lines of longitude are imaginary lines running from pole to pole, measured east and west of the Prime Meridian. We will learn in detail about latitude and longitude, the Equator, and the Prime Meridian in the mini lessons section.

Place: What makes a place different from other places? Differences might be defined in terms of climate, physical features, or the people who live there and their cultures.

Human-environment interaction: What are the relationships among people and places? How have people changed the environment to better suit their needs?

Movement: What are the patterns of movement of people, products, and information? A study of movement includes learning about major modes of transportation used by people, an area's major exports and imports, and ways in which people communicate or move ideas.

Regions: How can Earth be divided into regions for study? Regions can be defined by a number of characteristics, including area, language, political divisions, religions, and vegetation (for example, grassland, marshland, desert, rain forest).

Most maps involve business, trade, geography, and political boundaries such as neighboring countries, rivers, mountains, and the landmarks near a particular selected area. Whenever you need to find the important landmarks of a place that interests you, a geographical map is an easy way to do it. If you know how to locate a map of a region, you can easily get all the geographical and natural resources information of that particular region.

Caleef and Vornya: The "official" theory is that Hernan Cortes named California after an imaginary island in a popular Spanish novel of the time. We have heard through the years that the name of California came from the two Spanish words that mean 'hot furnace': 'Caliente Fornace.' Is this true?

A myth about our 'California' state: The name California was derived from 'Caliente' and 'Fornace.' About 25 years ago, a historian noted that it has reference to the Amazon women who inhabited the island. He pointed out that the name was originally derived from the natives of the West Indies, who used two words that sounded like 'Caleef and vornya,' which meant 'to give of oneself before reaping from the land.' The myth was that those who tried acquiring from the land before showing their talents would be baked to death by the sun. It is interesting not only how the Spanish words for hot furnace figure into this, but that many who came looking for gold died in the hot sun as they were trying to leave. Furthermore, California and Florida have among the highest rates of skin cancer in the world, attributed directly to the sun.
Adolescence is the stage between childhood and adulthood. Puberty marks the beginning of adolescence and is the period of time when males and females become physically able to reproduce. Secondary sex characteristics begin to appear, such as breasts in the female and body hair and muscles in the male. Physical changes mark the beginning of puberty. Until puberty, males and females are about the same size and shape. As they enter puberty, there is great variation in size. It is important to remember that these differences are perfectly normal. Puberty is the result of hormones, chemicals released into the bloodstream that travel to other organs and cause growth. The male hormone is testosterone, and the female hormones are estrogen and progesterone.

During adolescence, the brain matures and reaches its full size. The individual's capacity to think and reason increases, and memory and memory span increase as well. Emotional changes are more uneven. Many teens describe puberty as being on a roller-coaster of emotions. They can be happy one moment, sad the next. The extra surges of energy and emotion are caused by changes taking place in the body.

Social changes matter just as much. These involve the developing relationships between adolescents and their peers. Peer acceptance becomes very important. Forming healthy relationships is part of social change. This is different for every person, based on their personal identity and self-concept.
WHAT do you call a fish without an eye? Clue: the answer isn't "fsh". At least, not in the darkest depths of the UCL Anatomy building, where the Zebrafish Research groups keep many thousands of these tiny striped fish. Aside from fish carrying mutations that lead to their offspring missing part of their eyes or their brains, there are transparent fish as well as the somewhat more attractive glow-in-the-dark fish.

Anxious to find out what could possibly be the scientific value of fluorescent fish, I attended an OpenLabs session hosted by Dr Matina Tsalavouta in Steve Wilson's Research Group in conjunction with the Synthetic Biology Society. Prof Wilson's research group is composed of three subgroups: one studying brain asymmetry, one studying eye development, and one developing an atlas of the zebrafish brain.

Zebrafish have been a big hit with researchers for several reasons: they're small, hardy, and rapidly reach maturity. They also have the amazing ability to regrow their own heart, fins, and skin, which makes them of particular interest to stem cell and regeneration projects. Zebrafish are extremely easy to breed, as their breeding behaviour is regulated by exposure to light, meaning you simply need to switch off the lights for a few hours, and when the lights come on again you get baby zebrafish. There is a slight hitch, however – zebrafish will eat their own eggs if left alone with them, so they have to be kept in special breeding boxes which keep them separated from their eggs.

But how do you make fish glow in the dark? The green fluorescent protein (GFP) gene – termed a "glow in the dark gene" – can literally highlight areas of interest within the fish. The gene construct can be injected into the fish embryo at the single-cell stage. As the embryo grows, the GFP construct becomes incorporated into its genome, and as a result you get a fluorescent fish. Researchers are then able to take high-quality and often strangely beautiful images of, for example, the fish brain. Transgenic fish with other types of fluorescent protein can be used to visualise brain activity, allowing scientists to see when and where a set of neurons is active under different experimental conditions. They can also track which parts of the brain are activated in response to certain stimuli when the fish are developing normally, or when certain genes do not function or tissues are compromised.

The use of GFP is incredibly versatile; it can also be inserted into individual brain cells, allowing researchers to detect the connectivity of a single neuron within the brain. Other cell types can also be labelled, such as macrophages, a special type of white blood cell involved in the immune response. Dr Tsalavouta showed us a video, taken in another lab, of a single glowing macrophage rushing to the site of injury in a zebrafish body. Whilst this may look like something from an avant-garde film, this kind of imaging is invaluable to researchers, providing a clarity which would not be possible without the use of fluorescent proteins.

And it turns out zebrafish aren't just popular at UCL – they are used in labs across the world, including those of the British Heart Foundation, which last year began a project investigating the zebrafish's ability to regrow its own tissue from stem cells. Given that zebrafish have been used as a model organism for over 30 years, it is perhaps surprising that we actually know relatively little about the detailed neuroanatomy, connectivity and circuitry formation of their brain.
Dr Tsalavouta told us about the work of another group within Prof Wilson’s laboratory which is developing an online, high-resolution atlas of the developing zebrafish brain, which aims to serve as a tool for research in labs across the world. Other recent zebrafish studies at UCL have revealed how our eyes take shape, and how the tissues of the eye regulate their growth, helping us to better understand the mechanisms behind tumour growth. As an undergraduate it’s all too easy to forget that UCL is a world-leading research centre, and I came away from the hour-long OpenLab session happy to have gained some insight into the world of cutting-edge research that takes place under our very noses. Synthetic Biology Society would like to thank Dr Steve Wilson and Dr Matina Tsalavouta for making this OpenLabs event possible. To find out more about the next OpenLabs series, head to synbiosoc.org.
In Johnson v. McIntosh (1823), the Court resolved the question of territorial possession and the theory under which it worked. That decision continued the long-practiced European tradition of claiming lands by right of discovery as long as they were unoccupied by Christian peoples. When the Spanish conquered the Americas in the 1500s, Spanish theologians argued before the Spanish Court the legality and morality of Spanish intrusion into the Americas. The decision eventually reached was that Europeans had the right to intrude peacefully into indigenous lands, but that Native peoples retained rights of occupation of those lands. Marshall adopted that theory but amended it with his own thinking, asserting that U.S. title to lands claimed in North America descended directly from English, French, and Spanish titles based on right of prior discovery and settlement. But the United States obtained only the exclusive right to extinguish Native title to lands. The U.S. title to the lands, then, depended on extinguishing Native rights in the soil. Indian rights, Marshall asserted, were not extinguished by European discovery, but merely "impaired." The consequences of this decision were to diminish Native groups' vested rights in their lands in exchange for recognition of some sort of political sovereignty, undefined legally or constitutionally by that decision. The other two important decisions, Cherokee Nation v. Georgia (1831) and Worcester v. Georgia (1832), resolved the question of sovereignty, at least from the U.S. perspective, although Native peoples may have had other opinions. The two cases arose over removal of all Native groups from east of the Mississippi River to the region known as Indian Territory or into the Great Plains generally. In Johnson v. McIntosh (1823), Marshall had left open the question of the relationship between Indigenous tribes and the U.S. government. In Cherokee Nation v. Georgia (1831), he resolved that question. The state of Georgia had in the late 1820s enacted a series of laws extending its authority over the Cherokee Nation's reservation in the northwestern corner of the state. State officials used violence to force Cherokee leaders to submit to their authority. The Cherokee resorted to American law to challenge the issue. In 1830 the Cherokee brought suit directly in the U.S. Supreme Court demanding that as a foreign power their rights be protected by the federal government from state encroachment. For a number of reasons, the Supreme Court denied the writ the Cherokee sought. Marshall stated that the Court did not have jurisdiction, because the Constitution gives the Court original jurisdiction in only two types of cases. The Cherokee had sued under one of those types: the right of a foreign government to sue a state government for infringement of its rights or powers. Marshall denied that the Cherokee Nation was a foreign power even though British colonial policy had operated as if Native peoples of eastern North America were separate, foreign powers. The United States, as an infant nation after 1783, had continued that tradition, behaving as if indigenous nations were foreign powers to be dealt with formally through diplomatic treaties. The Constitution made no clear statement about this question, so the Court had the opportunity at this time to resolve the issue. 
Relying on his own nationalism and the McIntosh decision, which suggested a different relationship, Marshall declared the Cherokee Nation—and, by extension, all other Native nations—to be "domestic dependent nations," choosing the analogy of "a ward to his guardian" to describe the relationship between indigenous groups and the U.S. government—a relationship on which most future federal government responsibilities for Native affairs rested. Because the Cherokee were not in the Court's eyes a foreign power, they could not sue under the original jurisdiction clause describing the Supreme Court's powers in the Constitution.

The Worcester v. Georgia (1832) decision, however, did support the Cherokee; but the decision arose from Marshall's sense of nationalism and his concept of the power of the federal government, not from his perception of rights or justice inherent in the Native position. In Worcester, the Court upheld a challenge to Georgia's authority. Rev. Samuel Worcester and several other missionaries had gone by Cherokee invitation onto the Cherokee reservation, ignoring Georgia's prohibition of any whites entering the reservation without express permission from state authorities. The missionaries were arrested, tried, and sentenced to four years' hard labor. They appealed in late 1831 to the Supreme Court, which rendered its decision in early 1832. The Court held that Georgia had no right to interfere with what were purely the domestic affairs of the Cherokee. Because the Constitution seemed to say that all relations between Natives and whites rested in the hands of the federal government, Georgia had no right to interfere with Cherokee policy. The Court found unconstitutional all of the laws Georgia had passed over the Cherokee Nation and struck down the state's conviction of Worcester and the other missionaries. But the president of the United States, Andrew Jackson, a strong believer in the removal of Indians, refused to execute the Court's decision, and thus it became meaningless. By 1835 the U.S. government had forcibly removed the Cherokee to Indian Territory, although a small portion of the population found refuge on a farm in western North Carolina, where they remain today, forming the Eastern Cherokee reservation.

The cases in the Marshall Trilogy form the foundation for federal government-Native relationships. As such, they are quite important to the civil, political, legal, and constitutional rights of all Native groups today. Each party, Native and government, today continues to use the often contradictory ideas in the decisions to suit its own purposes.
Our Government is formed from a democratically elected House of Representatives. The Government advises the Sovereign (our head of State). By convention, the Sovereign, the source of all executive legal authority in New Zealand, acts on the advice of the Government in all but the most exceptional circumstances. This system is known as a constitutional monarchy. Our system is based on the principle that power is distributed across three branches of government — Parliament, the Executive, and the Judiciary. Parliament makes the law. The Executive (Ministers of the Crown, also known as the Government) administers the law. The Judiciary interprets the law through the courts.

Head of State
New Zealand's head of State is the Sovereign, Queen Elizabeth II of New Zealand. The Governor-General is the Queen's representative in New Zealand. New Zealand has no single written constitution or any form of law that is higher than laws passed in Parliament. The rules about how our system of government works are contained in a number of Acts of Parliament, documents issued under the authority of the Queen, relevant English and United Kingdom Acts of Parliament, decisions of the courts, and unwritten constitutional conventions.

New Zealand's Parliament consists of the Sovereign and the House of Representatives. The Sovereign's role in Parliament includes opening and dissolving Parliament, and giving the Royal assent to bills passed in the House of Representatives. New Zealand's Parliament is unicameral. This means it has only one chamber (the House of Representatives) and there is no upper house such as a senate. The House of Representatives consists of members of Parliament who are elected as the people's representatives for a term of up to 3 years. The usual number of members of Parliament is 120, but there are electoral circumstances when this could vary.

'Responsible government' is the term used to describe a system where the Government is formed by appointing Ministers who must first be elected members of Parliament. It means that in New Zealand the Government can stay in power only while it has the support ('confidence') of the majority of the House of Representatives. This support can be tested in a confidence vote, such as passing the Budget. Ministers are responsible to Parliament, both collectively for the overall performance of the Government, and individually for the performance of their portfolios.

Proportional representation electoral system
New Zealand's House of Representatives is elected using the mixed member proportional representation (MMP) voting system. Each elector has two votes — one for a local member of Parliament and one for a preferred political party. Political parties are represented in Parliament in proportion to the share of votes each party won in the party vote in the general election. (A toy calculation of this kind of proportional allocation is sketched below.)
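To make "represented in proportion to the share of votes" concrete, here is a small Python sketch of proportional seat allocation. New Zealand allocates seats with the Sainte-Laguë divisor method; the function, the party names, and the vote totals below are my own illustration, not taken from this page:

```python
# Sketch of proportional seat allocation (Sainte-Lague divisor method).
# Party names and vote totals are invented for illustration.

def sainte_lague(votes: dict[str, int], seats: int) -> dict[str, int]:
    """Allocate `seats` among parties in proportion to `votes`."""
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # The next seat goes to the party with the highest quotient v / (2s + 1),
        # where s is the number of seats that party already holds.
        winner = max(votes, key=lambda p: votes[p] / (2 * allocation[p] + 1))
        allocation[winner] += 1
    return allocation

print(sainte_lague({"A": 480_000, "B": 310_000, "C": 210_000}, 10))
# {'A': 5, 'B': 3, 'C': 2} -- seat shares track the 48% / 31% / 21% vote split
```

Because the divisor grows with every seat a party wins, a large party cannot simply sweep every seat, which is what keeps the final seat shares close to the party-vote shares.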
CONCEPT: Work involves the movement of objects through a distance.

CONTENT OBJECTIVE: 1A2.00 To understand work and explain the kinds of motion used in doing work

INSTRUCTIONAL OBJECTIVES: The learner will:
2.01 observe and describe examples of work.
2.02 describe the relationship of work to force.
2.03 classify forces as pushing, pulling, or lifting.

OUTLINE OF CONTENT:
TN COMPONENT OF SCIENCE: Unifying Concepts of Science. To enable students to acquire scientific knowledge by applying concepts, theories, principles and laws from life/environmental, physical, and earth/space science.
2.4 INTERACTIONS - At all levels of living and non-living systems, matter and energy act and react to determine the nature of our environment.
TN STANDARD(S): The learner will understand that:
2.4a Interactions occur on scales ranging from elementary particles to galaxies.
BENCHMARK: Sometimes changing one thing may cause changes in something else. If changes occur in the same manner, similar results may be expected.

Scissors, magazines for pictures, colors, posters about work

What type of work have you done today? (pause) Did you need energy to do your work? (pause) Today we are going to study how work is done. (Discuss the terms WORK, FORCE, and ENERGY. Have the students describe these terms in their own words.)

Work is the amount of energy used to move an object. (Have two students demonstrate this definition of work by moving objects.) Force is a push or pull. Force is needed to move an object and to do work. Force is anything that causes something to move. (Show the students pictures of people doing different kinds of work. Have each student select one picture and describe the work used in the picture.)

(Make cards with different kinds of work. Put the cards in a box and have the children draw out a card. Each child can select another child to help him act out the type of work. Examples: one picks up a pencil, rolls a pencil across the desk, or stands up. Do the hands or legs push, pull or lift?)

What did we discuss today? (response) (Yes, we discussed work, energy and force.) Now class, who can tell me what these terms mean? (Work is the amount of energy used to move an object. Force is a push or pull. Energy gives us power to move an object.) Turn to your neighbor and tell him/her one kind of work you can do. (pause, then summarize)

(Have the children draw a picture and cut out a picture showing work. Each student will show his/her picture and describe the work being done. A short quantitative note on work, for teacher background, follows this lesson plan.)
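The lesson treats work qualitatively (energy used to move an object). For the teacher's own background, the standard quantitative definition can be helpful; the worked example below is an addition of mine, not part of the Stage lesson itself:

```latex
% Teacher background (not part of the lesson): the quantitative definition of
% work is force times the distance moved in the direction of the force.
\[
W = F \, d
\]
% Example: pushing a toy cart with a 10-newton force across 2 meters of floor does
\[
W = 10~\text{N} \times 2~\text{m} = 20~\text{J}
\]
% of work. If the object does not move (d = 0), no work is done,
% no matter how hard the push.
```

This matches the lesson's classification activity: pushing, pulling, and lifting all do work only when the object actually moves.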
Have Nasa scientists found 'trees' on Mars?

At first glance this hilly desert landscape appears to show islands of trees casting shadows on reddened soil. But this strange winter wonderland of dusty dunes, icy rivulets and dark outcrops lies 62 million miles away on Mars. And the 'trees,' pictured by a Nasa probe, are actually trails of dislodged sand. In winter a layer of carbon dioxide ice covers the dunes, but this evaporates in spring, causing dark material to streak down the slopes.

Sand dislodged from the crests of the Martian dunes cascades down, forming dark streaks

A small plume of dust, kicked up by the falling debris, is even visible to the centre left of the image. Candy Hansen from Nasa added: 'The colour of the ice surrounding adjacent streaks of material suggests that dust has settled on the ice at the bottom after similar events.' The probe was studying the vast region of sand dunes in the high northern latitudes of Mars. The image has a resolution of 15 inches per pixel. The picture was taken by the powerful HiRISE camera on board the Mars Reconnaissance Orbiter - a probe that has been circling the Red Planet since 2006.

The orbiter was commissioned to search for evidence that water persisted on the surface of Mars for a substantial period of time - long enough to provide a habitat for life. For that reason it has focused much of its attention on the cold polar regions. Mars, like Earth, is believed to have experienced global climate changes over the past few million years, and the layered deposits in the polar regions of Mars should have recorded these changes over millions of years. Changes in the tilt (relative to the Sun) of the rotation axes of both planets are thought to have influenced their climates, but these changes were larger in the Martian case. For this reason, and because of the apparent lack of recent oceans and life on Mars, it should be simpler to determine the causes and history of climate changes on Mars. So HiRISE is returning images of the polar layered deposits on Mars that have the potential to help unravel Mars' climate history.

The orbiter is also used to analyse more active regions. Nasa released a stunning image yesterday that showed the youngest flood lava on Mars. The region is in Athabasca Valles, in the Elysium Planitia region of equatorial Mars. Although the dramatic image resembles the patterns made by an oil slick, the colour variations are from the bright layered deposits on the plateau.

Scientists believe this region of equatorial Mars was once very active

Researchers have found the deposits contain opaline silica and iron sulfates, which you would expect to find if acidic water once flowed over basaltic materials at low temperatures. Scientists believe water activity affected this plateau after the formation of the nearby canyons, although the source of the water and sediment remains uncertain.
(1792-1822) Although he died before he was 30, the English lyric poet Percy Bysshe Shelley created masterpieces of Romantic poetry. Among them are such lyrics as 'The Cloud', 'To a Skylark', and 'Ode to the West Wind'. The critic Matthew Arnold characterized Shelley the reformer as "a beautiful and ineffectual angel, beating in the void his luminous wings in vain." His poetry became the vehicle for his idealism. William Godwin's book, 'An Enquiry Concerning Political Justice' profoundly affected Shelley while he was at school. The ideas inspired in Shelley by the book were expressed in his poem 'Queen Mab', written in 1813, and in his desire to effect reform in the world. In 1818 Shelley left England for Italy. There he wrote the works that represent his greatest achievement as a poet. His long tragedy 'The Cenci' (1819); the lyrical drama 'Prometheus Unbound' (1820); and his elegy on the death of John Keats, 'Adonais', were all written during this period. Visit http://www.kirjasto.sci.fi/pshelley.htm to learn more....
An international team of astronomers has used nearly three years of high precision data from NASA's Kepler spacecraft to make the first observations of a planet outside our solar system that's smaller than Mercury, the smallest planet orbiting our sun. The planet is about the size of the Earth's moon. It is one of three planets orbiting a star designated Kepler-37 in the Cygnus-Lyra region of the Milky Way. "Owing to its extremely small size, similar to that of the Earth's moon, and highly irradiated surface, Kepler-37b is very likely a rocky planet with no atmosphere or water, similar to Mercury," the astronomers wrote in a summary of their findings. "The detection of such a small planet shows for the first time that stellar systems host planets much smaller as well as much larger than anything we see in our own Solar System." The planet orbits too close to its sun-like star and thus is too hot to support life--its surface temperature is an estimated 700 degrees Fahrenheit. It also lacks an atmosphere and water on its rocky surface. It's been almost 20 years since the first planet was found outside our solar system. Since then, thanks to Kepler (which was launched in 2009), astronomers have discovered more and more of them: over 800 so far. On the ABC News website, Alicia Chang quotes astronomer Geoff Marcy as saying that the latest find is "absolutely mind-boggling. This new discovery raises the specter that the universe is jampacked, like jelly beans in a jar, with planets even smaller than Earth." Meanwhile, giant telescopes like the one now being built in Chile could hunt for alien life by detecting oxygen on exoplanets. On Earth, plants and some bacteria are the only sources of large amounts of atmospheric oxygen. Finding oxygen on an exoplanet would therefore signal the possibility of life as we know it.
From "silent geometry" to logic puzzles, from tangrams to myths, legends and folktales, Rosa Lee Carter Elementary students will once again be using many different thinking skills to stretch their brains and achieve higher levels of thinking. In order to help students achieve these goals, the SEARCH curriculum spirals developmentally through five components: Perceiving, Reasoning, Connecting, Creating, and Evaluating. Each grade level learns about each component at increasingly more complex and abstract levels. These components are presented to students in grades K-3 as keys to higher level thinking. Ask your child about the SEARCH lessons and the keys that unlock his or her power thinking.
An important part of astronomy is knowing where to point a telescope and keeping track of the positions of objects in the sky. To do this, astronomers use angular measure. An angle is the opening between two lines that meet at a point, and angular measure describes the size of an angle in degrees, designated by the symbol °. A full circle is divided into 360° and a right angle measures 90°. One degree can be divided into 60 arcminutes (abbreviated 60 arcmin or 60'). An arcminute can in turn be divided into 60 arcseconds (abbreviated 60 arcsec or 60").

Astronomers use angular measure to describe the apparent size of an object. The angle covered by the diameter of the full moon is about 31 arcmin, or 1/2°, so astronomers would say the Moon's angular diameter is 31 arcmin, or that the Moon subtends an angle of 31 arcmin.

If you extend your hand to arm's length, you can use your fingers to estimate angular distances and sizes in the sky. Your index finger is about 1° wide and the distance across your palm is about 10°.

The angular size of an object shows how much of the sky the object appears to cover. Angular size does not, however, say anything about the actual size of an object. If you extend your arm while looking at the full moon, you can completely cover the moon with your thumb. Of course, the moon is much larger than your thumb; it only appears smaller because of its distance. How large an object appears depends not only on its size, but also on its distance. The angular size of an object, its actual (linear) size, and its distance are related by the small-angle formula:

D = θ d / 206,265

D = linear size of an object
θ = angular size of the object, in arcsec
d = distance to the object
(206,265 is the number of arcseconds in one radian.)

Example: A certain telescope on Earth can see details as small as 2 arcsec. At what greatest distance could you see details as small as the height of a typical person (1.6 m)?

d = 206,265 D / θ = 206,265 × 1.6 m / 2 = 165,012 m ≈ 165 km

This is much less than the distance to the Moon (approximately 384,000 km), so this telescope would not be able to see an astronaut walking on the Moon. (In fact, no Earth-based telescope could.)

Some calculations to try (a Python sketch that checks both answers follows below):
1. The average distance to the Moon is approximately 384,000 km. The Moon subtends an angle of 31 arcminutes, or about 1/2°. Use this information and the small-angle formula to find the diameter of the Moon in kilometers.
2. At what distance would you have to hold a quarter (which has a diameter of about 2.5 cm) for it to subtend an angle of 1°?

Answers:
1. The diameter of the Moon is about 3,463 km.
2. You would have to hold it at a distance of about 1.43 meters.
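Here is a minimal Python sketch of the small-angle formula. The function names are my own, but the constant and the arithmetic come straight from the text above:

```python
# Small-angle formula from the text: D = theta * d / 206265, where D is linear
# size, theta is angular size in arcseconds, and d is distance (same unit as D).
# 206,265 is the number of arcseconds in one radian.

ARCSEC_PER_RADIAN = 206_265

def linear_size(theta_arcsec: float, distance: float) -> float:
    """Linear size of an object with angular size theta at the given distance."""
    return theta_arcsec * distance / ARCSEC_PER_RADIAN

def distance_for(theta_arcsec: float, size: float) -> float:
    """Distance at which an object of the given linear size subtends theta."""
    return ARCSEC_PER_RADIAN * size / theta_arcsec

# Worked example from the text: 2-arcsec resolution, 1.6 m person.
print(distance_for(2, 1.6))            # ~165012 m, i.e. ~165 km

# Practice problem 1: Moon's diameter (31 arcmin = 1860 arcsec at 384,000 km).
print(linear_size(31 * 60, 384_000))   # ~3463 km

# Practice problem 2: distance for a 2.5 cm quarter to subtend 1 deg (3600 arcsec).
print(distance_for(3600, 0.025))       # ~1.43 m
```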
When a heavy atom (such as uranium or plutonium) undergoes fission, it splits into two lighter fission products. This splitting process also yields two or three neutrons, which can cause other heavy atoms to fission, as well as a huge amount of energy, which nuclear engineers convert into electric power. The two fission product atoms are not the same two atoms every time. Nuclear scientists can predict the distribution of fission products through physical models, but generally this is measured experimentally to ensure accuracy.

When fission products are first produced, they are highly unstable and rapidly decay (usually β- decay) multiple times until they become relatively stable nuclides with long half-lives. All this decaying generates quite a bit of energy, which we call decay heat. So even after the fission reaction completely stops (which it did immediately following the earthquake), fission products continue to produce energy for a long period of time. This energy is large enough to melt the fuel if the fuel is not cooled, and cooling the fuel is what the reactor operators at Fukushima have been struggling with for several days. The fission reaction was never out of control – only the decay heat cooling systems were out of control.

Although most fission products are considered waste, some are very important to the operation of a nuclear reactor and have specific uses. Two of the fission products, xenon-135 and samarium-149, are prolific neutron absorbers (called "neutron poisons") and can substantially affect control of the fission reaction during normal operation. Others, especially molybdenum-99, which eventually decays to technetium-99m, are used to produce "medical isotopes" that are essential for diagnostic testing for numerous life-threatening illnesses. Each year, 40 million people worldwide undergo necessary testing with technetium-99m. If you've ever had a nuclear medicine procedure, the chances are high that what they put into your body came straight out of a nuclear reactor – and if they hadn't put it into your body, it would have been considered nuclear waste!

Fission products remain inside the fuel under normal circumstances. When fuel resides in the core, it contains an amount of fission products proportional to the total energy it generated. When the fuel is depleted, it is moved to spent fuel pools and ultimately to dry cask storage, long-term repositories, or reprocessing facilities. At the Fukushima nuclear power plants, fuel inside the core (and possibly the spent fuel pools) is suspected to have been damaged. Because of this, some fission products, especially the gaseous products, have likely been released. We do not currently have enough information to know exactly which fission products have been released, or in what amounts.

Not all fission products are harmful. Although a few are gaseous, which enables them to travel long distances through the atmosphere, most are not highly mobile and will thus remain localized near the reactor site. Although nearly all fission products emit radiation, only some are potentially harmful to humans. The chart below lists various important fission products along with their yields – the frequency at which they are produced from fission. For example, 6.3% of fission events (on average) will produce xenon-135 (after the highly unstable fission products rapidly decay). The half-life is a general time scale for how long the listed radioactive fission product will exist before decaying to a more stable fission product.
Note that cesium and iodine, which were detected near the Fukushima site, are by far the most frequently occurring radioactive fission product elements.

| Yield | Fission product | Half-life |
|-------|-----------------|-----------|
| 6.3% | iodine-135 / xenon-135 | 7 hours |
| 6.3% | zirconium-93 | 1.5 million years |
| 6.1% | molybdenum-99 / technetium-99** | 200,000 years |
| 0.7% | iodine-129 | 15 million years |
| 0.2% | palladium-107 | 7 million years |

*Cs-133 is stable but has a high fission yield; it then produces Cs-134 by absorbing neutrons in the reactor, and Cs-134 is radioactive with a ~2 year half-life.

**The half-life reported in the table is for Tc-99. Mo-99 has a half-life of ~66 hours, and decays to Tc-99m (the metastable form of Tc-99) with a half-life of ~6 hours. The Tc-99m then decays to the Tc-99 with the 200,000-year half-life reported in the table.

Note that longer half-lives do not necessarily mean more danger. Some fission products have extremely long half-lives but emit very little radiation at any given time, so they are not dangerous. Other fission products emit huge amounts of radiation but exist for such a short period of time that they are not dangerous. How harmful a given fission product is to humans is a complicated function of half-life, radiation intensity, and various human biology factors. (The short sketch below shows the half-life arithmetic behind this point.)
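As a rough illustration of the half-life point, the fraction of a nuclide remaining after time t falls by half every half-life. The function below is my own sketch, and the time spans are chosen for illustration, not taken from the article:

```python
# Fraction of a radioactive nuclide remaining after time t,
# given its half-life (both in the same units):
#   N(t) / N0 = 0.5 ** (t / t_half)

def fraction_remaining(t: float, t_half: float) -> float:
    """Fraction of the original nuclide left after time t."""
    return 0.5 ** (t / t_half)

# Xenon-135 (half-life ~7 hours): after 3 days, essentially none is left.
print(fraction_remaining(72, 7))            # ~0.0008

# Iodine-129 (half-life ~15 million years): after 3 days effectively all remains,
# but its decay rate -- and hence its radiation intensity -- is correspondingly tiny.
print(fraction_remaining(72 / 8766, 15e6))  # ~1.0  (72 hours expressed in years)
```

The same arithmetic underlies the decay-heat curve: the short-lived products that dominate heat output right after shutdown burn themselves out quickly, which is why decay heat falls steeply over the first hours and days.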
Throughout much of Ohio's history, women have not had the same rights that men have had. Even after women gained the right to vote with the passage of the Nineteenth Amendment to the United States Constitution, women were still not treated equally when it came to economic, legal, and social rights. Although laws were beginning to change, many people's attitudes were not changing. One example of the ongoing debate about women's rights was the Dunn Bill. Pat Dunn, a state representative from Stark County, introduced the Dunn Bill to the Ohio legislature in 1939. Also known as House Bill 26, the Dunn Bill would have prohibited the state government from employing married women. Many Ohioans believed that, if married women worked outside of their homes, their husbands and children would suffer. They argued that, if married women worked, it could lead to the destruction of the home and traditional family values. The Dunn Bill had significant support within the state legislature, but a number of women's organizations in Ohio opposed it. Members of Ohio branches of the National Women's Party, the League of Women Voters, and the Ohio Federation of Business and Professional Women's Clubs testified against the Dunn Bill. Ultimately, the bill was not passed into law. Two years later, Dunn once again introduced a version of the bill to the Ohio House of Representatives, but the need for wartime labor during World War II meant that there was even less support for his ideas than before.
Act: Legislation that has passed both Houses of Congress and has been either approved by the President, or passed over his veto, thus becoming law. Also used technically for a bill that has been passed by one House of Congress.

Alien: A person residing under a government or in a country other than that of one's birth without being a citizen of that non-native country.

Amendment: A proposal by a Member (in committee or floor session of the respective Chamber) to alter the language or provisions of a bill or act. It is voted on in the same manner as a bill. The Constitution of the United States, as provided in Article 5, may be amended when two thirds of each house of Congress approves a proposed amendment and three fourths of the states thereafter ratify it.

Anti-Federalists: Opponents to the adoption of the federal Constitution. Leading Anti-Federalists included George Mason, Elbridge Gerry, Patrick Henry, and George Clinton.

Autonomy: Independence or freedom; the right of self-government.

Bill: Formally introduced legislation. Most legislative proposals are in the form of bills and are designated as H.R. (House of Representatives) or S. (Senate), depending on the House in which they originate, and are numbered consecutively in the order in which they are introduced during each Congress. Public bills deal with general questions and become Public Laws, or Acts, if approved by Congress and signed by the President. Private bills deal with individual matters such as claims against the Federal Government, immigration and naturalization cases, land titles, et cetera, and become private laws if approved and signed.

Bicameral: The characteristic of having two branches, chambers, or houses, such as the United States Congress, which is composed of the Senate and the House of Representatives.

Bill of Rights: The first ten amendments to the United States Constitution.

Calendar: A list of bills, resolutions, or other matters to be considered before committees or on the floor of either House of Congress.

Centralized Government: A form of government in which the national government maintains the power.

Checks and Balances: A system of limits imposed by the Constitution of the United States on all branches of a government by vesting in each branch the right to amend or void those acts of another that fall within its jurisdiction.

Citizen: A native or naturalized member of a state or nation who owes allegiance to its government and is entitled to its protection.

Cohesive: The state of uniting or sticking together.

Commerce: The traffic in goods, usually thought of as trade between states or nations.

Concurrent Powers: Duties shared by both the national government and state governments, such as collecting taxes, building roads, and making/enforcing laws.

Confirmation: Action by the Senate approving Presidential nominees for the executive branch, regulatory commissions, and certain other positions.

Decennial: Occurring every ten years.

Delegate: A person designated to act for or represent another or others; a deputy; representative, as in a political convention.

Democratic: Characterized by the principle of political or social equality for all.

Dual Federalism: A system of government where the states governed the people directly and the national government concerned itself with issues relating to foreign affairs.

Elastic Clause: A statement in the U.S. Constitution granting Congress the power to pass all laws necessary and proper for carrying out the list of powers it was granted.
Enrolled Bill: Legislation that has been passed by both houses of Congress, signed by their presiding officers, and sent to the President for signature.

Federal: A union of groups or states in which each member agrees to give up some of its governmental power in certain specified areas to a central authority.

Federalism: A union of states in which sovereignty is divided between a central authority and the member state authorities.

Federalists: A group of people who supported the adoption of the Constitution. Leading Federalists included Alexander Hamilton, James Madison, and John Jay.

Fiscal Year: A twelve-month accounting period used by the Federal Government that goes from October 1st to September 30th. Currently, the Government is in FY07, which goes from October 1, 2006 to September 30, 2007.

Gerrymandering: Drawing of district lines to maximize the electoral advantage of a political party or faction. The term was first used in 1812, when Elbridge Gerry was Governor of Massachusetts, to characterize the State redistricting plan.

Hearing: A meeting or session of a committee of Congress, usually open to the public, to obtain information and opinions on proposed legislation, conduct an investigation, or oversee a program.

Hopper: A box into which a proposed legislative bill is dropped and thereby officially introduced.

Immigrant: A person who migrates to another country, usually for permanent residence.

Impeachment: A formal accusation issued by a legislature against a public official charged with crime or other serious misconduct.

Independent: When a person or thing is not influenced or controlled by others in matters of opinion, conduct, etc.; thinking or acting for oneself.

Indirect popular election: Instead of voting for a specific candidate, voters select a panel of individuals pledged to vote for a specific candidate. This is in contrast to a popular election, where votes are cast for an individual candidate. For example, in a general presidential election, voters select electors to represent their vote in the Electoral College, rather than voting for an individual presidential candidate.

Initiative: A procedure by which a specified number of voters may propose a statute, constitutional amendment, or ordinance, and compel a popular vote on its adoption.

Judicial Review: The power of a court to judge the constitutionality of the laws of a government or the acts of a government official.

Law: A rule of conduct established and enforced by the authority, legislation, or custom of a given community, state, or nation.

Legislative Day: A formal meeting of a House of Congress which begins with the call to order and opening of business and ends with adjournment. A legislative day may cover a period of several calendar days, with the House recessing at the end of each calendar day, rather than adjourning.

Line-Item Veto: The power of the executive to disapprove of particular items of a bill without having to disapprove of the entire bill.

National: A person under the protection of a specific country. A citizen or subject.

Naturalization: The official act by which a person is made a national of a country other than his native one.

Pocket Veto: The disapproval of a bill brought about by an indirect rejection by the president. The president is granted ten days, Sundays excepted, to review a piece of legislation passed by Congress. Should he fail to sign a piece of legislation and Congress has adjourned within those ten days, the bill is automatically killed.
The process of indirect rejection is known as a pocket veto.

Primary Election: An election held to decide which candidates will be on the November general election ballot.

Public Law: A bill or joint resolution (other than for amendments to the Constitution) passed by both Houses of Congress and approved by the President. Bills and joint resolutions vetoed by the President, but then overridden by the Congress, also become public law.

Ratification: Two uses of this term are: (1) the act of approval of a proposed constitutional amendment by the legislatures of the States; (2) the Senate process of advice and consent to treaties negotiated by the President.

Reapportionment: The process by which seats in the House of Representatives are reassigned among the States to reflect population changes following the decennial census.

Redistricting: The process within the States of redrawing legislative district boundaries to reflect population changes following the decennial census.

Referendum: The submission of a law, proposed or already in effect, to a direct vote of the people.

Report: The printed record of a committee's actions, including its votes, recommendations, and views on a bill or question of public policy, or its findings and conclusions based on oversight inquiry, investigation, or other study.

Republic: A state or nation in which the supreme power rests in all the citizens entitled to vote. This power is exercised by representatives elected, directly or indirectly, by them and responsible to them.

Separation of Powers: The distribution of power and authority among the legislative, executive, and judicial branches of the government.

Sovereign: Above or superior to all others; chief; greatest; supreme dominion or power.

Tabling Motion: A motion to stop action on a pending proposal and to lay it aside indefinitely. When the Senate or House agrees to a tabling motion, the measure which has been tabled is effectively defeated.

Veto: The constitutional procedure by which the President refuses to approve a bill or joint resolution and thus prevents its enactment into law. A regular veto occurs when the President returns the legislation to the originating House without approval. It can be overridden only by a two-thirds vote in each House. A pocket veto occurs after Congress has adjourned and is unable to override the President's action.

Vote -- The Responsibility, Duty and Honor of Every American Citizen. A formal expression of opinion or choice, either positive or negative, made by an individual or body of individuals. To enact, establish, or determine by vote: to vote a proposed bill into law.

V O T I -- Vote Out The Incumbent. This is the power of We the People to ensure that politicians who do not work for us do not stay in government office past one term. Some of our government seats have Term Limits; unfortunately, other positions do not. V O T I gives the People the power to correct corruption.
Three Types of Learning Styles
There are three major types of learners: visual, auditory, and tactile/kinesthetic. Each person has a learning style that is best for his or her intake and comprehension of new information. Visual learners generally think in terms of pictures and learn best from visuals and handouts. Auditory learners learn best by listening. They usually like lectures and classroom discussions, and they might need to read written material aloud in order to fully understand it. Tactile/kinesthetic learners learn through touching, feeling, and experiencing the world around them. They do well with hands-on experiments, but they may have a hard time sitting through lectures and notes. A visual learner is someone who needs to see a word written down to remember it. An auditory learner would remember a word better by hearing it or saying it out loud. A tactile/kinesthetic learner would probably choose to write down the word in order to learn it best. Many people have a single learning style that works best for them. However, barring a physical disability, you can actually learn through all three learning styles.
Effective Use of Learning Styles
As a teacher, you will find that many of your students are best at tactile/kinesthetic learning. Because traditional classroom teaching techniques often target visual and auditory learning styles, these students get bored and have trouble concentrating. It can be hard to incorporate tactile/kinesthetic learning all of the time. Don't try to force the issue, but whenever possible, look for lessons that lend themselves to this type of learning. For example, simulations and roleplaying allow students to get more hands-on and actually experience what they are learning.
Ineffective Use of Learning Styles
As you consider your students' dominant learning styles, don't go overboard and assume that they cannot learn in other ways. While other styles might be more difficult for them, your students should learn to adapt to all types of instruction. You can help them prepare for less sympathetic teachers by showing them techniques they can use to enhance their learning through each type of style.
On this day in 1777, the Continental Congress promotes Colonel Ebenezer Learned to the rank of brigadier general of the Continental Army. Learned was an experienced military man who served the British during the French and Indian War. In 1757, he contracted smallpox at Fort Edward near Lake George in New York and spent a month confined to the hospital. At the end of the war, he returned to farming in Oxford, Massachusetts. Upon the outbreak of the American Revolution, though, Learned became active in a local militia before being named colonel of the Massachusetts Committee of Safety in April 1775. Learned was given command of the pivotal Dorchester Heights position at the siege of Boston in March 1776 by General George Washington and was the first to enter Boston after the British evacuation. Due to illness, Learned was forced to temporarily resign his position in May 1776, but returned to active duty in April 1777. After being promoted to brigadier general, Learned was reassigned to the Northern Department of the Continental Army, leading troops in several battles, including the Battle at Freeman's Farm in September 1777 and the pivotal Battle of Saratoga in October 1777. Following the American victory at the Battle of Saratoga, General Learned was ordered to join General Washington at Valley Forge, where Learned formed and commanded a division within the Massachusetts Brigade under General Baron Johan DeKalb. Due to continued health problems, Learned was forced to resign his position for good in March 1778, but continued to serve Massachusetts in several elected positions until his death in 1801.
To escape prying predators, fragile fauna often become masters of the art of disguise. Take the leaf insects of Southeast Asia: They so strongly resemble leaves that they blend in with the surrounding foliage. Because the 37 species of leaf insect now live in one corner of the planet, entomologists had assumed that their camouflage was a relatively recent adaptation. In January 2007, however, researchers announced the discovery of the first fossil leaf insect: a well-preserved, 47-million-year-old specimen from Messel, Germany. Named Eophyllium messelensis, the insect looks almost identical to its modern relatives, indicating the group is ancient, was once widespread, and has hardly changed in millions of years. Biologist Sonja Wedmann, then at the Institute of Paleontology at the University of Bonn, analyzed the fossil after it was dug up from oil shale deposits in what was once a small lake formed by volcanic activity. The period when the insect lived, the Eocene, was one of the warmest in history, and lush tropical or subtropical rain forest surrounded the lake; the two-and-a-half-inch-long adult male most likely sat and snacked upon the leaves of plants from the laurel or the pea family. So complete is the fossil—its head, antennae, thorax, wings, legs, and leaf-shaped abdomen intact—that it provides key clues on how it hid from predatory birds and primates: Its curved forelegs form a notch into which the insect could tuck its head. “We can infer that the fossil leaf insect showed the same behavior as extant leaf insects do,” Wedmann says. “It is hiding its head between the legs and sitting still on the leaf during the day, and when it is disturbed, it rocks from side to side like a leaf.”
The Leonid Meteor Shower
Meteors: Showers and Storms
Watching the sky at night, a casual observer may see from 3 to 5 sporadic meteors per hour. However, on some nights this number may increase markedly, and on projecting the paths of the meteors back we find that many appear to radiate from a very small area in the sky. This point or area is termed the radiant of the meteor shower. There are several regular meteor showers that occur each year, and typical hourly rates may vary from 5 to 50 (above the sporadic background rate). Very infrequently, and sometimes without warning, an hourly rate in excess of several hundred shower meteors may occur. This is generally termed a meteor storm.
Comets and Meteors
A meteor shower may occur when the Earth passes near the past orbit of a comet (or, in rare cases, an asteroid - e.g. the Geminid meteor shower in December). Comets travel in highly eccentric orbits, and solar heating causes outgassing and the release of small, low-density particles along the orbit. Gravitational perturbations from planets and other effects cause these particles to spread out around and away from the orbit. A new comet will leave behind a concentrated stream of debris that the Earth may pass by in a few hours. After a very long lapse of time, this debris will spread out into a very broad stream, and it may take the Earth several days to pass through it. Old debris streams obviously tend to have a lower density of particles than newer, compact streams. When the Earth intercepts a particle debris stream, the individual particles travel through the atmosphere. Large frictional forces heat each particle and the surrounding atmosphere, and a visible meteor is seen. The meteor typically forms at around 100 km altitude. Few particles or meteoroids survive below 80 km.
The Leonid meteor shower occurs from about 14 to 20 November as the Earth passes through an old debris stream left by past passages of the comet Tempel-Tuttle. The maximum rate occurs within a day or so of November 17 and is usually less than 10 per hour. The meteors appear to come from a radiant that lies within the "sickle" of the constellation of Leo (hence the name). An unusual feature of this stream is that it is often associated with some fairly bright meteors that may leave a trail (called a train) behind that is visible for many seconds, and sometimes even minutes. The meteors travel very fast, and the brighter meteors may show a golden colour. In fact these meteors are the fastest of any meteor stream so far observed.
Radiant and observing data (times vary with latitude and are approximate for the Australasian area):
- Right Ascension: 153 degrees / 10h 12m
- Declination: +22 degrees
- Rises: 01:30 Local Time; Transits: 07:00 LT; Sets: 12:00 LT
- Dates of detectable meteors: 14 - 20 November, with a broad peak of 4 days centred around November 17
- Visual hourly rate: 10 during 03:00 to 04:00 Local Time (this varies enormously with year and observing conditions)
Leonid Meteor Storms
Comet Tempel-Tuttle has an orbital period of about 33 years, and this period has been associated with massive increases in meteor numbers. That is, if the Earth passes near this comet's orbit close to the time that the comet itself has passed by the same point, then recent concentrated particle outbursts may be intercepted and cause a meteor storm. Two such famous Leonid storms occurred in 1833 and in 1966, both with peak meteor rates estimated at up to 100,000 per hour!
Predictions from orbital studies and debris simulations indicate that 1998 or 1999 may be years in which significant Leonid activity again occurs. Estimated rates vary from 100 to 100,000 per hour. Such predictions are notoriously unreliable, and in fact two different simulations have predicted two quite different outcomes. In 1998, Australia (and particularly Western Australia) is in the most favoured position to observe any significant increase in Leonid meteor activity. A meteor storm is most favoured to occur during the hours of 1900 to 2000 Universal Time, and this corresponds to 0300 - 0400 Western Australian Standard Time, the best hour in which to observe meteor activity in general. There is no moon to interfere with observations, and even if no storm eventuates, most people should see a reasonable collection of bright meteors during this time. Apart from the visual display (and a meteor storm is indeed a very awe-inspiring sight - some have even credited the 1833 storm with inspiring a number of new religions that are still with us today), a meteor storm has a number of implications for the space environment. As well as the visual phenomenon, a meteor also (and in fact primarily) produces a column of ionisation in the upper atmosphere at altitudes of between 150 and 50 km. This ionisation will affect the propagation of radio waves that pass by. Low-frequency VHF signals (30 - 100 MHz) will be reflected by the ionisation trail and may enable brief communications over distances up to 2200 km. This propagation mechanism is known as meteor burst communication and is a low-cost way of obtaining data from remote sites. Essentially what happens is that the acquiring site broadcasts a pilot tone. When the remote site detects this tone (from a meteor event), it quickly (in a fraction of a second) transmits its data. This method of communication is relatively secure (due to high trail aspect selection), good in an electromagnetically disturbed environment, and very suitable for high latitude sites where satellites are difficult to access. However, what can be beneficial to one user is interference to another. Normal VHF communication, which is essentially line of sight, may receive interference (via meteor propagation) from other more remote sites. High Frequency radars (eg OTH-B) may experience interference due to meteor echoes, which can be spread widely in Doppler space. In the instance of a meteor storm, the ionisation created may be so intense that a new low altitude ionospheric layer is created. If this layer is widely distributed it may act to block signals reaching the higher F-layer, and create an effect similar to blanketing sporadic-E. It will also undoubtedly allow reflection of VHF signals on a continuous basis, with concomitant interference to TV transmissions, especially in areas of low primary signal strength. Apart from the aspect of radio propagation, which may be beneficial, benign or destructive, there is also the element of physical damage to space systems by the particles themselves. These meteoroids may strike a spacecraft and cause varying degrees of damage according to their size and density. Most shower meteoroids are very low density (below that of water), friable or crumbly material. Still, the damage they can cause increases with their velocity, and at Leonid velocities complete vapourisation of the impactor and the impacted will result.
At the very least, a collision between a Leonid meteoroid and a spacecraft could result in "sandblasting" of optical surfaces (eg solar cells) with a consequent reduction of efficiency. The Hubble Space Telescope will in fact be turned to face the opposite direction to the Leonid particles during times of predicted activity. In the extreme case, a large Leonid meteoroid could impact a satellite and cause sufficient damage so as to render the craft unusable. During the early days of space travel, the meteoroid hazard in general was considered a significant threat, and NASA spent a considerable amount of money in trying to define the hazard's nature and extent. Fortunately, the threat proved to be a lot less worrisome than first thought, although for large structures, such as the International Space Station (ISS), which will orbit for lengthy periods, the meteoroid threat does become significant, and shielding against this threat will be carried by the ISS. If a Leonid storm rate of 100,000 per hour does eventuate, several predictions have been made that at least one if not several geosynchronous and/or low Earth orbiting satellites may be rendered inoperable. It has also been pointed out that the debris hazard at the Earth-Sun Lagrange point (normally known as the L1 point) may be even more intense than at the Earth. This area is host to a number of scientific satellites that observe the Sun and measure parameters of the space environment. If any of these were to be destroyed, we would lose a substantial space weather predictive ability which we have only recently attained. When a meteoroid is massive enough and of suitable internal strength, a percentage of it may survive the ablation that it suffers in its progress through the atmosphere and impact the Earth's surface, whence it is then known as a meteorite. However, cometary material is generally neither massive enough nor structurally strong enough to survive entry through a planetary atmosphere. We thus do not expect a meteor shower such as the Leonids to be a hazard to the biosphere, or even to drop any meteorites. Most meteorites are believed to originate from asteroid fragments. These also produce visible meteors. A one gram piece of material is generally believed to produce a magnitude zero meteor - that is, one as bright as the brightest stars. To drop a meteorite, the general rule is that the parent meteoroid has to have a mass greater than 100 kilograms, and produce a meteor of magnitude -18 (at night this is bright enough to light up all surrounding objects and allow an observer to read a newspaper with ease). The meteorite that might result from such a meteoroid will typically have a mass of only one kilogram, or 1% of its progenitor. Visually, the early morning hours of November 18 (Australian local time - which is Nov 17, 1900 UT) may see an intensely spectacular Leonid meteor storm. It is more probable, however, that a much less intense shower of a few hundred meteors per hour will eventuate. Some space environmental effects may be noted, as outlined above. Those with cloudy skies may still be able to monitor any activity by tuning to a distant FM station (not normally received), using an outdoor antenna, and listening for bursts of signal. Rising early on this morning is probably a worthwhile activity, whatever eventuates.
Finding Out More
The November 1998 issue of Sky and Telescope has many interesting and well written articles on the Leonids, both past and present. Definitely a collector's item.
Cambridge University Press has just released a book by Mark Littmann entitled The Heavens on Fire: The Great Leonid Meteor Storms. This is the only book so far written exclusively about the Leonids. It is 288 pages in length, and its ISBN is 0-521-62405-3.
Material prepared by John Kennewell
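A rough way to connect the mass and brightness figures quoted above is a log-linear interpolation between the article's two rules of thumb (1 g gives a magnitude 0 meteor; 100 kg gives magnitude -18). The Python sketch below only illustrates that assumed scaling; real meteor brightness also depends on velocity, composition, and entry geometry, which this toy model ignores.

```python
import math

def meteor_magnitude(mass_g: float) -> float:
    """Toy estimate of visual magnitude from meteoroid mass (grams).

    Assumes a log-linear fit through the article's two anchor points:
    1 g -> magnitude 0, and 100 kg (1e5 g) -> magnitude -18.
    """
    return -3.6 * math.log10(mass_g)

for mass_g in (1, 100, 10_000, 100_000):
    print(f"{mass_g:>7} g -> magnitude {meteor_magnitude(mass_g):+.1f}")
```

On this toy scale a 1 kg meteoroid comes out near magnitude -11, far brighter than any star or planet, consistent with the article's point that meteorite-dropping events are rare, dramatic fireballs.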
An equation is a mathematical statement, in symbols, that two things are exactly the same (or equivalent). Equations are written with an equal sign, as in 2 + 3 = 5. The equation above is an example of an equality: a proposition which states that two constants are equal. Equalities may be true or false. Equations are often used to state the equality of two expressions containing one or more variables. In the reals we can say, for example, that for any given value of x it is true that x(x - 1) = x^2 - x. The equation above is an example of an identity, that is, an equation that is true regardless of the values of any variables that appear in it. The following equation is not an identity: x^2 = x. It is false for an infinite number of values of x, and true for only two, the roots or solutions of the equation, x = 0 and x = 1. Therefore, if the equation is known to be true, it carries information about the value of x. To solve an equation means to find its solutions. Many authors reserve the term equation for an equality which is not an identity. The distinction between the two concepts can be subtle; for example, (x + 1)^2 = x^2 + 2x + 1 is an identity, while x^2 + 2x + 1 = 2x + 2 is an equation, whose roots are x = 1 and x = -1. Whether a statement is meant to be an identity or an equation carrying information about its variables can usually be determined from its context. Letters from the beginning of the alphabet, like a, b, c..., often denote constants in the context of the discussion at hand, while letters from the end of the alphabet, like x, y, z..., are usually reserved for the variables, a convention initiated by Descartes. If an equation in algebra is known to be true, the following operations may be used to produce another true equation:
1. Any quantity can be added to both sides.
2. Any quantity can be subtracted from both sides.
3. Both sides can be multiplied by any quantity.
4. Both sides can be divided by any nonzero quantity.
5. Generally, any function can be applied to both sides. (However, caution must be exercised to ensure that one does not encounter extraneous solutions.)
The algebraic properties (1-4) imply that equality is a congruence relation for a field; in fact, it is essentially the only one. The best-known system of numbers which allows all of these operations is the real numbers, which is an example of a field. However, if the equation were based on the natural numbers, for example, some of these operations (like division and subtraction) may not be valid, as negative numbers and non-whole numbers are not allowed. The integers are an example of an integral domain which does not allow all divisions as, again, whole numbers are needed. However, subtraction is allowed, and is the inverse operator in that system. If a function that is not injective is applied to both sides of a true equation, then the resulting equation will still be true, but it may be less useful. Formally, one has an implication, not an equivalence, so the solution set may get larger. The functions implied in properties (1), (2), and (4) are always injective, as is (3) if we do not multiply by zero. Some generalized products, such as a dot product, are never injective.
- Mathematical equation plotter: Plots 2D mathematical equations, computes integrals, and finds solutions online.
- Equation plotter: A web page for producing and downloading pdf or postscript plots of the solution sets to equations and inequations in two variables (x and y).
- WZGrapher: A Windows freeware program that plots Cartesian and polar equations, with both integration and differentiation solvers and graphing capabilities.
- EqWorld — contains information on solutions to many different classes of mathematical equations. - EquationSolver: A webpage that can solve single equations and linear equation systems. - WebGraphing.com: Online Equation Plotter with Automatic Table of Coordinates - Online Equation Solver
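The distinction between an identity and an equation, and the caution about non-injective operations, can also be checked directly with a computer algebra system. Here is a minimal sketch assuming Python with the sympy library (not one of the tools listed above); the example equations are the ones used in this article.

```python
from sympy import symbols, Eq, solve, simplify, expand

x = symbols('x')

# Identity: x*(x - 1) = x**2 - x holds for every x, so the
# difference between the two sides simplifies to zero.
print(simplify(expand(x*(x - 1)) - (x**2 - x)))   # 0

# Equation: x**2 = x is true only at its roots.
print(solve(Eq(x**2, x), x))                      # [0, 1]

# Applying a non-injective function (squaring) to x = 2 yields
# x**2 = 4, which picks up the extraneous solution x = -2.
print(solve(Eq(x**2, 4), x))                      # [-2, 2]
```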
The way that humanity reacts to climate change may do more damage to many areas of the planet than climate change itself unless we plan properly, an important new study published in Conservation Letters by Conservation International’s Will Turner and a group of other leading scientists has concluded. The paper, Climate change: helping nature survive the human response, looks at efforts to reduce emissions of greenhouse gases and at potential actions people could take to adapt to a changed climate, and assesses the impact these could have on global ecosystems. In particular, it notes that one fifth of the world’s remaining tropical forests lie within 50km of human populations that could be inundated if sea levels rise by 1m. These forests would make attractive sources of fuel-wood, building materials, food, and other key resources, and would be likely to attract a population forced to migrate by rising sea levels. About half of all Alliance for Zero Extinction sites — which contain the last surviving members of certain species — are also in these zones. Dr Turner said: “There are numerous studies looking at the impacts of climate change on biodiversity, but very little time has been taken to consider what our responses to climate change might do to the planet.” The paper notes that efforts to reduce greenhouse gas emissions by constructing dams for hydropower generation can cause substantial damage to key freshwater ecosystems, as well as to the flora and fauna in the flooded valleys. It also notes that the generally bogus concept that biofuels reduce carbon emissions is still being used as a justification for the felling of large swathes of biodiverse tropical forest. The report also reviews studies examining the complex outcomes in historical examples of climate change and environmental degradation, and of humanity’s efforts to adapt to changing circumstances. Migration caused in part by climatic instability in Burkina Faso in the late 20th century, for example, led to a 13 per cent decline in forest cover as areas were cleared for agriculture, and a decline in fish supplies in Ghana may have led to a significant increase in bushmeat hunting. Dr Turner added: “If we don’t take a look at the whole picture, but instead choose to look only at small parts of it, we stand to make poor decisions about how to respond that could do more damage than climate change itself to the planet’s biodiversity and the ecosystem services that help to keep us all alive. “While the tsunami in 2004 was not a climate event, many of the responses that it stimulated are comparable with how people will react to extreme weather events — and the damage that the response to the tsunami did to many of Aceh province’s important ecosystems, as a result of extraction of timber and other building materials and poor choices of locations for building, should be a lesson to us all.” Although the challenge of sustaining biodiversity in the face of climate change seems daunting, the paper notes that we must — and can — rise to the challenge. Turner adds: “Climate change mitigation and adaptation are essential. We have to ensure that these responses do not compromise the biodiversity and ecosystem services upon which societies ultimately depend.
We have to reduce emissions, we have to ensure the stability of food supplies jeopardized by climate change, we have to help people survive severe weather events — but we must plan these things so that we don’t destroy life-sustaining forests, wetlands, and oceans in the process.” The paper concludes that there are many ways of ensuring that the human response to climate change delivers the best possible outcomes for both society and the environment, and notes in particular that maintaining and restoring natural habitats are among the cheapest, safest, and easiest solutions at our disposal to reduce greenhouse-gas emissions and help people adapt to unavoidable changes. Dr Turner said: “Providing a positive environmental outcome is often the best way to ensure the best outcome for people. If we are sensible, we can help people and nature together cope with climate change; if we are not, it will cause suffering for people and serious problems for the environment.” Source: Science Daily
As a species, we need to move from changing the environment to suit our needs to adapting to our environment like most other species on our planet. Unfortunately, in many cases we’ve modified our environment without forethought, and the long-term effects of our behaviour have led to degraded ecosystems with limited natural function. Whether climate change has been man-induced and hastened by our activities or whether it is a natural cyclic event, we need to ensure that the ecosystems that sustain us are in the best shape possible to endure its effects. We can reduce our environmental footprint by conserving water and using less energy. A shift from fossil fuels to sustainable, renewable energy is inevitable; the deciding factor in whether we survive as a species is how quickly we can change. Solar energy is developing rapidly as solar panels increase in efficiency; this renewable energy source is possibly the most accessible type for the average person.
There are many places on Earth that have not yet been explored by man. Volcanoes, for instance, have been researched in depth, but no one has ever been inside an active volcano. NASA’s Jet Propulsion Laboratory (JPL) is best known for its outer space endeavours, but its scientists have now developed a new robot capable of exploring some of the most inaccessible areas of our own planet. Researchers at JPL announced recently that they have begun testing VolcanoBot 1 in Hawaii, sending the tiny robot into inactive fissures on Kilauea (which is still an active volcano). “We don’t know exactly how volcanoes erupt. We have models but they are all very simplified. This project aims to help make those models more realistic,” JPL postdoctoral fellow Carolyn Parcheta said in a statement. The team hopes that by exploring the underground fissures they will be able to compile a 3D map of each fissure, something that could only be estimated before. VolcanoBot 1 is a two-wheeled robot, about a foot long and just under seven inches tall. It is capable of sending back information about the now-empty fissures. On its first trip down into Kilauea, it went 82 feet below the surface, but researchers hope it will travel further. Mapping out these fissures will help scientists understand how magma travels to the surface and how eruptions occur. The smaller and more advanced VolcanoBot 2 is scheduled to tackle the same volcano in March of this year. “In order to eventually understand how to predict eruptions and conduct hazard assessments, we need to understand how the magma is coming out of the ground,” says Parcheta. “This is the first time we have been able to measure it directly, from the inside, to centimeter-scale accuracy.” NASA is taking part in this research because it could eventually lead to specialized robots designed to explore fissures on Mars, Jupiter’s moon Europa, or our own Moon. Dropping these miniature robots down crevices inaccessible to humans is a way of gathering important information about volcanoes on and off Earth.
The Scientific Method
Long before the world accepted cold science and scientific investigation, issues and “truths” were determined through arguments. Sadly, arguments give no assurance of the truthfulness of a belief. One must have proof before jumping to any conclusion, and proof is derived only after careful observation and disciplined experiments are performed. This process is called the scientific method.
What’s the scientific method? The “scientific method” is not really something new, because we apply it every day, although we may not be conscious of it. Instinctively, we solve problems systematically. The scientific method is the manner of organizing our thoughts to help us find the most effective and efficient answer to our problem.
Steps in the Scientific Method
How do I use the scientific method? The scientific method is a series of steps that you need to follow when you’re solving a problem. In this blog, we present a simple five-step scientific method that may help you solve your problems.
Problem: First, you should know the problem you want to solve. Determine your purpose for using the scientific method. It is easier to come up with the answer if one thoroughly understands the problem.
Data: What information is needed to solve your problem? You need to have some observations regarding the problem you are trying to solve. You also need to formulate questions that may lead you to more important details about the problem.
Hypothesis: An educated guess, the possible answer. This would answer the question: “What are the possible ways to solve my problem?” You may come up with more than one hypothesis. A hypothesis comprises an independent and a dependent variable. The independent variable shows what you are going to do to solve the problem, and the dependent variable shows the possible outcome of what you will do. Thus, your hypothesis should be testable. Remember that a hypothesis is different from a theory. A hypothesis is often known as a “guess” or a “working assumption.” A theory, on the other hand, is a conceptual framework that explains existing observations and predicts new ones.
Experiments: This is the act of testing your hypothesis. This process will show you which hypothesis corresponds to the right answer to your problem. This might also include the procedures you have followed in your experiments. This will be the key to coming up with the right answer to your problem.
Conclusion: What do the results mean? Was one of your hypotheses correct? You can make this as short as you want. A short conclusion is easier to understand; hence, the shorter the conclusion, the better.
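To make the five steps concrete, here is a toy sketch in Python. The scenario, names, and numbers are all invented for illustration: it records a problem, observations, a testable hypothesis with an independent variable (fertilizer) and a dependent variable (plant height), experiment results, and a conclusion.

```python
from dataclasses import dataclass, field

@dataclass
class Investigation:
    """One record holding the five steps described in this post."""
    problem: str
    data: list = field(default_factory=list)    # observations
    hypothesis: str = ""                        # testable guess
    results: dict = field(default_factory=dict) # experiment data
    conclusion: str = ""

study = Investigation(problem="Do my plants grow taller with fertilizer?")
study.data = ["plants on the same windowsill grow to different heights"]
study.hypothesis = "IF plants are given fertilizer THEN they grow taller"
study.results = {                               # heights in cm (made up)
    "with fertilizer": [12.0, 13.5, 12.8],
    "without fertilizer": [10.1, 9.8, 10.5],
}
means = {group: sum(h) / len(h) for group, h in study.results.items()}
study.conclusion = ("hypothesis supported"
                    if means["with fertilizer"] > means["without fertilizer"]
                    else "hypothesis not supported")
print(means, "->", study.conclusion)
```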
Asthma is a chronic, or long-term, lung disease that inflames and narrows the airways. Asthma causes recurring periods of wheezing (a whistling sound when you breathe), chest tightness, shortness of breath, and coughing. The coughing often occurs at night or early in the morning. Asthma affects people of all ages, but it most often starts during childhood. To understand asthma, it helps to know how the airways work. The airways are tubes that carry air into and out of your lungs. People suffering from asthma have inflamed airways. The inflammation makes the airways swollen and very sensitive. The airways tend to react strongly to certain inhaled substances, exercise, or cold air. When the airways react, the muscles around them tighten. This narrows the airways, causing less air to flow into the lungs. The swelling also can worsen, making the airways even narrower. Cells in the airways might make more mucus than usual. Mucus is a sticky, thick liquid that can further narrow the airways. This chain reaction can result in asthma symptoms. Symptoms can happen each time the airways are inflamed. Sometimes asthma symptoms are mild and go away on their own or after minimal treatment with asthma medicine. Other times, symptoms continue to get worse. When symptoms get more intense and/or more symptoms occur, you’re having an asthma attack. Asthma attacks also are called flare-ups or exacerbations. Treating symptoms when you first notice them is important. This will help prevent the symptoms from worsening and causing a severe asthma attack. Severe asthma attacks may require emergency care, and they can lead to death. Asthma has no cure. Even when you feel fine, you still have the disease and it can flare up at any time. However, with today’s knowledge and treatments, most people who have asthma can keep the disease completely controlled so that they can live a normal life, including participation in sports and other physical activities. If you have asthma, you can take an active role in managing the disease. For successful, thorough, and ongoing treatment, build strong partnerships with your doctor and other health care providers.
Slope-stability or mass-movement problems occur where sediment, rock, and/or snow move downslope in response to gravity. Potential slope-stability problems exist wherever development has taken place at the base of steep slopes.
Rockfall at Upper Island Cove: An eight-tonne block of stone sits on top of a car parked beside a house after falling from the top of a 100 m slope. Reproduced by permission of the Government of Newfoundland and Labrador © 1999.
Downslope movement is a natural process, but can be accentuated by undercutting of the base of slopes, clearance of stabilizing vegetation, or diversion of natural drainage. Types of downslope movement include landslide, avalanche, rockfall, rock slip, and rotational slump. The first three are rapid events, and generally the most dangerous to life and property.
Mass Movement Types: Some main types of mass movement. Variations in water content and rates of movement produce a variety of forms.
A rockfall is simply a volume of rock made up of individual pieces that fall independently through the air and hit a surface. A debris avalanche is a mass of falling and tumbling rock, debris, and soil. It is differentiated from a slower landslide by the tremendous velocity of the onrushing material. The extreme danger of a debris avalanche results from its high speed and consequent lack of warning. A landslide is a sudden, rapid movement of a cohesive mass of material (soil, rock, etc.) that is not saturated with moisture. It involves a large amount of material failing simultaneously. A common type of slide is the rotational slide or slump, which occurs when surface material moves along a concave surface. Frequently water is present along this movement plane and acts as a lubricant. The simplest form of rotational slump is when a small block of land shifts downward. The upper surface of the slide appears to rotate backward and often remains intact. When the moisture content of moving material is high, the term flow is used. Flows include earthflows and more fluid mudflows.
Adapted by Tina Riche, 2000.
Here on Earth we have access to one of the most plentiful sources of power known to man – the sun. But how does solar power work? Around 70% of the light from the sun makes its way to Earth – the rest is reflected back into space. We’ve now learned to convert this energy into electricity using solar photovoltaic (or solar PV) panels. Solar panels are made up of many solar ‘cells’. Each cell contains two thin layers of silicon, one on top of the other. The top layer has been treated so that its atoms are unstable, with too many electrons, while the bottom layer has been treated so that it has too few electrons, leaving plenty of space for electrons to move into. Only when the cells are exposed to sunlight do electrons move from the top layer to the bottom layer. Once light hits the top layer, the electrons become ‘excited’ and start moving towards the bottom layer. Once the electrons are moving in the same direction, electricity is created, and this is harnessed by two metal contacts placed on either side of the silicon, which creates a circuit. The electricity created by solar power is direct current, whereas the electricity we use on a day-to-day basis is alternating current, so before the solar energy can be used it must first go through an inverter. After this, the energy can be used in the same way as power supplied by electricity companies. Solar energy can be harnessed in lots of different ways – from small solar panel installations on top of domestic roofs to massive ‘solar gardens’ in fields and deserts. The best news is that there’s plenty of power to go around. In 2002, the Earth absorbed more energy from the sun in a single hour than the world used in one year.
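As a rough back-of-the-envelope companion to the description above, the Python sketch below estimates the daily energy from a PV array. The 1 kW/m^2 “full sun” irradiance, the 20% cell efficiency, the five peak sun hours, and the 95% inverter efficiency are all illustrative assumptions, not figures from this article.

```python
def daily_panel_energy_kwh(area_m2: float, cell_efficiency: float,
                           peak_sun_hours: float,
                           inverter_efficiency: float = 0.95) -> float:
    """Rough daily AC energy (kWh) from a solar PV array.

    Assumes the standard 1 kW/m^2 'full sun' irradiance folded into
    peak_sun_hours, and lumps all DC-to-AC losses into a single
    inverter_efficiency factor.
    """
    FULL_SUN_KW_PER_M2 = 1.0
    dc_kwh = area_m2 * FULL_SUN_KW_PER_M2 * cell_efficiency * peak_sun_hours
    return dc_kwh * inverter_efficiency  # the inverter converts DC to AC

# e.g. 10 m^2 of 20%-efficient panels with 5 peak sun hours per day:
print(f"{daily_panel_energy_kwh(10, 0.20, 5):.1f} kWh/day")  # about 9.5
```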
Historically, there is no aspect of the Aztec culture that shocks us more than human sacrifice. As hard as it is for us to imagine it, imagine how the first conquistadors felt when they witnessed it for the first time. It left them completely horrified, and then some of their own men met the same fate. To their horror, they would discover their fellow Spaniards’ grinning heads displayed on a tzompantli (a rack of horizontal poles). The Aztec citizen’s first duty was to provide nourishment for the mother and father, the earth and sun. They believed that to keep the sun on its course, and prevent darkness from overtaking the world forever, it was necessary every day to feed it its precious “chalchiuatl”, or human blood. Without blood the world would stop. Each time a priest stood on top of a pyramid, held up a bleeding heart, and placed it in the “quauhxixalli” (where the hearts were burned with copal incense), a disaster was thwarted and the end of the world was postponed once more. The Aztecs believed that life was made out of death. The sacrifice of humans was not inspired by cruelty or hatred. Instead, it was their response to a constantly threatening world. The victim was not thought of as an enemy, but as a messenger to the gods. The warrior who had taken the victim captive knew that one day he too would face the same destiny. Human sacrifice is believed to have been introduced to the Aztecs by the Toltecs. It was performed almost exactly the same way by the Mayans. First, conch horns sounded and the victim was led to the top of the pyramid. Four priests held the victim still as he lay stretched out on his back on a flat or convex stone. A fifth priest, dressed in a black robe with long black hair, cut out the heart with a flint or obsidian knife. Occasionally, victims were allowed to fight for their freedom, and if one defeated several Aztec warriors he was allowed to live. Sadly, this almost never happened. These special warriors wore their own unique costumes, and they were well respected. This was not the only type of sacrifice. Women were sacrificed during dances, children were sacrificed to the rain god Tlaloc, and victims were burned in fires in honor of the fire god. Also, like the Mayans, the Aztecs would dress a victim up to look like their god of choice, attach him to a wooden frame, and shoot him full of arrows. When the Aztecs practiced cannibalism after the rites, they believed that they were eating the god’s own flesh.
Polly Cooper cooked up strength and perseverance for the American soldiers. Little is known about Polly Cooper’s childhood, but we know for sure that she was born into the Oneida tribe. Polly Cooper contributed to the war by helping George Washington’s sick, starving, and suffering soldiers. Cooper has been called a mother of this country. Chief Shenendoah sent Polly Cooper with forty warriors to deliver 600 baskets of corn to the starving soldiers at Valley Forge. Cooper decided to stay at Valley Forge when she saw how much the troops were suffering. She made special medicine for the sick and wounded. She taught the soldiers how to cook the corn, and she cooked for them as well. Cooper also brought water to the soldiers as they were fighting, because most of them were dehydrated. She must have been very brave and caring to do that. Polly Cooper’s contribution to the war may have changed people’s perspective of American Indians. People back then thought American Indians weren’t civilized. After they saw her teaching the soldiers how to cook and making special medicine, people may have realized that American Indians were more civilized than they had thought. She believed that all men have mothers, and mothers didn’t send their sons out to kill other mothers’ sons. Cooper would not accept money as payment for taking care of the soldiers. Instead, the officers’ wives took Cooper on a walk downtown, where she saw a black shawl and thought it was beautiful. The officers bought it for her, and it is now known as Polly Cooper’s shawl. Polly Cooper sacrificed the company of her friends and family in her tribe for a few years to help George Washington’s soldiers.
DNA is a (bio)chemical way of storing genetic (biological) information. It is basically a long, stretched-out molecule consisting of modules (nucleotides, or as they're often called, bases). There are only four modules in a DNA molecule, and they are the language in which the information is stored. The four nucleotides of which the DNA strand is composed are Adenine, Cytosine, Guanine, and Thymine. The DNA molecule really consists of two strands of this long molecule, twisted into a helical shape. If on the one strand there is an Adenine (A) at a particular position, the other strand always has a Thymine (T) at that very same position, across from the A. The A and the T are chemically bonded to each other by 2 hydrogen bonds. In the same way, Cytosine (C) and Guanine (G) always pair up using 3 hydrogen bonds. Because A always bonds with T, and C always with G, one only needs to know the sequence of the nucleotides on the one strand to know exactly what the sequence of nucleotides on the other strand will be. What I will refer to as a gene later on is simply a stretch of DNA, with a start and an end. As cells divide, they need a duplicate of their genetic information, their DNA. To do this, the DNA helix is separated and a protein complex reads the information on the one strand to produce the other strand (this is called the complementary strand). Since the same is done to both of the original strands, after DNA replication is complete there are two copies of the original DNA molecule(s). After all the DNA in a cell is replicated, the cell can divide in two, with each half getting a complete set of DNA. Transcription of DNA refers to the production of a molecule called RNA, which is similar to DNA but different in a few respects. Only one strand of RNA is made at a time, instead of the 2 strands made during DNA replication. RNA does not have any Thymine, but uses Uracil (U) instead. And lastly, each RNA nucleotide has a slightly different chemical make-up than a DNA nucleotide: it is made up of ribose sugars, instead of deoxyribose sugars like DNA. Reverse transcription is the process by which an RNA molecule is read and converted into DNA. Retroviruses use this method to convert their genetic information, which is in the form of RNA, into DNA. The DNA can then be inserted into the genome of the host. Handy tool, isn't it? During the process called translation, a particular kind of RNA molecule (messenger RNA, mRNA) is read by a large protein complex called the ribosome. The ribosome "reads" the sequence of nucleotides and stitches together strings of amino acids, one for every three nucleotides it has read. The amino acid strings are called peptides, or proteins, and fold up into complex three-dimensional structures. These proteins are the big workers in the cells. They carry out all chemical reactions, including the processes described above of replication, transcription, and translation. Proteins are the workhorses of the cell.
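The base-pairing, transcription, and translation rules described above are mechanical enough to sketch in a few lines of Python. This is only a cartoon: the codon table below contains just four entries (real cells use all 64 codons), and the sequence is made up for the example.

```python
# Pairing rules from the text: A<->T and C<->G (RNA uses U in place of T).
DNA_COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the strand paired across from the given DNA strand."""
    return "".join(DNA_COMPLEMENT[base] for base in strand)

def transcribe(template: str) -> str:
    """Build RNA complementary to a DNA template, writing U for T."""
    return complement(template).replace("T", "U")

# A tiny, incomplete codon table -- just enough for this demo.
CODONS = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def translate(mrna: str) -> list:
    """Read the mRNA three bases at a time, as the ribosome does."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODONS.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

template = "TACAAACCGATT"     # a made-up DNA template strand
mrna = transcribe(template)   # -> "AUGUUUGGCUAA"
print(mrna, translate(mrna))  # -> AUGUUUGGCUAA ['Met', 'Phe', 'Gly']
```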
THE LIVING WORLD
Unit Three. The Continuity of Life
The attraction that holds the two DNA strands together is the formation of weak hydrogen bonds between the bases that face each other from the two strands. That is why A pairs with T and not C; A can only form hydrogen bonds with T. Similarly, G can form hydrogen bonds with C but not T. In the Watson-Crick model of DNA, the two strands of the double helix are said to be complementary to each other. One chain of the helix can have any sequence of bases, of A, T, G, and C, but this sequence completely determines that of its partner in the helix. If the sequence of one chain is ATTGCAT, the sequence of its partner in the double helix must be TAACGTA. Each chain in the helix is a complementary mirror image of the other. This complementarity makes it possible for the DNA molecule to copy itself during cell division in a very direct manner. But there are three possible alternatives as to how the DNA could serve as a template for the assembly of new DNA molecules.
First, the two strands of the double helix could separate and serve as templates for the assembly of two new strands by base pairing A with T and G with C. This is what happens in figure 11.5a, with the original strands colored blue and the newly formed strands red. After replicating, the original strands rejoin, preserving the original DNA molecule and forming an entirely new one. This is called conservative replication.
In the second alternative, the double helix need only "unzip" and assemble a new complementary chain along each single strand. This form of DNA replication is called semiconservative replication because, while the sequence of the original duplex is conserved after one round of replication, the duplex itself is not. Instead, each strand of the duplex becomes part of another duplex. You can see in figure 11.5b that the blue strand is from the original helix and the red strand is newly formed.
In the third alternative, called dispersive replication, the original DNA would serve as a template for the formation of new DNA strands, but the new and old DNA would be dispersed among the two daughter strands. As shown in figure 11.5c, each daughter strand is made up of sections of original (blue) strands and new (red) strands.
Figure 11.5. Alternative mechanisms of DNA replication.
The three alternative hypotheses of DNA replication were tested in 1958 by Matthew Meselson and Franklin Stahl of the California Institute of Technology. These two scientists grew bacteria in a medium containing the heavy isotope of nitrogen, 15N, which became incorporated into the bases of the bacterial DNA (the upper petri dish in figure 11.6). After several generations, samples were taken from this culture and grown in a medium containing the normal, lighter isotope 14N, which became incorporated into the newly replicating DNA. Bacterial samples were taken from the 14N medium at 20-minute intervals (samples 2 through 4). DNA was extracted from all three samples and from a fourth sample, 1, that served as a control.
Figure 11.6. The Meselson-Stahl experiment. Bacterial cells were grown for several generations in a medium containing a heavy isotope of nitrogen (15N) and then were transferred to a new medium containing the normal, lighter isotope (14N). (The bacteria shown here are not drawn to scale, as tens of thousands of bacterial cells grow on even a tiny portion of a plate in culture.)
At various times thereafter, samples of the bacteria were collected, and their DNA was dissolved in a solution of cesium chloride, which was spun rapidly in a centrifuge. The labeled and unlabeled DNA settled in different areas of the tube because they differed in weight. The DNA with two heavy strands settled down toward the bottom of the tube. The DNA with two light strands settled higher up in the tube. The DNA with one heavy and one light strand settled in between the other two.
By dissolving the DNA they had collected in a heavy salt called cesium chloride, and then spinning the solution at very high speeds in an ultracentrifuge, Meselson and Stahl were able to separate DNA strands of different densities. The centrifugal forces caused the cesium ions to migrate toward the bottom of the centrifuge tube, creating a gradient of cesium concentration, and thus a gradation of density. Each DNA strand floats or sinks in the gradient until it reaches the position where its density exactly matches the density of the cesium there. Because 15N strands are denser than 14N strands, they migrate farther down the tube to a denser region of cesium. The DNA collected immediately after the transfer was all dense, as shown in test tube 2. However, after the bacteria completed their first round of DNA replication in the 14N medium, the density of their DNA had decreased to a value intermediate between 14N-DNA and 15N-DNA, as shown in test tube 3. After the second round of replication, two density classes of DNA were observed, one intermediate and one equal to that of 14N-DNA, as shown in test tube 4.
Meselson and Stahl interpreted their results as follows: after the first round of replication, each daughter DNA duplex was a hybrid possessing one of the heavy strands of the parent molecule and one light strand; when this hybrid duplex replicated, it contributed one heavy strand to form another hybrid duplex and one light strand to form a light duplex. Thus, this experiment clearly ruled out conservative and dispersive DNA replication, and confirmed the prediction of the Watson-Crick model that DNA replicates in a semiconservative manner.
How DNA Copies Itself
The copying of DNA before cell division is called DNA replication. This process is overseen by six proteins in prokaryotes (in eukaryotes, some of the enzymes are different). These proteins coordinate the unwinding of the DNA duplex and the assembly of new complementary DNA strands by the addition of nucleotides to existing strands (figure 11.7). Here is how the process works.
Figure 11.7. How nucleotides are added in DNA replication.
In a nucleotide, the phosphate group is attached to the 5' carbon atom of the sugar, and an OH group is attached to the 3' carbon atom. So, on a DNA strand, there will be a 5' phosphate at one end of the chain and a 3' OH at the other end. In the DNA double helix, the two strands of nucleotides pair up in opposite orientations, with one strand running 5' to 3' and the other running 3' to 5'. When nucleotides are added to a growing strand of DNA by the enzyme DNA polymerase III, the first phosphate group of the incoming nucleotide attaches to the OH group on the end nucleotide of the existing strand. Before DNA replication can begin, an enzyme called helicase separates and unwinds the strands of the parental DNA. Single-strand binding proteins stabilize the single-stranded regions of DNA before they are replicated. Helicase moves up the DNA helix, unwinding as it goes.
After the parental DNA duplex is unwound, an enzyme complex called DNA polymerase III can add nucleotides complementary to the exposed template strand of DNA. But this enzyme cannot begin a new strand; it can only add nucleotides to an existing strand. Thus, before a new strand of DNA can be built, an enzyme called primase must synthesize a short section of joined RNA nucleotides, called a primer, complementary to the single-stranded template. DNA polymerase III can then add onto the primer and assemble a complementary new strand of DNA on each old strand. One of the new strands of DNA is called the leading strand; it is the one that is extended in a 5' to 3' direction toward the replication fork. DNA polymerase III builds the leading strand by adding nucleotides to the 3' end of the strand: the 5' phosphate end of a nucleotide to be added attaches to the 3' sugar end of the existing strand. DNA polymerase III moves toward the replication fork and builds the leading strand as one continuous strand. Before the leading strand can be completed, another polymerase enzyme called DNA polymerase I removes the RNA primer and fills in the gap with DNA nucleotides. The newly synthesized hybrid DNA can then rewind into a helix.
Because DNA polymerase III can only assemble new strands in the 5' to 3' direction, the other strand, called the lagging strand, is assembled in short 5' to 3' segments, moving away from the replication fork. Each lagging strand segment begins with an RNA primer, and then DNA polymerase III builds away from the replication fork until it encounters the previously synthesized section. These short stretches of newly synthesized DNA on the lagging strand are called Okazaki fragments. As the helix opens further, a new RNA primer is added, and DNA polymerase III must release the template it has completed and begin with the "new" template. The Okazaki fragments are then linked when DNA polymerase I removes the RNA primers and an enzyme called DNA ligase joins the ends of the newly synthesized segments of DNA. The entire lagging strand can only be replicated in this discontinuous fashion.
Eukaryotic chromosomes each contain a single, very long molecule of DNA (figure 11.8), one far too long to copy all the way from one end to the other with a single replication fork. Each eukaryotic chromosome is instead copied in sections of about 100,000 nucleotides, each with its own replication origin and fork.
Figure 11.8. DNA of a single chromosome. This chromosome has been relieved of most of its packaging proteins, leaving the DNA in its extended form. The residual protein scaffolding appears as the dark material on the left side of the micrograph.
The enormous amount of DNA that resides within the cells of your body represents a long series of DNA replications, starting with the DNA of a single cell—the fertilized egg. Living cells have evolved many mechanisms to avoid errors during DNA replication and to preserve the DNA from damage. These mechanisms of DNA repair proofread the strands of each daughter cell against one another for accuracy and correct any mistakes. But the proofreading is not perfect. If it were, no mistakes such as mutations would occur, no variation in gene sequence would result, and evolution would come to a halt. Mutation will be discussed in more detail in the next section and in chapter 14.
Key Learning Outcome 11.4. The basis for the great accuracy of DNA replication is complementarity.
DNA's two strands are complementary mirror images of each other, so either one can be used as a template to reconstruct the other.
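Because replication is semiconservative, the fate of the heavy and light strands in the Meselson-Stahl experiment can be simulated directly. The short Python sketch below is a schematic of the textbook's account, not of any real laboratory protocol: each duplex is just a pair of strand labels ('H' for a heavy 15N strand, 'L' for a light 14N strand), and each round of replication keeps every old strand and pairs it with a new light strand.

```python
from collections import Counter

def replicate(duplexes):
    """Semiconservative replication: every old strand is conserved
    and paired with a newly synthesized light (14N) strand."""
    return [(old_strand, "L") for duplex in duplexes for old_strand in duplex]

def density_class(duplex):
    """Classify a duplex the way the cesium chloride gradient does."""
    if duplex == ("H", "H"):
        return "heavy"
    return "hybrid" if "H" in duplex else "light"

# Generation 0: grown on 15N, so every duplex has two heavy strands.
population = [("H", "H")] * 4
for generation in range(3):
    counts = Counter(density_class(d) for d in population)
    print(f"generation {generation}: {dict(counts)}")
    population = replicate(population)
```

Running this prints all heavy DNA at generation 0 (test tube 2), all hybrid DNA after one round (test tube 3), and a half-hybrid, half-light mix after two rounds (test tube 4), exactly the banding pattern Meselson and Stahl observed.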