Stop, Look, Listen
Lesson 7 of 10
Objective: Students will be able to understand and explain the importance of effective listening through obtaining information, evaluating the task and communicating the results.
Gather students on the rug using a preferred classroom management technique. I like to use my “Stop, look, listen.” The students stop what they are doing, look at me and listen for the direction. I usually preface the direction with, “When I say go…” This reminds the students to listen to the whole direction before moving to follow the directive.
In this case I would say, “When I say go I would like you to clear your space, push in your chair and go take a spot on your dot. Walking feet go.”
By saying “walking feet” I am reminding the students to use walking feet in the classroom to ensure safe movement between areas.
When all of the students are seated on their dot in the rug area I tell them I am going to read them a story about a rabbit that needs to learn how to listen.
“Team 203, I am going to read you a story about a rabbit who does not listen very well. He needs to learn how to listen for two very important reasons. One so he is safe from danger and two so he knows what to do and when to do it. The story is called Listen Buddy and it is written by Helen Lester and illustrated by Lynn Munsinger.”
Now I go ahead and read the story to the students.
I use this funny story to engage my students’ attention. The story helps the students see the importance of listening and this will help them when I begin our discussion in the activity part of the lesson.
Once the story is over I ask the students, “Can anyone tell me one way Buddy could have saved himself the trouble of accidentally going to Scruffy Varmint’s house?”
I select a student who is following the correct classroom protocol of raising their hand.
“Well done Rachel; Buddy should have listened closely to directions.”
“Team 203, there is a difference between hearing and listening. Hearing is when you hear a sound like a voice. Listening is when you try to figure out what that voice is telling you.”
“Can anyone tell me why they think good scientists might need to listen carefully to directions?”
Once again I select a student who is raising their hand.
“Colin that is a good reason; scientists should listen carefully so they do not have accidents. What kind of accidents might a scientist have?”
I point to students who raise their hand to share their idea.
“Okay, hands down, you all shared some good ideas. Spilling, burning, making explosions and breaking glass are all accidents that can occur in a science lab.”
“Scientists not only need to listen carefully to prevent accidents, but they also need to listen carefully so they can follow experiments in the correct order and find items needed for those experiments.”
“Today for one of your work stations I am going to test how well you listen by taking you into the garden and giving you directions to find items based on their attributes.”
“Does anyone know what an attribute is?”
If any students raise their hand to respond I will select them, but that very seldom happens.
“An attribute is a specific characteristic of something or someone. For example, think back to when we did our facial features. Some of my attributes are blue eyes, brown hair and attached ear lobes. The attributes of this chair I am sitting on are shiny metal, blue plastic and small.”
“Raise your hand if you can give me an attribute of this plastic shape I am holding up.”
I select a student to respond.
“Great attribute Sebastian; this block is red. Can someone give me another attribute?”
I repeat the process until we have covered as many attributes as we can.
“Now that you all know what attributes are, we are going to see just how well you can listen. At your work stations today you will be working on answering the question, ‘How well do I listen?’”
Now I send the students over to the integrated work stations one table group at a time to maintain a safe and orderly classroom. It usually sounds like this:
“Table number one go get ready to have some following directions fun.
Table number two, you know what to do.
Table number three, hope you were listening to me, and
Table number four, you shouldn’t be here anymore.”
Once I have my group lined up ready to go outside into the garden I remind them we are going out to work not play.
“Group number one, I would like to remind you that we are going out into the garden to do a job. We are not going to play, we are going to work. This is your first test to see how well you listen. I will know you have listened when I see you working in the garden doing the right thing.”
I try to say only the behaviors I want to see because those are the words that will stick in the students’ brains – listen, work, garden, right thing.
When we reach the garden I get started right away so the students do not have time to lose focus.
“Group one, I want you to listen closely to this set of directions. When I say “go,” take 10 steps straight forward, crouch down, and pick up something small, grey and hard. When you have the item, bring it to me and we will check if it is right. Walking feet go.”
The student should come back with a stone from the garden path. I start off with an easy one because I want the students to experience success and develop confidence in their listening capabilities. With each success I increase the difficulty.
After 10 minutes of the students finding items I tell them, “I would like you to record one of your items in your science journal which I have brought out with me. You will need to sketch it quickly and accurately. Label it and give one attribute. You have 8 minutes so I want you to focus and get your work done.”
I hand out the journals as I speak. The science journals I use are composition books. I like the composition books because they have a hard cover, which provides the students with a firm surface to press against when writing/recording. Using the composition books as science journals means that I do not have to bring out clipboards, which would be another thing I would have to worry about and switch out between groups; the less I have to deal with the better. Another advantage to using a composition book as a science journal is that the students’ work is all in one place and I can easily grab it for assessment purposes. The students also like to compare their work from the beginning of the year to the end. As a rule I prefer to just take out pencils for the students to record with, and they can add color details inside just before we switch stations.
Lower performing students may have difficulty labeling so I assist them by either acting as a scribe or sounding out with them.
Allow the students 18 minutes to work on this activity. After 18 minutes are up, the timer goes off and the students clean up ready to switch stations.
I set the visual timer and remind the students to look at it so they can use their time wisely.
In this activity the students are exploring how to receive information, evaluate the important pieces of information and communicate that they listened by locating the correct item.
At another station the students will play “Hide and Go Listen.” This game has the students focus on and become aware of their sense of hearing. Have the students sit quietly for one minute. At the end of one minute ask them, “What did you hear?” “Could you hear things we cannot see?” “Do you think you could find the things you could not see by using your ears?”
Select a student to be the “seeker.” Have that student hide his or her eyes. Give the other students a small musical instrument – bells for one, an egg shaker for another, sandpaper blocks for another, and rhythm sticks for another. Tell those students to hide. Have the seeker open his/her eyes and tell the hidden students to begin playing their instruments. The “seeker” must find all of the other students based on sound alone.
When all of the hidden students are found, ask the “seeker” which was the easiest sound to hear and if that was the person he/she found first (science – anatomy).
At another work station the students play an attribute guessing game called “Guess my Block.” One student selects an attribute block from a container of attribute blocks in the center of the table while the others have their eyes closed. The one student says, “I have a shape.” All of the other students select one shape from the container. The one student says, “I have a red shape.” All of the other students can make a switch if the shape they have is not red. The one student says, “I have a red thin shape.” Once again all of the other students can make a switch if they need to. Play continues on this way until the entire group has the same shape as the one in the selector’s hand. This game needs adult assistance to begin with, but the students get better over time (ELA and math).
At another work station the students are sorting buttons onto plates labeled with different attributes. This gives the students practice at recognizing different attributes of a singular item – buttons (math). I use the attribute cards from the Making Learning Fun website. A parent volunteer can read the cards for the students and they listen closely to what they say to make sure they find the right buttons.
These activities provide the students with the opportunity to apply and expand their understanding of the concepts within new contexts and situations thus elaborating on the information they have been presented with.
When the time is up I blow two short blasts on my whistle and use the “Stop, look, listen” technique mentioned above.
“When I say go, I would like you to clean up your space remembering to take care of our things, push in your chair and take a spot on your dot.”
Once the students are seated I tell them that their exit slip for today is to share with us one item they found in the garden and one attribute it has.
“For today’s exit ticket you need to share with us one item you found in the garden. Show us what you drew; tell us what the item is and one attribute it had that helped you find it. For example, in my journal I drew a tomato. One attribute I used to find it was the color.”
"This kind of work is exactly what scientists do when they go out into the field to collect data. The scientists go out, collect data or evidence, and then bring their work back to the science lab to share their results with other scientists."
“When you have shared your item with us you may use the hand sanitizer and get your snack.”
I use the Fair Sticks to determine the order of the students.
If a student is unable to give me an answer, they know they can do one of two things.
- They can ask a friend to help, or
- They can wait until everyone else has gone and then we will work on an attribute together.
I use this exit ticket process as a way for the students to analyze what they know about the importance of listening to directions and explain to me how they used my attribute clues to locate different items in the garden. During integrated work station time they experienced different activities which required them to listen closely to directions and clues, so they should be able to explain one attribute. This quick assessment process allows me to see whether a student can transfer information learned in one format to another format.
In order to assess if my students have successfully understood and retained the information presented in the lesson, I evaluate each student’s performance based on the item they bring me while searching in the garden. I take a photo of what they brought me and then video them explaining why they brought me this particular item. Each student’s explanation is the most important part because it shows me how well he or she listened to the directions and attributes given. Using the iPad makes this process easier because I can use an app such as Teachers Notes to record the image and video under the student’s name. When it comes to report card writing time I can open the student’s file and see all of the notes I have recorded. I can also use these recordings as evidence of learning, or in some cases lack of learning, during parent-teacher conferences, IEP and PST meetings.
The student’s explanation will also determine what kind of directive I can give them next. Some students may be able to handle many directions at once and some may still need step-by-step directions. Knowing my students’ listening abilities gives me the information I need to group my students into effective working teams.

---
Practice the worksheet on yesterday, today and tomorrow. The questions are based on the sequence of the weekdays, their names and their order.
We know that the present day is today, the day before today is yesterday and the day after today is tomorrow.
1. How many days are there in a week?
2. Name all the days of a week.
3. Which is the first day of the week?
4. Which is the last day of the week?
5. Which day comes after Wednesday?
6. What is the name of the third day of the week?
7. Fill in the blanks:
(i) __________ comes before Tuesday.
(ii) __________ comes before Friday.
(iii) Saturday comes after __________.
(iv) Thursday comes after __________.
(v) __________ comes after Saturday.
8. (i) Today is Monday. Which day was yesterday?
(ii) Today is Tuesday. Which day is tomorrow?
9. (i) Today is Friday. Which day is tomorrow?
(ii) Today is Wednesday. Which day was yesterday?
10. Write the names of days being yesterday and tomorrow of:
Yesterday Day Tomorrow
__________ Saturday __________
__________ Sunday __________
__________ Tuesday __________
__________ Thursday __________
__________ Monday __________
__________ Wednesday __________
__________ Friday __________
11. If today is Monday, yesterday was Sunday. Which day will it be tomorrow?
12. (i) Two days just before Friday are __________ and __________.
(ii) Two days just after Monday are __________ and __________.
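The yesterday/today/tomorrow questions above all come down to stepping one day backward or forward in a repeating 7-day cycle, which is simple modular arithmetic. Here is a minimal sketch of that idea (the `DAYS` list and helper names are just for illustration; the week is taken to start on Sunday, as in this worksheet):

```python
# Yesterday/tomorrow as single steps in a repeating 7-day cycle.
# The week starts on Sunday, matching this worksheet's ordering.
DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

def yesterday(day):
    # One step back; % 7 wraps Sunday around to Saturday.
    return DAYS[(DAYS.index(day) - 1) % 7]

def tomorrow(day):
    # One step forward; % 7 wraps Saturday around to Sunday.
    return DAYS[(DAYS.index(day) + 1) % 7]

print(yesterday("Monday"))  # Sunday (question 8 (i))
print(tomorrow("Friday"))   # Saturday (question 9 (i))
```

The same two helpers reproduce every row of the table in question 10, since each row is just yesterday, the day itself and tomorrow for one starting day.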
Answers for the worksheet on yesterday, today and tomorrow are given below so you can check your answers to the questions above.
1. seven days.
2. Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday
7. (i) Monday
8. (i) Sunday
9. (i) Saturday
10. Yesterday Day Tomorrow
Friday Saturday Sunday
Saturday Sunday Monday
Monday Tuesday Wednesday
Wednesday Thursday Friday
Sunday Monday Tuesday
Tuesday Wednesday Thursday
Thursday Friday Saturday
12. (i) Wednesday and Thursday
(ii) Tuesday and Wednesday

---
Dragonflies have been around for 300 million years, making them one of the oldest species of insects in the world. Dragonflies have been so successful over the years that the only difference between modern and ancient dragonflies is size. One of the secrets to their success is how they mature. Dragonflies have three stages to their life: egg, nymph and adult. The length of each stage depends on the species of dragonfly. Dragonflies in tropical regions typically spend less time in each stage than dragonflies in temperate regions.
Dragonflies start their life as eggs. After breeding, a female dragonfly selects a likely looking pond or marsh in which to lay her eggs. Dragonfly eggs are only laid in still water, as eggs laid in quickly moving water will wash into fish-feeding areas.
Female dragonflies lay their eggs on submerged aquatic plants, mud banks submerged in water or, if they can't find a better spot, directly in the water. Depending on the species, a female can lay hundreds or thousands of eggs during her lifespan.
In tropical regions dragonfly eggs may hatch in as little as five days. In temperate (areas where winter temperatures drop near or below freezing) regions, dragonfly eggs usually won't hatch until the following spring.
In tropical regions two to three generations of dragonflies may mate and lay eggs each year. In temperate regions usually only one generation mates and lays eggs. For dragonflies living in temperate regions, mating and egg laying typically occurs in mid to late summer.
When dragonflies hatch they are called nymphs. Dragonfly nymphs are voracious predators that have no resemblance to their adult forms.
Dragonfly nymphs moult (shed their skin) up to 12 times, depending on species, and can spend as long as four years as nymphs.
Dragonflies living in tropical regions spend less time in the nymph form while dragonflies living in temperate regions will spend longer as nymphs as the onset of winter delays maturation.
Dragonfly nymphs are aquatic, living in ponds and marshes until emerging to moult for one final time. During the final moulting the nymph's skin splits and the nymph emerges as an adult dragonfly.
Dragonfly nymphs are referred to as hemimetabolous, meaning they don't form a cocoon or pupate before emerging as an adult.
After the final moult from nymph to adult, occurring in late spring or early summer in temperate regions and at any time of the year in tropical regions, most dragonfly species spend the next month fully maturing. Their gonads (sex organs) finish developing, their colour becomes brighter with their final markings emerging and they disperse, sometimes hundreds of miles, from the pond or marsh where they developed.
Adult dragonflies are also voracious predators eating small insects, primarily mosquitoes and flies, which they catch while flying. Dragonflies can hover, fly backwards, forwards and sideways.
Once fully developed, a female dragonfly can mate with several males before she is ready to lay her eggs.
Both female and male dragonflies only live two to four months as adults before dying.
From egg to adult a dragonfly can live for five years before dying. Dragonflies in tropical regions don't live as long as dragonflies in temperate regions. The reason? Dragonflies in temperate regions overwinter as eggs or nymphs for several years before finally emerging as adults.
Dragonflies as nymphs and adults are voracious predators eating anything they can catch, including adult and larval mosquitoes.
Any permanent water feature will attract dragonflies. To encourage dragonflies to lay eggs in your pond, grow reeds and lilies that emerge from the water to give the female a place to perch while laying her eggs.
Fish will eat dragonfly nymphs and eggs. Sectioning off part of the pond from fish will give nymphs a safe place to mature.

---
The math materials, like all other classroom materials, focus first on the concrete and then move toward abstraction. Students first focus on the numbers one to ten, mastering quantity, then the symbol and finally associating the two. A complete comprehension of this first stage is essential as it lays a solid foundation for future work in the decimal system. Students will be exposed to the operations of addition, subtraction, multiplication and division before they leave the Casa program.
Characteristics of mathematics materials:
- Children learn quantity, then symbol and then associate the two
- Abstract math and memorization of facts is a final step in our program
- Children work with the concrete materials
- Math focuses on the process, not the product
Ideas for home:
- Do not teach your child the math facts- they will first experience math through the sensorial materials
- Encourage your child to count everything – cutlery on the table, oranges in the grocery cart, etc.
- Count orally – establish the pattern/sequence of numbers

---
Equal — the same in number, amount, degree, rank or quality. Not changing, the same for every person.
Equality — the state of being equal in political, economic and social rights.
Equity — fairness or justice in the way people are treated. (Merriam-Webster)
I watched with great interest the summer Olympic Games in Rio de Janeiro, Brazil. I noticed how different the starting line positions were for the various track and field races. With the exception of the shorter sprints, where runners started at the same point, in longer races runners had staggered starts. Seeing runners line up staggered, each starting from a different position along the track yet all striving for the finish line, provided a helpful analogy for how one might depict the quest for equity in education.
For races in which each runner has to stay in [their] lane, the semicircles at the two ends of a 400 meter track would normally force outside lane runners to travel a greater distance. With a staggered start, however, every athlete is given an equal chance to win because no single runner has an advantage. It is believed that those in the inside lanes gain an advantage by seeing the rest of the field ahead of them, but this is balanced out by those runners having to run a tighter curve.
This visual metaphor of the staggered start reminds me of the different starting positions students have as they enter into and travel along their educational journey. Like runners assigned various starting positions along a track, students, and those who advocate on their behalf, have to seek equity by making adjustments to their position along the way. Some students show up ready to learn, with more opportunities and support, and an equitable approach does not slow down their race to learn. Rather, they can start at the beginning with other students staggered along the track so that all runners cross the finish line regardless of how they started the race. Just as we celebrate athletes when they receive their medals, crossing the finish line together is impactful for all who participate.
A desire among educators to close achievement gaps and generate uniform student outputs—such as academic performance, standardized test proficiency, improved graduation rates, post-secondary degree attainment, or even the less well defined career-readiness—led to an effort to provide equal resources to all students regardless of need. It is the notion that if we pour equal amounts in, then equal outputs will flow out. The flaw in this theory is that it assumes all students come from the same level starting place.
“Unfortunately, methodological difficulties with respect to the use of and inquiry into alternative models of education and change have contributed to overconfidence in an over commitment to the input-response-output model, and it has moved into realms where it simply is overextended or quite inappropriate” (Goodlad, p. 211).
We must be very careful not to conflate the nature and notion of equality with the necessary embrace of equity. Equity in the context of education has developed a conceptual framework that purports to embrace the possibilities of an education for all students regardless of where they begin schooling, what their needs are or how far they are along the performance continuum. The flexible format that is necessary to get equitable outcomes for all students requires adjustment in the vision, time and commitment offered in support of increasingly diverse student populations. This will enable all students to cross the education finish line even when not all students have the same needs or advantageous starting position.
When educators have had the courage, talent, belief and expectation to choose and deliver on equity inputs for learning, they are recognizing charted directions that have led to proven equality outputs. As many educators and parents can attest, we pour all of our hope, our inputs into children, and expect results in better outputs in addition to the best possible outcomes. And, we know that we have limited time to impact these precious lives. As educators, we must do more than hope. We must work toward, advocate for and do more to build equity into our daily practice and reform models. From the perspective of parents who are the champions of their children, a new mindset must emerge. Uneven inputs will be needed so that all children will continue in the education pipeline because they will have been supported throughout their educational journey in ways that maintain the equity that ensures equality and opportunity for every student we are privileged to serve.
Equitable inputs with respect to education throughout the P-16 pipeline for children might be useful for society when equality outputs are produced. However, when the interest, skills, knowledge and learning are compromised, the possibilities for growth, achievement, talent and the high expectations we have for an educated populace are limited.
It may take a deft distinction of subtlety and in-depth discovery to differentiate effectively between equity and equality when bonded by race, gender, and learned behaviors.
I find myself having arrived at our destination in education, yet I am caught in a quagmire that prevents me from often seeing the distinction between equity and equality in educational politics—school districts, administrators leading, teaching, believing and acting on beliefs that are direct and indirect challenges that lie beneath the surface of our profession. We limit ourselves by restricting ourselves, and worse, limit children, by viewing the prescription for ensuring educational equity as a simple solution of equal application of the same interventions, supports or funding for all schools and students regardless of circumstance, challenges, resources or starting advantage. Diversity in thoughts and actions, as well as supports and interventions, represent the potential for supporting equity. Equality, an equal application of the same resource regardless of need and irrespective of mitigating circumstance—is not fair. From the perspective of educators who must champion the beliefs and attitudes of equity of education to fulfill the promise for all the children, educators must be willing to breach the barriers to live up to the current dilemma in our profession. All children and families are entitled to equitable education options that permeate the P-16 pipeline and if we choose this pathway, we will produce equal educational outcomes for the children we are entrusted to serve.
From a historical perspective, state and federal legislation seem to have set aside the critical input necessary for successful local districts and schools in the areas of equity, and the full implementation of civil rights legislation. Frazier (1983, p. 116) concludes that: “It is a time for selflessness and a willingness to forego those elements geared to enhance or protect any one group or governance level. The emphasis must be on improving the quality of the educational network and generating a synergistic pattern that will be repeated and valued by all those committed to maintaining an effective public education system in their country.”
Research tells us that a significant expectation for an equitable output would entail supportive instruction and interaction between the educator and the student which must embrace three factors: time, resources and positive experiences. This is the bedrock for a solid learning experience where equity matters significantly. Equity paves the way for excellence.
Frazier, Calvin M. (1983). “The 1980’s: States Assume Educational Leadership,” in The Ecology of School Renewal. The University of Chicago Press: Chicago, Illinois.
Guralnik, David B. (1961). Webster’s Dictionary of the American Language – The Everyday Encyclopedic Edition. Copyright by the World Publishing Company.
Goodlad, John I. (1975). The Dynamics of Educational Change. McGraw-Hill Book Company: New York.
Johnson, James, Cummings, Jay, et al. (2013). Getting to Excellence. AuthorHouse: Bloomington, IN.

---
Graph the parabola
y = (x + 4)² - 4
Notice that this is the vertex form of the equation of a parabola which is generally expressed as follows:
y = a(x - h)² + k,
where (h, k) is the vertex of the parabola and 'a' determines whether the parabola opens upward (if a > 0, positive) or downward (if a < 0, negative).
For the equation of the parabola in question,
y = (x + 4)² - 4 ==> y = (x - (-4))² + (-4)
we find that its vertex is at (h, k) = (-4, -4) and its graph opens upwards since a = 1 > 0 (that is, since a is positive).
To graph the parabola, we plot the vertex (-4, -4). Since it opens up, we know that the range of the parabola consists of all values of y such that y ≥ -4 (i.e., the minimum value of y is -4). So you can either pick a few values of y greater than -4 and solve for their x values, or find values of y by plugging in a few values of x. Also note that, when given the equation in vertex form, the axis of symmetry of the parabola is the line x = h, which in this case is x = -4.
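As a concrete check, a short sketch that tabulates a few points of this parabola on either side of the vertex shows the symmetry about x = -4 (plain Python, just for verifying points by hand):

```python
# Tabulate y = (x + 4)^2 - 4 near the vertex (-4, -4).
def f(x):
    return (x + 4) ** 2 - 4

for x in range(-7, 0):
    print(x, f(x))
# The outputs pair up symmetrically about x = -4:
# f(-6) == f(-2) == 0 and f(-5) == f(-3) == -3,
# with the minimum value f(-4) == -4.
```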
Unless you need to plot specific points, you can typically just plot the vertex and draw the general shape of the graph given that it opens up or down.

---
Article 1 Section 2 of the Constitution affords Congress the power to apportion representatives “among the several States…according to their respective numbers.” For the House of Representatives, representation is based on population, so a given state can gain or lose seats depending on its population growth rate. However, since the United States population as a whole has grown rapidly since independence, what matters for representation is a state’s growth rate relative to the growth rates of other states. From 2000 to 2010, only one state – Michigan – actually lost population, but eight other states lost representation because they did not grow as fast as the rest of the country.
The population changes are measured by a national census. The census, which the Constitution requires be taken every ten years, is carried out by the United States Census Bureau. Upon its completion, the census determines the number of Congressional seats a state will have in the House of Representatives for the next decade. The actual drawing of the districts is left to the states, but is subject to federal regulations. Currently, every state has its district lines drawn by either the state legislature or a dedicated redistricting commission. The federal regulations stem mostly from the Supreme Court. As a result of the “one person, one vote” cases of the 1960s, the Supreme Court ruled that districts must be drawn with as close to equal populations as is “practicable.” The Court also expects districts to be contiguous and compact. The lines must not close off one part of a district from the rest of the district, meaning that it must be possible to walk from any point in the district to any other without leaving the district. Also, whenever possible, line-drawers must avoid awkward and sprawling geometric shapes for the districts.
Redistricting is also governed by the Voting Rights Act (VRA), passed by Congress in 1965, and subsequent amendments. The VRA requires that areas with a history of voting discrimination pre-clear any changes in voting procedure, and allows lawsuits to be filed in districts suspected of being designed with discrimination in mind. The VRA Amendments of 1982 encourage the creation of majority-minority districts.

---
Captain Karl Fredrick Darensbourg, a German-speaking Swedish soldier, left France on the Portefaix on March 7, 1721, bringing with him three hundred German-speaking Swiss and Alsatian colonists bound for Louisiana from the Alsace-Lorraine area. When they arrived in Old Biloxi on June 4, 1721, Bienville appointed Darensbourg commandant. On December 15, Governor Bienville issued an order decreeing all owners of longboats and flatboats to surrender their vessels to the colonial administration. In January 1722, these vessels would transport the colonists to the settlement on the coast, west of New Orleans, where they joined colonists already in the villages of Hoffen, Marienthal, and Augsburg. These engagés became concessionaires and were provided small land grants with no ownership rights. Darensbourg’s concession was named Karlstein in his honor. This area became known as Côté des Allemands or the German Coast. Darensbourg brought the news to the colony that Law’s plan had failed. This news was of great interest to residents of the colony. Historians have noted how ironic it is that the same settlers who brought the news of Law’s company’s collapse are the ones who were successful in settling the colony. They have also noted that the Swiss played an important role in the colonization of Louisiana, in particular on the German Coast.
The new company had no accommodations for the arrival of the immigrants. They were without food, shelter, or any means of transportation. They had no horses or plows. These German pioneers faced unbelievable hardships in their new country. The land was a tropical to semi-tropical forest covered with thick underbrush. Using the indigenous trees and brush as lumber brought on the problem of stumps and their removal. Not until ten years after their arrival did they even have a horse in the settlement to lend assistance. Consequently, many succumbed to these early hardships. Professor J. Hanno Deiler believed many more would have perished had they not come from such hardy German stock.
This text is © copyright material by Marilyn Richoux, Joan Becnel and Suzanne Friloux, from St. Charles Parish, Louisiana: A Pictorial History, 2010.
Social Emotional & Behavioural Difficulties (SEBD)
The term Social Emotional Behavioural Difficulties (SEBD) covers a wide range of emotional problems. It is an umbrella term that describes a range of complex difficulties including emotional difficulties and complex mental health issues. These can include adjustment disorders, anxiety disorders and obsessive compulsive disorder (OCD) among many others.
The Special Educational Needs (SEN) code of practice describes SEBD as a learning difficulty where children and young people demonstrate features of emotional and behavioural difficulties such as being hyperactive and lacking concentration; having immature social skills; being withdrawn or isolated; displaying a disruptive and disturbing nature; or presenting challenging behaviours arising from other social needs. The term can therefore cover a wide range of educational needs, and can also include children whose behavioural difficulties are less obvious, for example anxiety, self-harming, depression or phobias - as well as those whose emotional well-being appears to be deteriorating.
Children with SEBD can develop learning difficulties because their ability to cope with school relationships and routines is affected. Although difficulties can pose a barrier to learning, SEBD can affect those of all intelligence levels and abilities. For some, their behavioural problem may cause them to be excluded from particular activities which can hamper learning. In some cases, having a learning difficulty can lead to or worsen behavioural difficulties, for example children may develop disruptive behaviour in order to draw attention away from their inability to follow what is going on in lessons.
There is no one cause of SEBD or automatic link between specific social factors and SEBD. There is evidence that prevalence varies according to gender, age and family income level. It is higher in socially deprived inner city areas and tends to affect more boys than girls.
Some organisations that can offer advice and support:
After the Treaty of Waitangi was signed in February 1840 at Waitangi, across the bay, relations between the Ngāpuhi and Pākehā (used by the Ngāpuhi to mean British Europeans) began to deteriorate. Hone Heke, a local Māori chief, identified the flagstaff flying the Union Jack above the bay at Kororareka as the symbolic representation of the loss of control by the Ngāpuhi in the years following the signing of the Treaty. There were a number of causes of Heke's anger: the capital of New Zealand had been moved from Okiato (Old Russell) to Auckland in 1841, and the colonial government had imposed customs duties on ships entering the Bay of Islands. Heke viewed these and other actions of the colonial government as reducing trade between the Ngāpuhi and the foreigners. Traders in the Bay of Islands also fomented trouble by saying that the flagstaff, flying the Queen's flag, showed that the country [whenua] had gone to the Queen, and that the Ngāpuhi were no longer their own masters, but slaves to Queen Victoria.
The flagstaff that now stands at Kororareka was erected in January 1858 at the direction of Kawiti's son Maihi Paraone Kawiti, with the flag being named Whakakotahitanga, “being at one with the Queen”. As a further symbolic act, the 400 Ngāpuhi warriors involved in preparing and erecting the flagstaff were selected from the ‘rebel’ forces of Kawiti and Heke; Ngāpuhi from the hapu of Tāmati Wāka Nene (who had fought as allies of the British forces during the Flagstaff War) observed, but did not participate in, the erection of the fifth flagpole. The restoration of the flagpole was presented by Maihi Paraone Kawiti as a voluntary act on the part of the Ngāpuhi who had cut it down in 1845, and they would not allow anyone else to render assistance in this work. The continuing symbolism of the fifth flagstaff at Kororareka is that it exists because of the goodwill of the Ngāpuhi.
Auckland is the commercial capital of New Zealand, and anything to its north can be considered "Northland". It is a preferred destination for tourists and workers alike where surf, sand, diving and fishing are involved. The region has a solid rural base of dairy, beef and sheep farming, as well as horticulture and vineyards. Whangarei is the regional city on the way up to Cape Reinga at the northernmost tip of the island.
The statement “Complex experiments, such as ALICE, generate huge amounts of data that need to be processed and analyzed” can hardly be characterized as revelatory. But it can trigger an interesting question, namely “How does this processing and analysis work?”. This brings up more questions: Does each physicist have a supercomputer, designed for such "noble tasks", under their desk? Or, quite the opposite, is a regular laptop more than enough? Are the computer programs physicists use a scientific secret, or are they publicly available and free? And... what exactly do those programs do, and how do they work?
2. A brief history of data analysis
When the High Energy Physics (HEP) experiments started, physicists did not have an easy task analyzing the recorded data to extract physics results. For some detectors, such as cloud and bubble chambers, films, similar to those used for cinema movies, were employed to record the trajectories of particles as they were traversing the sensitive material of the detectors. Scientists would sit with rulers, protractors and other geometric tools to analyze such images. The analysis took a lot of time and was possible only when the number of particles (visualized as tracks) was small - they all had to be visible in one image! Fortunately, together with the increasing energy of accelerators came the development of computers. This allowed digitizing the obtained data and automating many of the procedures. This is valid not only for high energy physics, but for science in general. We develop the algorithms, and then the computers proceed with the data processing much faster (by many orders of magnitude) than us.
Before electronic data analysis, physicists visually examined photographs of Bubble Chamber particle interactions.
3. Data analysis in ALICE
Different physics experiments used to develop their own software for their data analysis. To provide a basic common functionality an open-access data-analysis framework was developed at CERN: ROOT. It is an object-oriented software package, written in C++, originally targeted to particle physics data analysis. By now, almost all big HEP experiments in the world use ROOT as basis for the development of their own software. Naturally, many of the actual computations are experiment specific, for example we need to take into account the geometry of our detector in the reconstruction process. So, it is necessary for each experiment to use their own specific algorithms.
In ALICE we developed AliRoot. It contains ALICE-specific libraries which are added to standard ROOT. These include software packages for the various functions needed in the experiment's data analysis, from simulation and reconstruction to the final extraction of physics results. AliRoot is organized in a modular way - each detector group develops its own software, which is then integrated, as a mostly independent module, into the whole system (that is why in the structure of AliRoot one can see directories like “TPC”, “TOF”, “VZERO”, etc.). The physics analysis packages are organized in a similar structure: each Physics Working Group (PWG) has its own directory, where the different Physics Analysis Groups can add their code for data analysis. Each module can be excluded from AliRoot without affecting the core of the package. Of course, every new piece of software added to the system must be checked for correctness and must obey strict ALICE computing rules.
ROOT is a framework for data processing. Every day, thousands of physicists use ROOT applications to analyze their data or to perform simulations.
Some collaborations restrict access to their data analysis software to their members only. In the case of ALICE, AliRoot is publicly available and free for everyone to use (the SVN repository on the Web is easily accessible). Moreover, the policy of the experiment is that all software and procedures developed and used to obtain any published physics results should be incorporated into AliRoot, so that results are reproducible and can be studied by anyone interested, including people outside the collaboration.
4. PROOF and GRID
As mentioned in the beginning, computers are fast, much faster than humans. They do what we tell them to do, in a repetitive way, usually with no errors. However, the amount of data that is being delivered by the LHC is so huge, that data storage and data processing have become a formidable challenge. These are the main problems with which we must cope in our everyday work. What is the solution? Fast internet and parallelism.
The petabytes of data, which are recorded by ALICE, cannot be stored in one place. Therefore, we use fast internet connections to send them to many computer centers in the world. These data are then made accessible to users around the world for physics analyses.
Parallel computing can be introduced at different levels. The first level is the multicore processors that every modern computer is equipped with nowadays. At a second level one can employ clusters of computers - computers which are connected with fast LAN connections and run the same environment. For parallel data processing, the ROOT framework provides a package called PROOF (Parallel ROOT Facility), which can be installed on any computer cluster (for example you can do it at your institute). The third level is the GRID - computer clusters spread around the world connected to each other with fast internet connection. That means the analysis (in the language of computers, the “job”), can be executed on different computers all around the globe.
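All three levels boil down to the same pattern: split the data into chunks, process the chunks concurrently, then merge the partial results. A minimal local sketch of that pattern (local threads standing in for PROOF workers or GRID nodes; the chunking scheme and the toy analysis function are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def analyse_chunk(chunk):
    # Stand-in for a per-chunk physics analysis: here, just a sum of squares.
    return sum(x * x for x in chunk)

data = list(range(1_000))
# Split the dataset into 4 interleaved chunks, one per worker, the way
# PROOF or the GRID splits one analysis "job" across many nodes.
chunks = [data[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_results = list(pool.map(analyse_chunk, chunks))
total = sum(partial_results)  # the merge step
```

The merge step is why this works: as long as the per-chunk results can be combined afterwards, it does not matter which worker (or which continent) processed which chunk.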
What is more, physicists do not need to know where the job is executed. From our point of view it does not matter whether the job was executed, for example, on a computer in the US which analyzed data from Hiroshima. The important part of the process is getting the result - and that we can do from anywhere (the only requirement is a fast internet connection).
ALICE GRID sites all over the world
We can now answer the question posed at the beginning: for contemporary experimental physicists fast connection to the internet is much more important than having better computers on their desks, because most of the computations are done somewhere in the world, in dedicated computer centers.
The management and analysis of the huge amount of data collected by the ALICE experiment is far from an easy task. AliRoot, the experiment's dedicated data-analysis software, enables data processing in a fast and efficient way. In addition, due to its public availability, it allows sharing information with people all over the world.
Some 200 million years ago half of all life on Earth went extinct, providing a window of opportunity for the dinosaurs to evolve into now-unoccupied niches and dominate the planet for the next 135 million years. Curiously enough, after the dinosaurs were in their own turn wiped out by a calamity – presumably an asteroid impact – our own class, the mammals, rose up and filled the gaps. Recognize a pattern here?
Anyway, let’s head back to a time when dinosaurs were yet to become the dominant species on Earth, in the End-Triassic, some 200 million years ago. During this time a massive extinction known as the End-Triassic Extinction (ETE) brought doom upon more than half of all life in the world. A new study from a team of geologists links the abrupt disappearance of approximately half of Earth’s species to a series of massive volcanic cataclysms that took place roughly at the same time.
“This may not quench all the questions about the exact mechanism of the extinction itself,” said the study’s coauthor Paul Olsen, a geologist at Columbia University’s Lamont-Doherty Earth Observatory. “However, the coincidence in time with the volcanism is pretty much ironclad.”
Did the extinction come after the volcanic cataclysm, or did the volcanic cataclysm come after the extinction? Common sense dictates that a global cataclysm must occur first in order to cause such a widespread extinction, but science isn’t about what seems reasonable - it’s about what you can prove. For years scientists have suggested that the ETE and at least four other known past extinctions were caused in part by mega-volcanism and the resulting climate change; however, previous studies were unable to prove this.
The team of international researchers analyzed rock samples from Nova Scotia, Morocco, and even the suburbs of New York City, and used the decay of uranium isotopes to pull exact dates from basalt, a rock known to be left behind by eruptions. Around the time of the ETE, the supercontinent known as Pangaea began to break apart, and a series of massive eruptions around the world, lasting for 600,000 years, accompanied the rifting that ultimately opened the Atlantic Ocean.
Previous studies were prevented from making a sound correlation between the Triassic mass extinction and the volcanic events of around the same time because errors in calculating the eruptions’ timing ranged from one to three million years. After analyzing a key mineral called zircon — found in igneous rocks such as basalt — scientists were able to narrow the margin of error down to 20,000 to 30,000 years, placing the eruptions slightly before the mass extinction event.
“Zircon is a perfect time capsule for dating those rocks,” said lead author Terrence Blackburn. “When the mineral crystallizes, it incorporates uranium, which decays over a known time with respect to the element lead. By measuring the ratio of uranium to lead in our samples, we can determine the age of those crystals.”
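The dating principle Blackburn describes fits in a few lines. Assuming the standard U-238 decay constant and ignoring real-world corrections (common lead, the parallel U-235 chain, and so on), an age follows directly from the measured Pb-206/U-238 atom ratio:

```python
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of U-238, per year

def u_pb_age(pb206_per_u238):
    """Age in years from the measured Pb-206/U-238 atom ratio.
    Radioactive decay gives N_Pb / N_U = exp(lambda * t) - 1,
    so t = ln(1 + ratio) / lambda."""
    return math.log(1.0 + pb206_per_u238) / LAMBDA_U238

# A ratio of ~0.0318 corresponds to roughly the age of the ETE (~201 Myr).
age = u_pb_age(0.0318)
```

The 20,000-30,000-year precision quoted above comes from how precisely that ratio (and its U-235/Pb-207 counterpart) can be measured in zircon, not from the formula itself.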
Findings were published in the journal Science.
Copyright 2013 ZME Science
Researchers at the University of Wisconsin-Madison were able to probe deeper into smokers' lungs by tracking the movement in the respiratory organs of a harmless gas known as helium. Helium can be inhaled and visually detected via the widely used diagnostic technique known as magnetic resonance imaging (MRI), which produces high-contrast images of the body's soft tissues. The use of helium is a departure from traditional MRI, which typically distinguishes body tissues from one another by tracking differences in water content.
"It's one thing to see a disease that was already diagnosed, but another to see changes that no one predicted were there," says lead author Sean Fain, a UW-Madison assistant professor of medical physics. "This approach allows us to look at lung micro-structures that are on the scale of less than a millimetre."
Cigarettes can contribute to the onset of respiratory conditions such as emphysema, bronchitis and asthma. In emphysema in particular, the alveoli - tiny sacs in the lungs that transfer oxygen to blood - gradually break down. Fain and his team therefore reasoned that helium gas molecules are likely to have more space to move around in lungs with fewer functioning alveoli.
Testing that theory among eight non-smokers and eleven healthy smokers with no obvious lung damage, Fain found that the movement or "diffusion coefficient" of helium gas molecules did indeed correlate with how much a person smokes, with greater movement indicating a higher level of lung damage. But a more commonly used imaging technique, known as computed tomography, failed to register a similar correlation.
"Our technique is potentially more sensitive than established [imaging] techniques," says Fain. "This is the first time structural changes have been shown in the lungs of asymptomatic smokers."
COMPAMED.de; Source: University of Wisconsin-Madison
Proteins contain hundreds to thousands of individual amino acids that are linked together in a chain and then folded into a complex shape. Each protein structure is built from approximately 20 different amino acids in different combinations.
There are approximately twenty different amino acids that naturally occur in proteins although there are more than 100 amino acids that occur in nature (mostly in plants). All amino acids have a basic structure that consists of carbon, hydrogen, oxygen, and nitrogen atoms. This is the framework of the amino acid that makes up the protein. Protein molecules are important in cells because they play the role of enzymes and help to catalyze necessary reactions for living organisms as well as help to make up the structure of various cells.
Proteins are found in all living organisms. The importance of proteins was discovered in the early 19th century. Proteins are organ-specific, meaning that within an organism the proteins of one organ, such as the brain, differ from those of another, such as the liver. They are also species-specific, meaning that, for example, a horse's proteins are made differently than a human's. The species and organ specificity of proteins is a result of differences in the number and sequencing of the amino acids. With twenty different amino acids available at each position in a chain of 100 amino acids, the amino acids can be arranged in more than 10 to the 100th power ways.
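That last claim is easy to verify with integer arithmetic: 20 choices at each of 100 positions gives 20^100 sequences, which is in fact about 10^130, comfortably more than 10^100:

```python
# 20 amino acid choices at each of 100 chain positions.
arrangements = 20 ** 100
# A positive integer with d digits is at least 10**(d-1),
# so counting digits bounds the power of ten.
digits = len(str(arrangements))  # 131 digits, i.e. about 10**130
```

So the "more than 10 to the 100th power" figure in the text is actually a large understatement.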
3D printing is a form of additive manufacturing technology in which a three-dimensional object is created by laying down successive layers of material. 3D printers are generally faster, more affordable and easier to use than other additive manufacturing technologies. 3D printers offer product developers the ability to print parts and assemblies made of several materials with different mechanical and physical properties in a single build process. Advanced 3D printing technologies yield models that closely emulate the look, feel and functionality of product prototypes.
A 3D printer works by taking a 3D computer file and slicing it into a series of cross-sections. Each slice is then printed one on top of the other to create the 3D object.
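A toy slicer illustrates the idea. This sketch (the function and the sphere example are invented for illustration) intersects a sphere with a stack of horizontal planes and records the cross-section radius of each layer, which is the geometric core of what a real slicer does for arbitrary meshes:

```python
import math

def slice_sphere(radius, layer_height):
    """Cross-section radius of each printed layer of a sphere.
    A real slicer does the same job for arbitrary meshes: intersect the
    model with one horizontal plane per layer."""
    sections = []
    z = -radius + layer_height / 2  # sample each layer at its mid-height
    while z < radius:
        sections.append(math.sqrt(radius * radius - z * z))
        z += layer_height
    return sections

# A 10 mm-radius sphere printed with 0.2 mm layers.
layers = slice_sphere(radius=10.0, layer_height=0.2)
```

The widest cross-section falls at the sphere's equator and the narrowest at its poles, which is also why geometries like this need overhang handling in some printing methods.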
Since 2003 there has been large growth in the sale of 3D printers. Additionally, the cost of 3D printers has declined. The technology also finds use in the jewellery, footwear, industrial design, architecture, engineering and construction (AEC), automotive, aerospace, dental and medical industries.
A large number of competing technologies are available to do 3D printing. Their main differences are found in the way layers are built to create parts. Some methods use melting or softening material to produce the layers, e.g. selective laser sintering (SLS) and fused deposition modeling (FDM), while others lay liquid materials that are cured with different technologies. In the case of lamination systems, thin layers are cut to shape and joined together.
Each method has its advantages and drawbacks, and consequently some companies offer a choice between powder and polymer as the material from which the object emerges. Generally, the main considerations are speed, cost of the printed prototype, cost of the 3D printer, choice of materials, colour capabilities, etc.
One method of 3D printing consists of an inkjet printing system. The printer creates the model one layer at a time by spreading a layer of powder (plaster, or resins) and inkjet printing a binder in the cross-section of the part. The process is repeated until every layer is printed. This technology is the only one that allows for the printing of full colour prototypes. This method also allows overhangs. It is also recognized as the fastest method.
In digital light processing (DLP), a vat of liquid polymer is exposed to light from a DLP projector under safelight conditions. The exposed liquid polymer hardens. The build plate then moves down in small increments and the liquid polymer is again exposed to light. The process repeats until the model is built. The liquid polymer is then drained from the vat, leaving the solid model. The ZBuilder Ultra is an example of a DLP rapid prototyping system.
Fused deposition modeling, a technology developed by Stratasys that is used in traditional rapid prototyping, uses a nozzle to deposit molten polymer onto a support structure, layer by layer.
Another approach is selective fusing of print media in a granular bed. In this variation, the unfused media serves to support overhangs and thin walls in the part being produced, reducing the need for auxiliary temporary supports for the workpiece. Typically a laser is used to sinter the media and form the solid. Examples of this are selective laser sintering and direct metal laser sintering (DMLS) using metals.
Finally, ultra-small features may be made by the 3D microfabrication technique of 2-photon photopolymerization. In this approach, the desired 3D object is traced out in a block of gel by a focused laser. The gel is cured to a solid only in the places where the laser was focused, due to the nonlinear nature of photoexcitation, and then the remaining gel is washed away. Feature sizes of under 100 nm are easily produced, as well as complex structures such as moving and interlocked parts.
Unlike stereolithography, inkjet 3D printing is optimized for speed, low cost, and ease of use, making it suitable for visualization during the conceptual stages of engineering design through to early-stage functional testing. No toxic chemicals like those used in stereolithography are required, and minimal post-printing finish work is needed; one need only use the printer itself to blow off surrounding powder after the printing process. Bonded powder prints can be further strengthened by wax or thermoset polymer impregnation. FDM parts can be strengthened by wicking another material into the part.
In 2006, Sébastien Dion, John Balistreri and others at Bowling Green State University began research into 3D rapid prototyping machines, creating printed ceramic art objects. This research has led to the invention of ceramic powders and binder systems that enable clay material to be printed from a computer model and then fired for the first time. |
Design of the Experiment
Planaria flatworms can be found in freshwater ponds, especially during spring, and the teacher can collect some from these ponds for the experiment. There are also shops that sell biological supplies, including planarian flatworms, from which the teacher can buy them. Before the experiment, the flatworms should be kept in glass containers with some water (preferably spring water), since they need water to prevent dehydration.
Before the experiment, the teacher should explain clearly the objective(s) of the experiment to the students. The main objective of course is to demonstrate regeneration.
The major materials to be used in the experiment are petri dishes, scalpel, and spring water.
The class can be divided into groups, with each group setting up their own experiment, or there may be only one set-up for the whole class, depending on the number of flatworms or materials available.
The teacher may instruct the students to observe the anatomy of the flatworms and the movement of the flatworms. The teacher can also ask the students to draw and label the flatworms.
The teacher will demonstrate to the class how to dissect the flatworms into multiple parts. The figure on the right shows the different ways to dissect the flatworms. Since the figure shows that a flatworm can be dissected in five different ways, at least five petri dishes are needed for each set-up. The scalpel is used to cut the flatworms, and the teacher should guide the students in cutting them. The petri dishes should be filled with some water to prevent dehydration of the flatworms. The flatworms should be kept in a laboratory room where light intensity is minimal.
After the dissection, the students will observe the flatworms daily, noting and recording any changes. After a few days, they should notice that the flatworms regenerate their lost body parts.
The experiment should be followed by a classroom discussion about the ability of the flatworms to regenerate their lost body parts. The teacher should discuss the relevance of the experiment including its possible applications in the field of medicine. |
The reduced Planck's constant, or h-bar as it is commonly known, is a modified form of Planck's constant used in describing the quantization of angular momentum, according to Encyclopaedia Britannica. Angular momentum describes the rotational inertia of an object in motion, such as the Earth spinning on its axis or orbiting the Sun. Planck's constant describes the behavior of photons as discrete packets of energy, or quanta.
Planck's constant is fundamental to quantum mechanics, which holds that light behaves as both particles and waves. Max Planck's discovery was considered radical at the turn of the 20th century and contradicted the classical view of the universe, which holds that events proceed predictably in a clockwork fashion. Planck's constant showed that the energy of light is proportional to its frequency, in an equation expressed as "E=hv." In the reduced Planck's constant, "h" is divided by 2π.
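As a quick numerical illustration of E = hv and the reduced constant (the green-light frequency used below is approximate):

```python
import math

H = 6.62607015e-34        # Planck's constant, J*s (exact by SI definition)
HBAR = H / (2 * math.pi)  # reduced Planck's constant (h-bar), J*s

def photon_energy(frequency_hz):
    """E = h * v: energy of a single photon, in joules."""
    return H * frequency_hz

# Green light at roughly 5.6e14 Hz carries about 3.7e-19 J per photon.
e_green = photon_energy(5.6e14)
```

The tiny magnitude of that per-photon energy is exactly why quantum effects went unnoticed in the clockwork classical picture.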
Einstein later used Planck's constant to lay the groundwork for quantum mechanics. Planck did not believe his constant had relevance to the real world and thought it merely explained the absorption and emission of radiation, according to Boundless Chemistry. Einstein used Planck's constant to explain the photoelectric effect, which occurs when light shining on certain surfaces causes the material to eject electrons. This theory eventually became the basis of electronic applications such as solar energy and vacuum tubes.
LOCATION: Western Great Lakes, around Lake Superior.
The Ojibwe, commonly known as Chippewa, were woodland peoples who generally remained living in one settlement unless wildlife became scarce. Birch bark was utilised to make wigwams, canoes and containers. The Ojibwe were farmers who grew corn, beans, pumpkins, and squash. They also lived off wild rice that grew around the edges of lakes, streams, and swamps. Hunting provided a vital meat resource.
With a history of involvement in wars, the Ojibwe fought to protect their tribal lands and consequently came to dominate much of present-day Wisconsin, Minnesota, Michigan, North Dakota and southern Ontario. Despite this history, however, much of their land was lost after the American Revolution, when they sided with the English, who surrendered in 1815.
In modern times, it is estimated that 64% of Native Americans in the United States live outside of reservations and tribal trust lands. Many have become urbanised, like the Ojibwe men Dennis Banks, George Mitchell and Clyde Bellecourt. Yet these men retained their Ojibwe roots by founding AIM, the American Indian Movement, in 1968.
Kindergarten Emphasized Skills
Counting and Cardinality
- Know number names and count sequence.
- Count to tell the number of objects.
- Compare numbers.
Operations and Algebraic Thinking
- Understand addition as putting together and adding to, and understand subtraction as taking apart and taking from.
Number and Operations in Base Ten
- Work with numbers 11-19 to gain foundations for place value.
Geometry
- Identify and describe shapes.
- Analyze, compare, create, and compose shapes.
Measurement and Data
- Describe and compare measurable attributes.
- Classify objects in categories.
Overview of Units:
- Add and subtract within 5
Tutorial 10 Shape the Virtual Habitat
In this tutorial you use the Habitat Board to make alterations or full revisions to the virtual spaces where your lesson or unit will take place.
A Note About the Term "Virtual"
The definition and placing of "Virtual Habitat" in the Design / Engage Kit is deliberate. We have deliberately not used the term "technology". Instead, "virtual" means any mechanism for the transmission or manipulation of information. This could be through a computer, or a TV, or a piece of paper or whiteboard. In your virtual design, consider what sort of "information flow" you want to encourage or make possible, and then find the tools or technologies to match.
Step 1 - Ideate
As with Tutorial 9 Shape the Physical Habitat you can start with an ideation process, that is the gathering of a broad range of "what ifs" which you can then cull back, later, to a make a coherent plan.
Browse through the cards below and gather "what ifs" on Post It notes:
The small square cards work just as well for unpacking virtual space as they do in physical space.
This is because the cards represent relationships between people and their environment. In virtual space, as in physical space, this helps us answer meaningful questions about what sorts of interactions you want to nurture. We took our inspiration for the Cave, Campfire and Watering Hole cards from this article by Prof David Thornburg.
a Cave is a reflective space.
- Students could keep a regular blog, with a free tool such as Edublogs.
- A low-tech version could be a written diary.
- You could put a "Cave" card on the Storyboard to make this a routine
a Watering Hole is a space for collaboration
an Empty Space has no agenda - it is waiting to be configured
Consider how classroom displays can be cluttered with old work, signs, etc. Where could you clear display space - once, or routinely so that students could claim and configure that space for their own purposes?
a Maker space has tools for students to create
Maker tools can blur the line between physical and virtual, connecting physical objects with computers and algorithms with powerful effects and incredible scope for invention.
Consider: Scratch as an easy programming language, connected to the outside world via a Makey Makey
an Outside space is beyond the normal spatial boundaries
With technology you can open up a window to the world. For instance:
- use Skype to connect to communities or institutions around the world.
- Google virtual fieldtrips
Step 2 - Sieve Your Ideas
Review your insights on the People board, your plan so far, including if relevant your Habitat Big Idea and physical Habitat design so far. Consider time constraints and technology constraints. Choose a shortlist of some powerful virtual spaces to use in your design.
Step 3 - Create a Virtual Mudmap
Now use the Habitat board to sketch out your ideas and integrate them with the physical space.
In particular, consider whether you could bring all your online spaces together in a central website. You can create a clickable front page for your students.
Step 4 - Test Your Mudmap
Show your sketch to students or colleagues, as per Tutorial 7, Prototype as a Bridge to New Insight.
Step 5 - Build Your Virtual Home
Once your design is mature enough, you can build a web page that brings all your virtual spaces together, with links going out from it to the various other spaces. Students then only need to navigate to the one home page to reach everything.
If you have a Learning Management System you could use that. At NBCS we use Moodle, and here are some of our course pages. You can see the clickable icons. (You can browse these live via the NBCS Habitat examples page.) |
Copyright © University of Cambridge. All rights reserved.
'Coordinate Patterns' printed from http://nrich.maths.org/
Why do this problem?
This task encourages the development of team-building skills such as helping others to do things for themselves, responding to the needs of others, encouraging everybody to help each other and explaining to each other by telling how. It is one of a series of problems designed to develop learners' team-working skills; other tasks in the series can be found by going to this article.
In addition learners will gain experience at using words to describe position and orientation.
A team has four or more members.
You may wish to use an adult as an observer.
Before starting this practical activity you might wish to ask each member of the group to draw their design, ready for when they act as designer for the rest of the team. You may wish to constrain the range of possibilities to manage the difficulty level of the task. For example:
- restricting to the first quadrant or using all four quadrants,
- restricting the number of points in the design,
- only allowing descriptions to use equations of line graphs and given domains (e.g. the line y = 3x from x = -1 to x = 5).
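If it helps to prepare or check designs under that last constraint, the points on a line over a given domain can be listed with a short sketch (a quick illustration, not part of the original task):

```python
def line_points(slope, intercept, x_min, x_max):
    """Integer-coordinate points on y = slope*x + intercept over [x_min, x_max]."""
    return [(x, slope * x + intercept) for x in range(x_min, x_max + 1)]

# The example constraint from above: the line y = 3x from x = -1 to x = 5.
print(line_points(3, 0, -1, 5))
# [(-1, -3), (0, 0), (1, 3), (2, 6), (3, 9), (4, 12), (5, 15)]
```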
The focus of attention for the teams is asking, explaining and helping each other. The completion of the task, whilst rewarding for all concerned, is not the main purpose of the activity.
When teams have finished working on the task it is important that they spend time discussing in groups, and then as a whole class, how well they worked as a team. They can consider what they have learned from the experience and what they would do differently next time, particularly in terms of how to ask questions and answer them effectively. Your own observations, as well as those of observers,
might inform the discussions.
Were there any questions that someone else asked that you found helpful?
How well did you listen to others in your group?
How easy was it to use the answers to the questions that others asked?
Was there an answer that you found particularly helpful?
Learners may like to try one of the other 'Explaining How' tasks. Other team-building tasks can be found by going to this article. |
Levers at their most basic are a rigid beam attached to a fulcrum, or pivot point, and they are very prevalent in everyday life. For example, doors and scissors, as well as far more complicated things such as hydraulic lifts and engines, all incorporate various types of levers.
Examples of levers encountered in everyday life include suspension and steering systems; anything with a hinge or ratchet incorporated into it; anything that is opened, such as a door or a jar of pickles; and even the human body, with its network of bones serving as beams, joints analogous to fulcrums, and muscles, the equivalent of applied force, working together. Anything that incorporates a fulcrum and a rigid beam with an applied force in order to move a load is one of three types of levers, depending on the locations of the fulcrum, load and applied force relative to one another.
One of the six classical simple machines and fundamentally the simplest mechanisms that use leverage to manipulate force, levers have been in wide use since antiquity. During the Renaissance period, the world saw an explosion in the number of inventions utilizing levers in their designs. Levers continue to be widely used in modern times. |
Historians trace the origins of Valentine's Day to the ancient Roman feast of Lupercalia, which was celebrated each year between Feb. 13 and Feb. 15. During the feast, men sacrificed a dog and a goat and then used the skins to strike women who believed the whippings would make them fertile. As time went on, Christian leaders superimposed a celebration of the martyred St. Valentine on the feast.
The feast of Lupercalia also featured a tradition in which names were drawn and young men and young women paired off for the duration of the celebration. If they took to one another, they would marry soon after the feast was over.
Meanwhile, in the third century, St. Valentine ignored Roman Emperor Claudius II's ban on marriage for young men in his army. The saint presided over their marriages anyway, an act of defiance for which he was executed. However, the Catholic Church later honored him with his own holiday.
In the fifth century, Pope Gelasius I combined St. Valentine's Day with Lupercalia in hopes of quashing the pagan rituals and drawing more attention to the Church. Despite the pope's intentions, the day continued to be celebrated with romantic love in mind. William Shakespeare further romanticized Valentine's Day in "Hamlet" and "A Midsummer Night's Dream," causing it to gain traction throughout Europe.
In the Middle Ages, the exchange of cards between lovers became a tradition. This practice made its way to the New World, where factory-made cards eventually took the celebration of Valentine's Day to a mass-marketing level. |
ASSOCIATION SPOTLIGHT: SICKLE CELL DISEASE
Sickle cell disease is an inherited blood disorder that affects red blood cells and is relatively common in the African-American community.
One of the questions that is on the current NCHSAA pre-participation medical form has to do with sickle cell disease, so it is important that everyone involved in high school athletics has some awareness and understanding about the disease.
It seems appropriate, then, as part of the information on the NCHSAA website highlighted during Black History Month, that we learn more about sickle cell.
What is Sickle Cell Disease?
People with sickle cell disease have red blood cells that contain mostly hemoglobin* S, an abnormal type of hemoglobin. Sometimes these red blood cells become sickle-shaped (crescent shaped) and have difficulty passing through small blood vessels.
When sickle-shaped cells block small blood vessels, less blood can reach that part of the body. Tissue that does not receive a normal blood flow eventually becomes damaged. This is what causes the complications of sickle cell disease. There is currently no universal cure for sickle cell disease.
* Hemoglobin is the main substance of the red blood cell. It helps red blood cells carry oxygen from the air in our lungs to all parts of the body. Normal red blood cells contain hemoglobin A. Hemoglobin S and hemoglobin C are abnormal types of hemoglobin. Normal red blood cells are soft and round and can squeeze through tiny blood vessels. Normally, red blood cells live for about 120 days before new ones replace them.
People with sickle cell conditions make an abnormal form of hemoglobin called hemoglobin S (S stands for sickle). Red blood cells containing mostly hemoglobin S do not live as long as normal red blood cells (only about 16 days). They also become stiff and distorted in shape and have difficulty passing through the body's small blood vessels, producing the blockages and tissue damage described above.
Types of Sickle Cell Disease
There are several types of sickle cell disease. The most common are: Sickle Cell Anemia (SS), Sickle-Hemoglobin C Disease (SC), 
Sickle Beta-Plus Thalassemia and Sickle Beta-Zero Thalassemia.
What is Sickle Cell Trait?
Sickle Cell trait (AS) is an inherited condition in which both hemoglobin A and S are produced in the red blood cells, always more A than S. Sickle cell trait is not a type of sickle cell disease. People with sickle cell trait are generally healthy.
Sickle cell conditions are inherited from parents in much the same way as blood type, hair color and texture, eye color and other physical traits. The types of hemoglobin a person makes in the red blood cells depend upon what hemoglobin genes the person inherits from his or her parents. Like most genes, hemoglobin genes are inherited in pairs: one from each parent.
If one parent has Sickle Cell Anemia and the other is Normal, all of the children will have sickle cell trait.
If one parent has Sickle Cell Anemia and the other has Sickle Cell Trait, there is a 50% chance (or 1 out of 2) of having a baby with either sickle cell disease or sickle cell trait with each pregnancy.
When both parents have Sickle Cell Trait, they have a 25% chance (1 of 4) of having a baby with sickle cell disease with each pregnancy.
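These inheritance odds follow from each parent passing on one of their two hemoglobin genes at random. A minimal Punnett-square sketch (genotype labels AA, AS and SS follow the article's notation):

```python
from fractions import Fraction
from itertools import product

def offspring_odds(parent1, parent2):
    """Probability of each child genotype; one hemoglobin gene is inherited
    from each parent. 'AA' = usual, 'AS' = trait, 'SS' = sickle cell anemia."""
    counts = {}
    for a, b in product(parent1, parent2):
        genotype = ''.join(sorted(a + b))   # 'SA' and 'AS' are the same genotype
        counts[genotype] = counts.get(genotype, 0) + 1
    total = len(parent1) * len(parent2)
    return {g: Fraction(n, total) for g, n in counts.items()}

print(offspring_odds('SS', 'AA'))  # every child has the trait (AS)
print(offspring_odds('AS', 'AS'))  # 1/4 AA, 1/2 AS, 1/4 SS
```

Running it reproduces the cases above: two trait carriers have a 1-in-4 chance of a child with the disease, and an SS plus AA pairing gives all trait carriers.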
Who Is Affected?
In the United States people are often surprised when they learn that a person who is not African American has sickle cell disease. The disease originated in at least 4 places in Africa and in the Indian/Saudi Arabian subcontinent. It exists in all countries of Africa and in areas where Africans have migrated.
It is most common in West and Central Africa, where as many as 25% of people have sickle cell trait and 1-2% of all babies are born with a form of the disease. In the United States, with an estimated population of over 270 million, about 1,000 babies are born with sickle cell disease each year. In contrast, in Nigeria, with an estimated 1997 population of 90 million, 45,000-90,000 babies with sickle cell disease are born each year.
The transatlantic slave trade was largely responsible for introducing the sickle cell gene into the Americas and the Caribbean. However, sickle cell disease had already spread from Africa to Southern Europe by the time of the slave trade, so it is present in Portuguese, Spaniards, French Corsicans, Sardinians, Sicilians, mainland Italians, Greeks, Turks and Cypriots. Sickle cell disease appears in most of the Near and Middle East countries, including Lebanon, Israel, Saudi Arabia, Kuwait and Yemen.
The condition has also been reported in India and Sri Lanka. Sickle cell disease is an international health problem and truly a global challenge. |
A major impact of the Ancient Greeks and Romans on Western Civilization can be seen in the United States and many other Western governments in the democratic process. The democratic process in Western civilizations is a direct result of the original democracy, Athenian democracy.
Democracy comes from the Greek word "demokratia," which means "rule by the people"; it was introduced by the Athenian leader Cleisthenes in 507 B.C. In this first democracy, rule was split among three bodies, much like the three branches of the United States government. The first body created the laws and ruled over foreign policy. The second was a council made up of people from different tribes, and the third was a court system that allowed people to bring cases before jurors selected randomly from the citizen population.
This democracy in Athens did more than set a standard ideology that many Western civilizations would later adopt. It also developed the idea of what it means to be a citizen in a democracy. The people were all expected to participate in current affairs by voting and by serving in the government and the military, as well as to recognize that all citizens had equal rights. The Romans adopted this democratic process after the Greeks. This idea of the citizen's role in a nation stems from the Ancient Greek and Ancient Roman way of life. |
Gaze upwards on a dark night, and you will see the ‘Milky Way’, a glowing ribbon made up of many individual stars. This is our first clue to the fact that stars are not uniformly distributed throughout space, but clustered together to form galaxies.
Galaxies can vary in size between a hundred thousand and three thousand billion solar masses, and can be broadly split into different classes depending on their shape. Our own Milky Way Galaxy contains about two hundred billion stars. It is spiral in shape, which is why, when viewed from our position about two thirds of the way along one of the arms, it appears as a band across the sky.
Gazing out into the Universe reveals thousands upon thousands of galaxies.
These galaxies tend to be grouped together in clusters, and these clusters in turn are grouped into superclusters. It appears the Universe has structure on many different scales.
Spiral Galaxies – are distinctive, not only because of their shape, but because they tend to contain high numbers of brighter, younger stars. The two arms of a spiral galaxy are often rich in star forming regions. They also contain much interstellar material, visible as reddish emission nebulae or darker dusty clouds. In some spiral galaxies, yellow, ‘fossil’ arms of older stars are also visible.
Barred spiral galaxies have distinctive ‘bars’ which extend between the core and the spiral arms. Many spirals show this feature to some extent, and the distinction between barred and unbarred galaxies is not always a clear one.
Elliptical galaxies – are the most numerous type of galaxy in the Universe. They tend to consist mostly of older stars, and have little interstellar material. Although individual stars within the galaxy may rotate around the galactic core, the motion of the stars as a whole is random, and elliptical galaxies usually have no net angular momentum. The smallest and the largest known galaxies are all elliptical.
Irregular Galaxies – Nearly a quarter of all known galaxies have little or no discernible structure. Many such galaxies have probably become irregular in shape as a result of gravitational interactions with other galaxies nearby.
Lenticular Galaxies – Often described as ‘spirals without spiral structure’, lenticular galaxies are flat discs of stars. They usually consist of older stars, and have few star-forming regions.
Radio Galaxies – All galaxies emit radio waves. However, the term radio galaxy is used to describe galaxies that emit particularly strongly (i.e. 10²³ watts or greater) in the radio region. Radio galaxies are classified as extended or compact, depending on whether the region of radio emission is greater or smaller than the visible region. Many radio galaxies have large jets or streams of radio-emitting matter that extend far out into space. |
Beach erosion, often referred to as coastal erosion, occurs when the area’s sand is washed into the ocean. Beach erosion is a constant process, and the persistence of a beach depends upon local rivers and streams to transport more sand to the area. If the area loses more sand than it gains, it begins to shrink over time.
Beach erosion occurs because of wind, rain and waves. Strong storms in the ocean can cause serious damage in a short period of time. As most major rivers in the United States have been dammed, the amount of sediment that reaches the ocean is very low. Accordingly, most U.S. beaches are slowly disappearing. Climate change, which causes the sea level to rise, also influences the rate of erosion.
When the beach begins disappearing, there are a few things that humans can do to protect it. Sea walls and other structures often serve as a temporary solution, but they ultimately cause different problems. A better solution is to protect naturally occurring barrier islands. These islands weaken the waves and storms that contact the beach, which reduces the amount of sand that is washed away. Another alternative is to increase the amount of vegetation on the beach, as tree and plant roots help to stabilize the sand. |
In this article we will learn what the Hall Effect is and how Hall Effect sensors work. You can watch the following video or read the written tutorial below.
The Hall Effect is the most common method of measuring magnetic fields, and Hall Effect sensors are very popular and have many contemporary applications. For example, they can be found in vehicles as wheel speed sensors as well as crankshaft or camshaft position sensors. They are also often used as switches, MEMS compasses, proximity sensors and so on. Now we will go through some of these sensors and see how they work, but first let’s explain what the Hall Effect is.
What is the Hall Effect?
Here’s the experiment that explains the Hall Effect: if we have a thin conductive plate, as illustrated, and we set a current to flow through it, the charge carriers flow in a straight line from one side of the plate to the other.
Now if we bring a magnetic field near the plate, we disturb the straight flow of the charge carriers due to a force called the Lorentz force. In such a case the electrons deflect to one side of the plate and the positive holes to the other side. This means that if we now place a meter across the other two sides, we will measure a voltage.
This effect of obtaining a measurable voltage, as explained above, is called the Hall Effect, after Edwin Hall, who discovered it in 1879.
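For a simple plate like the one described, the Hall voltage can be estimated as V_H = I·B / (n·q·t), where n is the charge carrier density, q the carrier charge and t the plate thickness. A rough sketch of that formula (the copper-like numbers below are illustrative assumptions, not values from this tutorial):

```python
def hall_voltage(current_a, b_field_t, carrier_density_m3, thickness_m,
                 charge_c=1.602e-19):
    """Hall voltage V_H = I*B / (n*q*t) for a thin conductive plate."""
    return current_a * b_field_t / (carrier_density_m3 * charge_c * thickness_m)

# Illustrative copper-like plate: n ~ 8.5e28 carriers/m^3, 0.1 mm thick,
# carrying 1 A in a 1 T field.
v_h = hall_voltage(1.0, 1.0, 8.5e28, 1e-4)
print(f"{v_h * 1e6:.2f} microvolts")  # 0.73 microvolts
```

The sub-microvolt result for a good conductor shows why practical sensors use semiconductor elements (far lower carrier density, hence larger V_H) plus amplification.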
Hall Effect Sensors
The basic Hall element of a Hall Effect magnetic sensor typically provides a very small voltage of only a few microvolts per gauss, so these devices are usually manufactured with a built-in high-gain amplifier.
There are two types of Hall Effect sensors: one provides an analog output and the other a digital output. The analog sensor is composed of a voltage regulator, a Hall element and an amplifier. From the circuit schematics we can see that the output of the sensor is analog and proportional to the Hall element output, or the magnetic field strength. These sensors are suitable for measuring proximity because of their continuous linear output.
On the other hand, digital output sensors provide just two output states, either “ON” or “OFF”. These sensors have an additional element, as illustrated in the circuit schematics: a Schmitt trigger, which provides hysteresis, or two different threshold levels, so the output is either high or low. For more details on how the Schmitt trigger works, you can check my dedicated tutorial. An example of this type of sensor is the Hall Effect switch. They are often used as limit switches, for example in 3D printers and CNC machines, as well as for detection and positioning in industrial automation systems.
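The two-threshold behaviour of the Schmitt trigger can be sketched in a few lines (the threshold values below are arbitrary illustrations):

```python
class SchmittTrigger:
    """Digital output with hysteresis: two thresholds prevent chattering."""
    def __init__(self, low, high):
        self.low, self.high, self.state = low, high, False

    def update(self, value):
        if value >= self.high:
            self.state = True      # turn ON only above the upper threshold
        elif value <= self.low:
            self.state = False     # turn OFF only below the lower threshold
        return self.state          # between thresholds: keep previous state

trig = SchmittTrigger(low=1.0, high=2.0)
readings = [0.5, 1.5, 2.5, 1.5, 0.5]
print([trig.update(v) for v in readings])  # [False, False, True, True, False]
```

Note that the reading 1.5 produces a different output on the way up than on the way down: that memory between the thresholds is exactly what hysteresis means, and it is why a slowly varying or noisy field near a single threshold would not make the output flicker.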
In a typical speed-sensing setup, a toothed disk rotates with the shaft. The gap between the sensor and the teeth of the disk is very small, so each time a tooth passes near the sensor it changes the surrounding magnetic field, which causes the output of the sensor to go either high or low. The output of the sensor is therefore a square wave signal which can easily be used for calculating the RPM of the rotating shaft. |
Back in the 1970s I taught a high school social studies course called “War and Peace Studies.”
A recent email exchange reminded me of a simplified version of the Prisoner’s Dilemma that I created for use in the classroom.
The Prisoner’s Dilemma is a fundamental exercise in game theory and serves as a great catalyst for discussions about decision making, communications, ethics and responsibility.
First, the classic example of the Prisoner’s Dilemma from Wikipedia:
Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated the prisoners, visit each of them to offer the same deal. If one testifies for the prosecution against the other (defects) and the other remains silent (cooperates), the defector goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?
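The payoff structure above can be encoded directly, which makes it easy to tally outcomes over repeated rounds (sentences in years, taken straight from the classic description):

```python
# Sentences (years) keyed by (my_choice, other_choice);
# True = defect (testify), False = cooperate (stay silent).
SENTENCE = {
    (True,  False): 0.0,   # I defect, other stays silent: I go free
    (False, True):  10.0,  # I stay silent, other defects: full 10-year sentence
    (False, False): 0.5,   # both stay silent: six months each
    (True,  True):  5.0,   # both defect: five years each
}

def outcome(a_defects, b_defects):
    """Return (A's sentence, B's sentence) for one round."""
    return SENTENCE[(a_defects, b_defects)], SENTENCE[(b_defects, a_defects)]

print(outcome(True, False))   # (0.0, 10.0)
print(outcome(False, False))  # (0.5, 0.5)
```

The table makes the dilemma visible at a glance: defecting is individually safer whatever the other prisoner does, yet mutual cooperation beats mutual defection.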
How I adapted for classroom use
Students were divided into two separate locations. (Group A and Group B). Once divided, I managed the game – shuttling between the two rooms. Both groups were given the same goal – “To accumulate as many points as possible without helping or hindering the other group.” In practice, I found that the point incentive generally faded away as groups just focused on their perception of “winning.”
I ran a series of 10 decision rounds. During each 5-minute round, both groups were told to make a group decision, choosing one of two colors – red or blue. See the results chart below. I did not specify how they were to arrive at the decision within their groups. When each group had completed its decision, I shared the results back with each group. As the decision rounds accumulated, players faced the results of cooperation and betrayal.
To add another dimension to the dilemma, periodically (after decision rounds 3 and 6) I invited each group to send a negotiator to a neutral location (usually just the hallway). This was the only communication allowed between the groups. Generally each group was divided over both the instructions to give their negotiator (“bluff ’em” vs “make a deal”) and how to interpret the negotiator’s “report.” Sometimes groups even became mistrustful of their own negotiator.
It usually took about 45-50 minutes to set the game up and go through a series of 7-10 rounds with some negotiation breaks. The homework assignment was to write a reflection “What did I learn about myself during the game?” Loads of great discussion the next day with many great applications to history, current events, group process and ethics.
For great prompts to foster student reflection, see my post “The Reflective Student: The Taxonomy of Reflection.” |
Washington, September 27: A new study has looked at the hypothetical scenario of what legacy humans will leave in the rocks 100 million years hence.
Conducted by Jan Zalasiewicz, a lecturer in geology at the University of Leicester, UK, the study takes the perspective of alien explorers arriving on Earth: their geologists study the layers of rock, using the many clues to piece together its history over several billion years. Zalasiewicz's research unravels the story of moving and changing continents, rising and falling oceans, ice ages, and evidence of life going back many millions of years.
In the story, the alien explorers grow familiar with its phases of change, the rise of great new ecosystems, and occasional catastrophic collapses of life.
But then, they stumble on something quite different in a thin layer of rock: a striking signal of climate changes, extinctions and strange movements of wildlife across the planet.
Following this trail, decoding clues in the rocks takes them to the petrified remains of cities, and finally to the fossilized bones of those, long dead, who built them.
According to Dr Zalasiewicz, "From the perspective of 100 million years in the future - a geologist's view - the reign of humans on Earth would seem very short. We would almost certainly have died out long before then.
"What footprint will we leave in the rocks? What would have become of our great cities, our roads and tunnels, our cars, our plastic cups in the far distant future? What fossils would we leave behind?" he said.
"My study shows how scientists put together clues from the rocks to understand the past, its landscapes and climate, and the nature of the creatures that inhabited it. A thin layer of silt here, a trace formed by a crawling worm there - the clues are often subtle and difficult to read," said Dr Zalasiewicz.
"My study explores which of our structures are likely to leave traces, and what future explorers might make of us and the impact we made on our environment," he explained.
"Looking to the distant future gives us a warning for the present: our activities have already left a significant footprint on the planet, and not a flattering one. It is not too late to limit it," he added. |
Two regions of radiation encircle the Earth. They’re called the Van Allen belts, and they are a pair of dynamic regions of trapped radiation, separated by a void and held in place by the Earth's magnetic field. They protect the planet from the radiation of space and the effects of solar weather.
We’ve known about these two belts since James Van Allen, the physicist after whom they are named, discovered them in 1958. It's important that we know as much as we can about the Van Allen belts and how they change, because most of Earth's satellites live in the region.
Two NASA probes detected a third radiation belt, which disappeared a few weeks later. It appears that solar weather caused both its formation and its disappearance.
The Van Allen Probes are the second mission in the Living With a Star program that also includes the Solar Dynamics Observatory (and its mascot, our favorite rubber chicken). Launched in August 2012, the twin spacecraft are built to withstand the harsh conditions of the belts they're studying, and have already started to return interesting data.
The discovery of the third Van Allen belt was recently revealed in a NASA press conference. The probes observed it almost immediately after they were turned on to collect data. The observation was so unexpected that the science team made sure to rule out an instrument malfunction. Just as startling as the discovery of a third belt was the observation of its disappearance four weeks later, in the wake of solar activity.
Dan Baker, director of the Laboratory for Atmospheric and Space Physics at the University of Colorado, Boulder, said at the conference that although a third ring has been reported in the past, he believes that this event, referred to as a "storage ring", is "fundamentally different" from previous events.
His colleague Mona Kessel, a Van Allen Probes program scientist, pointed out that we still don't completely understand what's happening in the Van Allen belts: "We're trying to piece this all together right now. Stay tuned – we will know more."
But rest assured: though a whole host of satellites reside in the Van Allen belts, many can be shut down or moved to protect them from a damaging solar storm headed their way. The astronauts aboard the International Space Station are also safe: the ISS flies below the inner radiation belt. |
It is the debate that has raged ever since the discovery of dinosaurs in the 19th century - just how were they wiped off the face of the planet?
The most popular theory since the 1980s has been that they died after an asteroid strike, but in recent years some scientists have speculated that volcanic eruptions in the Deccan Traps in India may have caused the mass extinction of dinosaurs.
But now a new study from scientists at the University of Leeds has poured cold lava on that theory, and says that the effect of volcanoes on dinosaurs has been over-estimated.
The findings suggest that long-lasting volcanic eruptions called continental flood basalts would probably not have altered global climate enough to trigger a mass extinction.
Dr Anja Schmidt, from the University of Leeds, who led the new research, said: “At the time when the dinosaurs reigned, numerous long-lasting eruptions took place over the course of about a million years. These ‘continental flood basalts’ were not like volcanic eruptions we often see today, with lava gushing from the ground like a curtain of fire.
“Each eruption is likely to have lasted years, even decades, and eruptions were separated by periods without volcanic activity. The lava produced by an eruption of average intensity would have filled 150 Olympic-size swimming pools per minute.”
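To get a rough sense of scale for that figure, assume a standard Olympic pool holds about 2,500 cubic metres (an assumption for illustration; the article does not give a volume):

```python
# Rough conversion of the quoted eruption rate of 150 Olympic pools per minute,
# assuming one pool is about 50 m x 25 m x 2 m = 2,500 cubic metres.
POOL_M3 = 2500
pools_per_minute = 150

rate_m3_per_s = pools_per_minute * POOL_M3 / 60
print(f"about {rate_m3_per_s:.0f} cubic metres of lava per second")  # 6250
```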
One common theory was that gases and sulphur from the eruptions filled the atmosphere, stopping sunlight from reaching Earth and dramatically cooling temperatures.
But the study, published online yesterday in Nature Geoscience, states that the cooling effect was not as drastic as originally thought.
Dr Schmidt said: “Scientists have tried to estimate how the temperature of the earth’s surface would have changed. It was less cooling than originally thought - and actually, the majority of animals would probably have coped.”
The team of researchers, from across Europe and the United States, found that the flood basalts would have had to flow for hundreds of years continuously to have had a severe climatic impact on plants and animals.
They used a sophisticated computer simulation of the spread of the gas and aerosol particles to show that the climatic impacts of flood basalts was less grim than scientists had previously suggested.
Dr Schmidt said: “Perhaps most intriguingly, we found that the effects of acid rain on vegetation were rather selective. Vegetation in some but not all parts of the world would have died off, whereas in other areas the effects would have been negligible.”
A University of Leeds spokesman said: “The new findings will challenge the earth sciences community as a whole to re-examine the causes of mass extinctions and the role of volcanism.”
But Dr Schmidt said that there is still a chance volcanoes had an impact on the destruction of the dinosaurs - in line with other research completed over the past few years which suggests that the extinction was caused by a ‘double effect’ of an asteroid strike coupled with volcanic eruptions. She said: “There is really, really good evidence that both volcanic eruptions and an asteroid strike impacted at the same time.
“We need to work out how long the eruption periods lasted - it could have been one year, ten years or a thousand years. So far we don’t understand how much damage the asteroid would have done.” |
By definition all atoms have a neutral charge. This means that they have the same number of protons and electrons. Protons are positively charged and electrons are negatively charged. An equal number of positives and negatives will have a net charge of zero.
Each element in the periodic table has its own unique atomic number. That number specifies the number of protons in an atom. For example, Helium's atomic number is 2. That means it has 2 protons. Because the atom is neutral, it also has 2 electrons. Silicon has an atomic number of 14. That means it has 14 protons and 14 electrons.
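The neutral-atom rule can be sketched as follows (atomic numbers hardcoded here just for the elements mentioned in the text):

```python
# Atomic numbers for the example elements discussed above.
ATOMIC_NUMBER = {'He': 2, 'Si': 14, 'Br': 35}

def proton_electron_count(symbol):
    """A neutral atom has equal numbers of protons and electrons,
    both given by the element's atomic number."""
    z = ATOMIC_NUMBER[symbol]
    return {'protons': z, 'electrons': z}

print(proton_electron_count('Si'))  # {'protons': 14, 'electrons': 14}
```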
Your question might also be asking about how to use the periodic table to determine the number of valence electrons in an atom. To do that, simply use the main group numbers. Main group numbers indicate an entire column of atoms. Hydrogen's column is group 1. Boron's is group 3. Oxygen's is group 6. Whatever that group number is, that is the number of valence electrons. For example: Bromine is element number 35. It has 35 electrons. It is found in main group 7, so it has 7 valence electrons. |
Students will conduct field research of a historical site in order to discover a more complete understanding of a time period, as well as a fuller appreciation of history.
Tools And Materials
Students will need copies of a scavenger hunt based upon their previous research. (See previous activity.)
- Before visiting the site, create a scavenger hunt of facts and items students should discover at the site. (See previous activity.) Make sure it includes both items relating directly to the historical site and items relating to the broader history of the time period.
- Make enough copies of the scavenger hunt for each student, and distribute them on the way to the historical site.
- Students may use the time in transit to complete any parts of the scavenger hunt they are not sure about based on previous class discussions, preliminary research, or prior knowledge. They should also look over the list in order to have a good idea of what they are looking for.
- Once students are at the historical site, tell them that they may work in small groups to complete the scavenger hunt. Remind them to pay attention to any guides or site personnel first, and then work on completing the scavenger hunt. If students become distracted because they are working through the list, remind them that they will have the opportunity to complete it after they leave.
- If the trip includes a lunch stop or other break in the tour, you may want to have students compare their scavenger hunts to make sure they all have the same information. Have students make a note of any information that does not match and see if they can resolve it during the remainder of the tour.
- Once the trip to the historical site is completed and students have returned to the school, have them do any research they need to complete the scavenger hunt.
MCREL Standards
Standard 3.1: Understands and knows how to analyze chronological relationships and patterns
- Knows how to diagram the temporal structure of events in autobiographies, biographies, literary narratives, and historical narratives, and understands the differences between them.
- Knows how to develop picture time lines of their own lives or their family's history.
- Understands patterns of change and continuity in the historical succession of related events.
Standard 3.2: Understands the historical perspective
- Knows how to evaluate the credibility and authenticity of historical sources.
Standard 21.6: Applies decision-making techniques
- Secures factual information needed to evaluate alternatives.
- Makes decisions based on the data obtained and the criteria identified.
Standard 22: Working With Others
- Contributes to the overall effort of a group.
- Displays effective interpersonal communication skills.
Problem 1. (a) First, you likely need to draw a picture of the two bounding surfaces (the part of the sphere and the cone). From this picture one can see that the sphere is “on top” and the cone is “on the bottom.” This lets us describe the solid region using the variable first.
The solid region can be described by:
The and descriptions / inequalities are obtained from our picture and from solving the equations to find that .
The volume is then given by
(b) To actually evaluate this volume, one should change variables. Switching either to cylindrical or to spherical coordinates can make this problem do-able. Here is the cylindrical coordinate way.
(Cylindrical Coordinates). The region can be described by the inequalities
We then apply our change-of-variables formula to compute
Notice that if we compute the integration first we find
and this integral equals
The first of these integrals can be evaluated by applying a -substitution of .
(Spherical Coordinates). This integral is probably easier to do in spherical coordinates. Indeed, the region of integration is easily described by the following inequalities:
The change-of-variables formula gives us
which can be computed relatively easily.
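Since the problem's own equations do not appear above, here is a sanity check of this kind of spherical-coordinate computation using an assumed concrete region of the same shape: the part of the unit ball lying above the cone φ = π/4. The volume element picks up the Jacobian ρ² sin φ:

```python
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', positive=True)

# Assumed example region: inside the unit sphere (rho <= 1), above the
# cone phi = pi/4.  These bounds are illustrative, not the problem's own.
V = sp.integrate(rho**2 * sp.sin(phi),
                 (rho, 0, 1), (phi, 0, sp.pi/4), (theta, 0, 2*sp.pi))

# V simplifies to pi*(2 - sqrt(2))/3
print(sp.simplify(V))
```

The iterated integral factors completely, which is exactly why spherical coordinates make this kind of "sphere over cone" region so pleasant.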
Problem 2. (a) and . The unit sphere is sent to the ellipsoid whose equation is
In particular, stretches the -axis by a factor of , it stretches the -axis by a factor of , and it stretches the -axis by a factor of .
(b) The derivative matrix of is easy to compute. One finds
The determinant of this matrix is .
(c ) To compute the volume of we can change variables using the function . Based on our observations in part (a) we know that if is the unit sphere, then and so we can convert a volume integral over as follows:
This last triple integral computes the volume bounded by the unit sphere , and we computed that to be . Therefore, the volume bounded by the ellipsoid is given by
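The determinant computation in parts (b) and (c) is easy to verify symbolically. This sketch assumes the map is the diagonal scaling T(x, y, z) = (ax, by, cz) described in part (a):

```python
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c', positive=True)

# Assumed scaling map T(x, y, z) = (a*x, b*y, c*z) from part (a).
T = sp.Matrix([a*x, b*y, c*z])
J = T.jacobian([x, y, z])   # derivative matrix: diag(a, b, c)
detJ = J.det()              # a*b*c

# Change of variables over the unit ball (volume 4*pi/3) then gives the
# ellipsoid volume (4/3)*pi*a*b*c.
vol = detJ * sp.Rational(4, 3) * sp.pi
print(detJ, vol)
```

The constant Jacobian abc is why the ellipsoid volume is just the unit-ball volume scaled by abc.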
Problem 3. This was done in class.
Problem 4. (a) The length of is given by
(b) In this problem, the total mass of is given by the line integral . This can be evaluated directly:
This integral equals .
(c ) The work done by the given vector field (along ) is given by
One can compute the dot product in the integrand as follows:
Hence, the total work done is
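The work computation follows the same pattern regardless of the particular curve and field, which are not reproduced above. As an illustration with an assumed helix and an assumed vector field (both my choices):

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Matrix([sp.cos(t), sp.sin(t), t])       # assumed curve: a helix, t in [0, 2*pi]
F = lambda x, y, z: sp.Matrix([y, -x, z])      # assumed vector field

# Work = integral of F(r(t)) . r'(t) dt over the parameter interval.
integrand = F(*r).dot(r.diff(t))
W = sp.integrate(integrand, (t, 0, 2*sp.pi))
print(sp.simplify(W))   # 2*pi**2 - 2*pi for this particular choice
```

The dot product collapses to t - 1 here, which mirrors the simplification step described in the solution.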
Problem 5. This follows by applying the definition of gradient and curl. In particular, given such a function we have that . The curl of the gradient is then given by
The resulting vector field is given by
since mixed partials are equal.
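The identity curl(grad f) = 0 is easy to check for any particular smooth function. Here is a sketch with an arbitrary test function (my choice, not the problem's):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * y * sp.sin(z) + sp.exp(x*z)   # arbitrary smooth test function

grad = [sp.diff(f, v) for v in (x, y, z)]
curl = sp.Matrix([
    sp.diff(grad[2], y) - sp.diff(grad[1], z),
    sp.diff(grad[0], z) - sp.diff(grad[2], x),
    sp.diff(grad[1], x) - sp.diff(grad[0], y),
])
print(curl)   # every component cancels because mixed partials are equal
```

Each component is a difference of two mixed second partials of f, so every entry vanishes identically.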
Problem 6. (a) This line integral can be computed directly, or one can notice that where . By the Gradient Theorem, we then have
(b) We can apply the same theorem, albeit now to the slightly different curve , whose start- and end-points are and , respectively. We find
Problem 7. Here are the statements.
Gradient Theorem. If , then
Green’s Theorem. If is a region in the plane with a correctly oriented boundary, , then
note: if is parameterized by a function , then some textbooks will notate and rewrite and . Given this notation, the line integral can also be written as
Stokes’ Theorem. If is an orientable surface with boundary curve(s) , then
Here, the line integral is over the oriented curve(s) whose directions are compatible with the choice of unit normal for the surface .
Divergence Theorem. If is a solid region in space with boundary surface , then
where the surface integral is over with an outward-pointing normal.
Problem 8. The area of is given by
If we can find a vector field where then we can apply Green’s Theorem to find
There are many, many such vector fields to choose from. One choice is to use and . We can then compute the line integral by parameterizing the ellipse-region using for . (Note: this parameterization gives the boundary curve the correct direction.) We then have
This integral equals
which can be computed by using a double-angle identity.
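With assumed semi-axes a and b, the line-integral computation can be checked symbolically. The choice ∮ x dy corresponds to the vector field (0, x):

```python
import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)

# Assumed ellipse boundary, traversed counter-clockwise (the correct direction).
x, y = a*sp.cos(t), b*sp.sin(t)

# Green's Theorem with F = (0, x): area = closed integral of x dy.
area = sp.integrate(x * sp.diff(y, t), (t, 0, 2*sp.pi))
print(sp.simplify(area))   # pi*a*b
```

The integrand is ab cos²t, and the double-angle identity turns it into the expected area πab.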
Problem 9. In general, the surface area for a surface is given by
where parameterizes . We can parameterize a graph-surface by simply using
In other words, we are using and . As worked out in our textbook in 11.6 (on page 231), this becomes
or, if we change notation as is done in the book and use and , then we have
For our function we have that and so and . The surface area is then given by
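As a worked example with an assumed graph-surface z = x² + y² over the unit disk (not the problem's actual function), the surface-area formula evaluates as:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# For the assumed f(x, y) = x^2 + y^2 we have f_x = 2x, f_y = 2y,
# so in polar coordinates f_x^2 + f_y^2 = 4r^2, and dA = r dr dtheta.
integrand = sp.sqrt(1 + 4*r**2) * r
SA = sp.integrate(integrand, (r, 0, 1), (theta, 0, 2*sp.pi))
print(sp.simplify(SA))   # pi*(5*sqrt(5) - 1)/6
```

The inner r-integral is handled by the substitution u = 1 + 4r², the same kind of u-substitution used elsewhere in these solutions.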
Problem 10. (a) The symbols translate as follows:
(b) The first two are used for line integrals, and the last two are used for surface integrals.
(c ) Again, the surface area of a parameterized surface is given by
A unit normal is given by
Problem 11. To compute the flux of out of , we need an outward pointing (unit) normal for . This requires us to first parameterize the unit sphere , which we can do as follows:
where and . This parameterization comes from spherical coordinates, thinking of and . When we are on the unit sphere, the only restriction (on spherical coordinates) is the equation , leaving and free to roam within their respective intervals. If we then recall our conversion formulas relating to and set , we obtain the above expression.
A unit normal for can then be computed by evaluating
The computation above requires some time to do, but when it is all said and done (and terms are cancelled), we find
An outward normal is obtained by using the plus sign in the equation above. This gives us a normal, for example, that points up, out of the sphere, at the north pole . We also find
We can then evaluate this flux as
The integrand cancels beautifully; in fact, it equals the constant .
(Slightly different approach) One can also approach this problem by simply writing
and then writing out
One can use these expressions to explicitly compute the flux as
(Another approach) One can also observe that, for the unit sphere, . Note that this expression for the unit normal does not come from a parameterization, but instead comes from viewing the unit sphere as a level set
The gradient of the level-set function is perpendicular to the level set, and so this gradient can be used as a normal. One finds . To make this vector have unit length we divide by to get (which has unit length since the point lies on the unit sphere).
Observe that the outward normal for the sphere and the vector field, , in this problem are the same! We then have
when lies on the sphere. This implies that the flux is given by
This method still leaves one with computing the surface area of the unit sphere, which is probably best done using a parameterization (although some might recall the formula).
(Crazy Cool Approach) We computed the volume of the solid region bounded by the unit sphere in a previous class and found this volume to be . Note that the given sphere, , is the boundary of the filled-in, solid ball, , and so we may use the Divergence Theorem to compute the flux:
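The agreement between the parameterization approach and the Divergence Theorem can be checked symbolically. This sketch assumes the vector field is the position vector F = (x, y, z), which matches the observation above that F and the outward unit normal coincide on the unit sphere:

```python
import sympy as sp

phi, theta = sp.symbols('phi theta')

# Spherical-coordinate parameterization of the unit sphere.
r = sp.Matrix([sp.sin(phi)*sp.cos(theta),
               sp.sin(phi)*sp.sin(theta),
               sp.cos(phi)])

# Outward normal times the area element: r_phi x r_theta = sin(phi) * r.
n_dS = r.diff(phi).cross(r.diff(theta))

F = r   # assumed field: the position vector, F(x, y, z) = (x, y, z)
flux = sp.integrate(sp.simplify(F.dot(n_dS)),
                    (phi, 0, sp.pi), (theta, 0, 2*sp.pi))
print(flux)   # 4*pi
```

The integrand collapses to sin(φ), so the flux is 4π, which is exactly 3 (the divergence of F) times the ball's volume 4π/3.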
Problem 12. (a) To compute this flux directly, we need to first parameterize the surface. This can be done by using the function
where and . This parameterization is motivated by cylindrical coordinates (with and since it is a radius 1 cylinder).
We then compute
The flux integral is then given by
and so equals zero.
Note: a student might try to do this problem by using, say, the divergence theorem. After all, the divergence of this vector field is easy to compute, but this is not applicable since the cylinder is not the boundary of a solid region! If it contained the “top lid” and “bottom lid,” then it would be.
A student might also try to do this problem by using, say, Stokes’ Theorem. To do this, though, one would need to know that for some vector field — however, it is hard to find such a vector field, and, moreover, it does not exist since .
(b) is not the boundary of a solid region. A picture explains this.
(c ) The boundary of consists of two curves, and . Each curve is a circle, one contained in the plane and one contained in the plane.
The curve in the plane must be oriented in a counter-clockwise direction to be consistent with an outward normal for . The curve in the plane must be oriented in a clockwise direction, though!
(d) For this problem one can compute the curl directly and compute this flux integral directly. This is a valid way to do this problem since the curl is not that hard to compute.
However, integrating begs us to use Stokes’ Theorem. It says that
One can parameterize using the function and one can parameterize using the function . Both parameterizations have the domain , but the function traverses in the wrong direction, and so we must adjust our line integral with a negative sign.
Both line integrals equal , and so when we subtract we (of course) find that
Problem 13. We can use the divergence theorem to kill this problem. Observe that the divergence of is given by and so
and by the Divergence Theorem this last integral equals
Therefore the flux out of is
Turbidites are sedimentary rocks caused by the lithification of turbidite sediments, that is, sediments deposited by turbidity currents. In this article we shall review what is known of their sedimentology, and discuss how we know their mode of deposition.
When a denser fluid flows through a lighter one, the difference in density prevents them from mixing, so that the denser fluid forms a current within the less dense fluid. In particular, turbidity currents in water are currents which are denser than the surrounding water as a consequence of being turbid (loaded with sediment). Because the turbidity current only mixes gradually with the surrounding water, its energy only dissipates very gradually into the larger body of water. This means that a turbidity current can flow for great distances (hundreds of kilometers) as a distinct current within the clearer water. Being denser than the surrounding water, it will flow downhill and along the bottom of the surrounding fluid: one might think of such a current as a sort of underwater river, although the analogy is not quite exact in that a turbidity current can flow up and over obstacles in its path.
The turbidity currents of interest to us in this article are those caused by slope failure, where sediment on the continental slope begins to slide down it, either as a result of a submarine earthquake or simply as a result of sediment accumulating on the slope until gravity alone is sufficient to start it sliding. This initiates a turbidity current, which flows down the slope accelerating as it goes: also, as it flows down the slope, it churns up more turbidity, increasing the difference in density between the current and the surrounding water.
By the time such a current reaches the ocean floor, it can be traveling at upwards of 100 km/h. As we have noted, the dynamics of a turbidity current ensure that it only loses energy very slowly, and so such a current can travel hundreds of kilometers before giving out.
Because these currents carry their loads of sediment at such high speeds, they must surely have a powerful erosional effect: they are thought to be the main cause of many underwater canyons. However, we are more concerned here with their role in the deposition of sediment, which will be discussed in the next section of this article.
Turbidity sediments and turbidites
The sediments deposited by turbidity currents are known as turbidity sediments. The rocks formed from these sediments on lithification are known as turbidites.
At any particular point over which a turbidity current passes, it will start off strong and gradually weaken until its energy is entirely dissipated. The consequence of this will be that the sediment will grade upwards from coarser to finer sediments. How coarse the sediment at the bottom is will depend on the source of the sediment: it may be as coarse as boulders and cobbles, or as fine as sand. The thickness of the deposit is also variable, from meters to centimeters in scale.
Note that the current fails not only over time, but also spatially, as it loses energy the further it gets from its origin. So at the extreme distance from the origin, only mud will be deposited; closer to the origin than that, we would see silt overlain by mud; and so forth.
After the deposition of the turbidity sediments, there will usually be a more tranquil regime of deposition, during which ordinary marine clay-sized particles will be deposited on top of the turbidity sediments proper. The entire sequence of sediments produced by these two mechanisms is known as a Bouma sequence. Note that although the top of the Bouma sequence is not deposited by turbidity currents, the term "turbidite" is used to include the whole Bouma sequence and not just the part of it so deposited.
While the ordinary marine clay in the Bouma sequence will contain organic remains from the deep waters in which they were deposited, the turbidity sediments will typically contain remains from the shallower waters in which they originated, and these remains will typically be fragmented by the violence of the process which transported them. The current-deposited sediments will often display sedimentary structures associated with flow, such as ripple marks. When a fresh sequence is deposited on top of the previous one, the force of the turbidity current will erode the layers of fine clay at the top of the previous sequence, producing what are known as sole marks.
The typical place to find a Bouma sequence is underneath one Bouma sequence and on top of another; although slope failures are intermittent, they are plentiful, and over a sufficiently long period of time great stacks of them will be deposited. The picture to the right shows part of one of these stacks, in lithified form.
Turbidites: how do we know?
How can we recognize the origin of the sediment in these rocks, and conclude that it really was deposited by turbidity currents?
To begin with, offshore drilling on the continental margin finds sequences of unlithified sediments which look just like the sequences of lithified sediment found on dry land. To identify the latter as the lithified counterpart of the former is trivial; and so we can be confident that the lithified sediments were marine in origin and were formed by the same processes as the marine sediments sampled from the sea floor.
But how do we know what those processes were? So far as I know, at the time of writing no-one has ever been at the right place at the right time to see a turbidity current depositing its load of sediment; this is unsurprising, since the phenomenon is intermittent and unpredictable, so no-one knows what the right time is; and the right place is at the bottom of the sea.
For this reason turbidites were for a long time a puzzle for geologists. But when they started taking turbidity currents into consideration, suddenly everything became clear.
Note first of all that turbidity currents themselves are not hypothetical. They can be produced in the laboratory in tanks of water and their action observed. Furthermore, laboratory experiments confirm that the waning of a turbidity current does indeed result in graded sediments, as we would expect. Slope failures are also not hypothetical, and turbidity currents have been observed flowing down the continental slope through marine canyons; it is only the actual deposition of the sediments that has so far gone unrecorded.
We know that whatever leaves these sediments flows along the bottom of the sea, because it leaves ripple marks in the sediment and because it leaves sole marks gouged out of the previous layer of sediment. In order for something to flow at the bottom of the sea it has to be denser than seawater, which a turbidity current is by definition.
One frequently cited observation is the aftermath of the Grand Banks earthquake of 1929. In the hours following this, a number of transatlantic cables were severed. Their position was known, as were the exact times when they were cut. It is therefore possible to say that something capable of severing cables moved from near the epicenter of the earthquake at a speed of approximately 100 kilometers per hour, and that it moved along the sea floor where the cables were laid. A turbidity current with its abrasive load of sediment would be a highly plausible candidate.
We know that whatever process forms the deposits that we're trying to explain must be happening in the present, because we can see freshly deposited turbidite sediments in the present day. But we also know that the process must be intermittent, partly because we can't see any continuous process forming these deposits on the sea floor, and partly because the sedimentology shows the effects of a high-energy current waning to a low-energy current followed by a period of ordinary marine deposition, followed by the same thing happening over and over again. The turbidity currents generated by slope failure would fit this bill.
Moreover, we know of no other cause that could transport such large clasts so far out to sea. This may seem like a mere argument from ignorance, but it gains force when combined with the following argument. We know that there are failures of the continental slope causing currents which are by the nature of their origin turbid. Therefore, these currents must transport sediment and deposit it in some form. If it is not deposited in the form of turbidity sediments, in what form is it deposited and where is it?
The fossils found in turbidites are another important point. The alternation of shallow-water with deep-water fossils was once a baffling mystery. The theory of turbidity currents makes everything clear: the shallow-water fossils are carried by the turbidity current from shallow to deep water, and what was an inexplicable anomaly becomes an expected consequence of the theory.
Perhaps the closest anyone has got to direct observation of turbidite formation is the events in Lake Brienz in 1996. The lake showed distinct signs of an underwater landslip, including a sudden increase in the turbidity of the lake waters, a small (half-meter high) tsunami wave, and the release of a 200-year-old corpse from the lake bed. Taking sediment cores from the lake revealed that an abnormal layer of sediment, 90 cm thick at its thickest part, had been laid down concurrently with this event: the sediment graded vertically upwards from sand through silt to clay: that is, it looked just like turbidity sediment should, apart from not being marine in nature. Further investigation suggested that the 1996 event was caused by accumulated sediment sliding down the slope of the Aare delta.
In the light of all these facts, it seems to be a safe bet that turbidity sediments are indeed caused by turbidity currents.
NASA: Saturn moon Enceladus is able to host life – it’s time for a new mission
Ever since studies started suggesting that chemical reactions between water and rock on Saturn’s moon Enceladus could provide enough energy in the water to feed microbial life, scientists have been searching for proof that the right sort of reactions really do occur.
And during its last dive through the icy plumes that Enceladus erupts into space, in October 2015, the Cassini spacecraft finally managed to find it – in the form of molecular hydrogen. The finding, published in Science, means the moon can now be considered highly likely to be suitable to host microbial life. In fact, the results should undermine the last strong objection from those who argue it could not.
Enceladus is a small (502km in diameter) moon with an icy surface, a rocky interior and an ocean of liquid water sandwiched between the two. Cassini discovered back in 2005 that Enceladus is venting water into space, in the form of plumes of ice crystals escaping from cracks in the surface. For a decade, Enceladus was the only icy moon where this was known to happen, but plumes have recently been found on Europa, too, a larger icy moon of Jupiter.
Cassini’s discovery led to it being re-tasked to fly through Enceladus’s plumes. There, in addition to water, it was able to identify traces of methane, ammonia, carbon monoxide, carbon dioxide, simple organic molecules and salts.
Cutaway view inside Enceladus, showing where hot water and rock interact below the ice. NASA/JPL
Eventually, in March 2015, it detected microscopic particles of silica. By then, the composition of the plumes showed almost every sign that ocean water had reacted chemically with heated rock – altering the minerals of the rocky silicate seabed while the water became rich in chemicals.
Presumably, the ocean water is drawn into the rock, becomes heated, reacts chemically, and escapes back up to the ocean via “hydrothermal vents”. These exist on the floor of the Earth’s oceans, too, where the chemically charged water supports a rich ecology of microbes and other, more complex, life forms – requiring no sunlight.
The only missing evidence of water-rock chemical reactions in Enceladus was molecules of hydrogen, which should be released as a byproduct of the water-rock reactions. Searching for hydrogen was a key goal of Cassini’s final and closest dive through the plumes.
The new study unveils how hydrogen was detected during the frantic half-minute when Cassini was about 120km above the surface of Enceladus, whizzing through a plume at 8.5km per second. This was achieved by operating the mass spectrometer (an instrument which knocks electrons off chemical substances and sorts them based on their mass-to-charge ratio) in a special mode. It admitted plume material directly into the instrument’s detection chamber to avoid the possibility of hydrogen being generated by plume-water reacting with the metallic components of the instrument itself.
Hydrogen is of immense significance, because its presence along with hot water and rock would enable simple microbes to make a living. When dissolved carbon dioxide reacts with dissolved hydrogen, it produces methane and water. This chemical reaction releases energy that organisms can use to drive their metabolism. There are many kinds of “methanogenic” organisms at deep sea hydrothermal vents on Earth that do this. Now that we know Enceladus has all the necessary ingredients for this to happen, we are lacking only the proof of life itself.
For that we will need a purpose-built mission, such as the Enceladus Life Finder (ELF). This would collect and analyse any complex organic molecules in the plumes. It is hard to imagine a more important goal for solar system exploration than establishing whether a habitable environment, such as the warm bottom of Enceladus’s ocean, actually does host life.
Enceladus is a long way from Earth. If we were able to prove that it hosts life, it would be highly likely that such life had originated there, independently of life on Earth. That would be a crucial discovery. It would provide evidence to suggest that our galaxy is teeming with life, because if life began independently on two different bodies in our solar system, then surely it also got going on many of the potentially habitable planets that we are now finding around other stars.
Enceladus is a tiny world, and the amount of available energy and nutrients is small. Few scientists therefore expect it to host an ecosystem consisting of more than simple microbes. The much larger Europa, if it has life too, is a better prospect.
How Cassini will end, on September 15, 2017. NASA/Jet Propulsion Laboratory-Caltech
However, to protect Enceladus from the slightest risk of contamination by any terrestrial microbes that accidentally hitched a ride on Cassini, the craft will not be allowed to become a derelict object that might eventually crash onto its surface. Instead, the mission is facing its “grand finale”, a series of 22 orbits in which it will pass spectacularly between Saturn and its innermost ring. This will end with Cassini burning up in Saturn’s atmosphere.
Life has its shapes, and depends on all kinds of architecture. It needs a skeleton on which to hang. Blood vessels in which to flow. A brain to house its thoughts. A heart to give it a beat. On a far smaller scale, it needs cells to accommodate all the various components without which there would be no life in the first place. One of the most important being: DNA. DNA itself is found within a defined structure in each of our cells: the nucleus. Within this protective core, our genetic heritage adopts yet other variable conformations depending on a cell's stage in mitosis. One of these conformations is the well-known chromosome. Chromosomes are simply highly-packed DNA, which is an ideal conformation to be in when a cell is about to divide for instance, and chromosomes need to move around. Many different proteins work in unison to keep chromosomes arranged in such a way. One in particular has recently proved to be important in maintaining the shape of packed chromosomes, and is called proliferation marker protein Ki-67.
Why is it important for chromosomes to keep their shape in the first place? In the process of cell division, mammalian chromosomes evolve from an untidy slack conformation - bearing resemblance to boiled spaghetti - to the "X" conformation characteristically used to illustrate a chromosome. This "X" conformation is achieved when the DNA - or chromatin - of the slack chromosome is packed very tightly, making it far easier for the nucleus to identify chromosomes individually and distribute them correctly into the two daughter cells of a dividing cell. It is not dissimilar to folding clothing items thus making it easier to identify them one by one and tidy them away into drawers. Chromatin-packing may sound straightforward and perhaps even obvious, but it has taken well over a century to come to these conclusions and understand chromosome structure at the molecular level.
The German biologist Walther Flemming (1843-1905) was the first to observe what he termed "chromatin" because it was the substance in the cell nucleus that was "readily stained" under the microscope. At the time though, no one was yet aware of what its individual components were, i.e. DNA and protein. Nucleic acids had indeed been uncovered by the Swiss biologist Friedrich Miescher (1844-1895) in the 1870s already, but the notion of DNA as a "transforming principle" only emerged in the early 1940s and its structure was still to be unveiled. So, though the first half of the 20th century made huge leaps in the field of genetics, little was known on the molecular front.
By the early 1940s, scientists knew that chromatin was a mixture of DNA and histone proteins. Intriguingly though, since DNA seemed to be so monotonous from a structural point of view, the consensus was that the histones must be the carriers of genetic information. This notion soon shifted, however, when the double helix structure of DNA was reported by the American biologist James Watson and the English physicist Francis Crick (1916-2004) in 1953. Finally, during the 1970s and the 1980s, the discovery and the role of what has been called the "quantum of chromatin" was described: the nucleosome. Made of a histone core around which is coiled a given stretch of DNA, nucleosomes form the structural unit of chromatin packaging. They are responsible for its tight packing into the shape of chromosomes, and are therefore of great significance in gene expression.
Proliferation marker protein Ki-67, or Ki-67, does not help to pack DNA into chromosomes, but it does prevent them from losing their shape. How? Ki-67 is a nuclear-binding protein in mammalian cells. It is very large, has a very high electrical charge and seems to exist in a long unfolded conformation. Its structure is amphiphilic, meaning that one side is negatively charged, while the other is positively charged - typical features of surface-active agents. Ki-67 is hugely expressed during cell division - the time when the nucleus disassembles to release its chromosomes and share them equally between two nascent daughter cells. It is precisely at this point that Ki-67 becomes essential by forming a sort of sheath around each mitotic chromosome making sure that everything stays tightly packed and neatly in place.
A closer look at this sheath reveals a dense fuzzy brush-like structure that runs around the contour of each chromosome. On closer inspection, each "hair", so to speak, is formed by a Ki-67 protein, whose C-terminus is attracted to chromatin while its N-terminus prefers the cytoplasm. This creates a sort of ephemeral electrostatic "exoskeleton", a little like a suitcase into which clothes have been stashed for a journey. Once the journey is over, the suitcase is opened, and the clothes released. This is precisely what happens to the Ki-67 sheath. When cell division has occurred, and the chromosomes have been distributed between the two nascent cells, the nucleus reassembles to contain them. The Ki-67 sheath then dissolves, and the chromosomes adopt their untidy slack spaghetti-like conformation as they await the next cell cycle.
So Ki-67 holds chromatin in a highly compacted form while, no doubt, mediating long-range interactions between different parts of the chromosome. Furthermore, since each outside shell has the same electric charge, instead of getting hopelessly tangled, chromosomes bounce off each other! This is a delightful example of the biomechanical role of a protein. This said, though essential, Ki-67 does not work on its own: at least seventeen other proteins are known to be involved in the process too. It could be that Ki-67 targets these proteins to their correct sites in the process of compaction. Time will tell. With all this in mind, it is not surprising that Ki-67 is used as a biomarker in the prognosis of cancer. Cell division is highly controlled, and only certain cells should be dividing at a given time in our body. Ki-67 can be used to tell if the wrong cells have indeed lost control, as can other biomarkers within the nucleolus. In this light, inhibitors of nucleolar functions have been shown to destroy cancer cells selectively, and getting to know them in greater detail will help in the never-ending fight against an affliction that kills far too often.
The Manila galleon trade is probably more significant in the history of the world as a whole than it is in the history of the United States, but it does have significance for both. The Manila galleon trade contributed to what was arguably the first truly globalized trade network in history. The trade was significant for the history of the United States because it helped bring about the development of California.
In order to understand these impacts, let us first look at what the Manila galleon trade was. This was trade between Manila, in the Philippine Islands, and Acapulco, on the west coast of Mexico. Both the Philippines and Mexico were colonies of Spain.
The Manila galleon trade began in 1565. The Spanish had ruled the Philippines and Mexico for a few decades by that time, but they had not yet found a way to sail east across the Pacific from the Philippines to Mexico. In 1565, they found a region in which winds blew to the east, allowing their sailing ships to go across the Pacific in that direction.
Once this happened, a global trade network was created. This was the first time that there had been direct contact between Asia and the Americas. It was now possible for goods to move from the Americas to Asia or Europe, from Europe to Asia or the Americas, and from Asia to either the Americas or Europe. This meant that essentially all of the world was tied together to at least some degree by trade. This was a momentous event in world history.
The Manila galleon trade was not as important to the history of the United States. Instead, its connection to US history is somewhat tangential. The winds that blew east across the Pacific often caused the galleons to reach the Americas off the coast of California. The Spanish eventually explored California in search of places where the galleons could land and refit after the long voyage. This helped to develop California to some small degree. It ensured that the Spanish would have control of California and that their control would pass to Mexico when it became independent. The US would then take California from Mexico in the Mexican-American War.
In the opinion of many, one of R's strongest points is its rich set of features for creating graphs and other visualizations of data. In this post, we begin to look at using the various visualization features of R. Specifically, we are going to do the following:
- Use data in R to display a graph
- Add text to a graph
- Manipulate the appearance of data in a graph
The ‘plot’ function is one of the basic options for graphing data. We are going to go through an example using the ‘islands’ dataset that comes with the R software. The ‘islands’ dataset contains the land masses of the world’s major islands. We want to plot the land mass of the seven largest islands. Below is the code for doing this.
islandgraph<-head(sort(islands, decreasing=TRUE), 7)
plot(islandgraph, main = "Land Area", ylab = "Square Miles")
text(islandgraph, labels=names(islandgraph), adj=c(0.5,1))
Here is what we did
- We made the variable ‘islandgraph’
- In the variable ‘islandgraph’ we used the ‘head’ and ‘sort’ functions. The ‘sort’ function told R to sort the ‘islands’ data by decreasing value (this is why we have the decreasing argument equaling TRUE). After sorting the data, the ‘head’ function tells R to only take the first 7 values of ‘islands’ (see the 7 in the code) after they are sorted in decreasing order.
- Next, we use the ‘plot’ function to plot our information in the ‘islandgraph’ variable. We also give the graph a title using the ‘main’ argument followed by the title. Following the title, we label the y-axis using the ‘ylab’ argument and putting “Square Miles” in quotes.
- The last step is to add text to the information inside the graph for clarity. Using the ‘text’ function, we tell R to add text to the ‘islandgraph’ plot using the names from the ‘islandgraph’ data, which uses the code ‘labels=names(islandgraph)’. Remember, the ‘islandgraph’ data is the first 7 islands from the ‘islands’ dataset.
- After telling R to use the names from the ‘islandgraph’ dataset, we then tell it to place the labels slightly off-center for readability reasons with the code ‘adj = c(0.5,1)’.
Below is what the graph should look like.
Changing Point Color and Shape in a Graph
For visual purposes, it may be beneficial to manipulate the color and appearance of several data points in a graph. To do this, we are going to use the ‘faithful’ dataset in R. The ‘faithful’ dataset indicates the length of each eruption and how long people had to wait for the eruption. The first thing we want to do is plot the data using the ‘plot’ function.
As you see the data, there are two clear clusters. One contains data from 1.5-3 and the second cluster contains data from 3.5-5. To help people to see this distinction we are going to change the color and shape of the data points in the 1.5-3 range. Below is the code for this.
eruption_time <- with(faithful, faithful[eruptions < 3, ])
plot(faithful)  # re-draw the full scatterplot
points(eruption_time, col = "blue", pch = 24)  # overlay the short eruptions in blue
Here is what we did
- We created a variable named ‘eruption_time’
- In this variable, we use the ‘with’ function. This allows us to access columns in the dataframe without having to use the $ sign constantly. We are telling R to look at the ‘faithful’ dataframe and only take the information from faithful that has eruptions that are less than 3. All of this is indicated in the first line of code above.
- Next we plot ‘faithful’ again
- Last, we add the points from our ‘eruption_time’ variable and we tell R to color these points blue and to use a different point shape by using the ‘pch = 24’ argument
- The results are below
In this post, we learned the following:
- How to make a graph
- How to add a title and label the y axis
- How to change the color and shape of the data points in a graph
Anatomical Terms of Movement
Anatomical terms of movement are used to describe the actions of muscles on the skeleton. Muscles contract to produce movement at joints, and the subsequent movements can be precisely described using the terminology below.
As with anatomical terms of location, the terms used assume that the body starts in the anatomical position. Most movements have an opposite movement, otherwise known as an antagonistic movement. The terms are described here in antagonistic pairs for ease of understanding.
Flexion and Extension
Flexion and extension are movements that occur in the sagittal plane. They refer to increasing and decreasing the angle between two body parts:
Flexion refers to a movement that decreases the angle between two body parts. Flexion at the elbow is decreasing the angle between the ulna and the humerus. When the knee flexes, the ankle moves closer to the buttock, and the angle between the femur and tibia gets smaller.
Extension refers to a movement that increases the angle between two body parts. Extension at the elbow is increasing the angle between the ulna and the humerus. Extension of the knee straightens the lower limb.
Abduction and Adduction
Abduction and adduction are two terms that are used to describe movements towards or away from the midline of the body.
Abduction is a movement away from the midline – just as abducting someone is to take them away. For example, abduction of the shoulder raises the arms out to the sides of the body.
Adduction is a movement towards the midline. Adduction of the hip squeezes the legs together.
In fingers and toes, the midline used is not the midline of the body, but of the hand and foot respectively. Therefore, abducting the fingers spreads them out.
Medial and Lateral Rotation
Medial and lateral rotation describe movement of the limbs around their long axis:
Medial rotation is a rotational movement towards the midline. It is sometimes referred to as internal rotation. To understand this, we have two scenarios to imagine. Firstly, with a straight leg, rotate it to point the toes inward. This is medial rotation of the hip. Secondly, imagine you are carrying a tea tray in front of you, with elbow at 90 degrees. Now rotate the arm, bringing your hand towards your opposite hip (elbow still at 90 degrees). This is internal rotation of the shoulder.
Lateral rotation is a rotating movement away from the midline. This is in the opposite direction to the movements described above.
Elevation and Depression
Elevation refers to movement in a superior direction (e.g. shoulder shrug), depression refers to movement in an inferior direction.
Pronation and Supination
This is easily confused with medial and lateral rotation, but the difference is subtle. With your hand resting on a table in front of you, and keeping your shoulder and elbow still, turn your hand onto its back, palm up. This is the supine position, and so this movement is supination.
Again, keeping the elbow and shoulder still, flip your hand onto its front, palm down. This is the prone position, and so this movement is named pronation.
These terms also apply to the whole body – when lying flat on the back, the body is supine. When lying flat on the front, the body is prone.
Dorsiflexion and Plantarflexion
Dorsiflexion and plantarflexion are terms used to describe movements at the ankle. They refer to the two surfaces of the foot; the dorsum (superior surface) and the plantar surface (the sole).
Dorsiflexion refers to flexion at the ankle, so that the foot points more superiorly. Dorsiflexion of the hand is a confusing term, and so is rarely used. The dorsum of the hand is the posterior surface, and so movement in that direction is extension. Therefore we can say that dorsiflexion of the wrist is the same as extension.
Plantarflexion refers to extension at the ankle, so that the foot points inferiorly. Similarly, there is a term for the hand: palmarflexion.
Inversion and Eversion
Inversion and eversion are movements which occur at the ankle joint, referring to the rotation of the foot around its long axis.
Inversion involves the lateral rotation of the foot, such that the sole points medially.
Eversion involves the medial rotation of the foot, such that the sole points laterally.
Opposition and Reposition
A pair of movements that are limited to humans and some great apes, these terms apply to the additional movements that the hand and thumb can perform in these species.
Opposition brings the thumb and little finger together.
Reposition is a movement that moves the thumb and the little finger away from each other, effectively reversing opposition.
Circumduction
Circumduction can be defined as a conical movement of a limb extending from the joint at which the movement is controlled.
It is sometimes talked about as a circular motion, but it is more accurately conical, due to the ‘cone’ formed by the moving limb.
The word ‘secular’ in the dictionary refers to things which are not religious or spiritual. The concept of the ‘secular’ was in fact first used in Europe, where the church had complete control over all types of property and nobody could use property without the consent of the church. Some intellectuals raised their voice against this practice.
These people came to be known as ‘secular’ which meant “separate from church” or “against church”. In India, this term was used in a different context after independence. After the Partition of the country, the politicians wanted to assure the minority communities, particularly the Muslims that they would not be discriminated against in any way.
Hence, the new Constitution provided that India would remain ‘secular’ in the Constitution, which meant that:
(a) Each citizen would be guaranteed full freedom to practise and preach his religion,
(b) State will have no religion, and
(c) All citizens, irrespective of their religious faith, will be equal. In this way, even the agnostics were given the same rights as believers. This indicates that a secular state or society is not an irreligious society. Religions exist, their followers continue to believe in and practise the religious principles enshrined in their holy books, and no outside agency, including the state, interferes in the legitimate religious affairs.
In other words, two important ingredients of a secular society are:
(a) Complete separation of state and religion, and
(b) Full liberty for the followers of all religions as well as atheists and agnostics to follow their respective faiths.
In a secular society, the leaders and followers of various religious communities are expected not to use their religion for political purposes. However, in practice Hindu, Muslim, Sikh and other religious communities do use religion for political goals. Each political party labels other political parties as non-secular. After the demolition of Babri Masjid structure at Ayodhya in December 1992, a case (popularly called S.R. Bommai case) was filed in a court for the dismissal of the State governments run by the Bharatiya Janata Party (BJP).
The judges constituting the nine-judge bench dwelt upon the term ‘secularism’ and averred that though the term was embedded in the Constitution it was wisely left undefined because it was not capable of any precise definition. Secularism in the Constitution guaranteed equal treatment to all religions, and state governments were to regulate the law in order to enforce secularism.
As such, on legal considerations, the plea for dismissal of the BJP governments was not accepted. No wonder some people say that S.R. Bommai’s case in the Supreme Court was just ganging up against one political party (BJP). In another case, involving the Chief Minister of Maharashtra, the Supreme Court held that an appeal to Hindutva was permissible under the Representation of the People Act.
What was banned was the criticism of the other party’s religion. It may thus be said that secularism for political parties has implied the creation of a vote bank comprising Muslims and the scheduled castes and scheduled tribes. In the elections for the Lok Sabha in May 1996 and for Uttar Pradesh Vidhan Sabha in October 1996, when the BJP emerged as the largest single party at the centre as well as in Uttar Pradesh, political parties with vested interests joined together in describing the BJP as a communal party.
The cry against communalism was raised only to seek votes and attain political power. A coalition of 13 parties at the Centre (in June 1996) was based not on any commonly accepted minimum programme but only on the single aim of preventing a so-called “Hindu party” from forming a government.
Communalism, thus, is neither a political philosophy nor an ideology nor a principle. It came to be imposed on the Indian society with a political objective. The communal-secular card is now being played for political motives only. The bogey of communalism is being kept alive not for checking national disintegration but with a view that minority vote bank does not dissipate itself into the larger Indian ethos.
Even those political leaders who are utterly corrupt and who extensively practise casteism accuse political leaders of opposing parties of being communal. The power seekers thus use secularism as a shield to hide their sins, thereby ensuring that people remain polarized on the basis of religion and India remains communalized.
Grouping or Sharing?
Lesson 5 of 13
Objective: Students will be able to identify a division word problem as either sharing or grouping and communicate their reasoning.
Today I want to see if my students are able to distinguish between the two types of division. Working with the CCSS has led me to understand that children need to know this difference in order to understand what they are solving for in a division problem. No longer is it okay for me to just see if they can solve for the right number. Students are expected to explain their reasoning, and so they must understand and communicate what they are solving for (a group size, or a number of groups).
To begin my mini lesson, I remind the students of some lesson activities that we've done (described below). Then, I ask them to watch the following two clips. Even though the word "sharing" is on the first video, I want them to name what type of division it is and why.
We have worked on various activities where the students divided by sharing and also by grouping. One of the main activities for sharing was our work with "The Doorbell Rang" by Pat Hutchins. In those lessons, the students passed out "cookies" to a certain number of people until they were all out.
To work on the grouping model, we created a cookie production company and the students had to create packages of cookies. We also worked on a marching band project which asked them to configure a band for a parade.
This is such a sweet little clip. It doesn't show the kids sharing with more than one other, but the context of sharing is obvious. (Passing out one at a time from the whole.) Stop the video before the information on hunger if you choose.
I used a video clip of a marching band in another lesson and the students loved it. It is okay to have some fun while showing real world use of a concept. Can you imagine all of the division to make this band show work? If you choose to use this lesson, find your own alma mater or your local high school on YouTube and let the kids enjoy real math.
This next video is a bit long, so I fast forward to the part when the band begins playing and grouping (about 2:30 in). Seeing the band come onto the field can also be interesting, as it comes out in sets.
After watching the clips, we discuss WHY each clip shows its type of division, rather than simply labeling the types.
For the student practice, I've created several Division Stories that are examples of sharing or grouping division problems. I ask the students to create a 3 by 3 grid in their notebooks and write either "sharing" or "grouping" in each box for a tic tac toe game.
I then read the cards and have them put a marker in a box that describes the cards I read. Obviously, the round ends when someone has tic tac toe. I will have the cards on the board for everyone to see and the "winner" must come up and explain his/her answer. In order to be successful and declare the "win", reasoning must be applied by the student. He or she will need to explain why the stories were sharing or grouping. They may choose to just talk it out or draw a model on the board. The group can disagree if needed!
For independent work, I pass out six stories to the students. They must decide if they are sharing or grouping problems and glue them onto an organizer I downloaded from K-5 Math Teaching Resources.
After they sort they are to solve the problem and show their work. I add to this student work and suggest you also do so. Students are expected to also write at the bottom of the page how they know it was sharing or grouping.
If this lesson is longer than your session allows, doing a couple of these in class and then the rest for practice at home is a nice balance.
This team is trying to put language to their understanding…a complicated thing for a child still developing language (8 and 9 years old). If you really listen, you can hear that they know what they are talking about.
This partnership struggled. They were debating when I approached, but began to agree when I hit record. They are able to respectfully agree and disagree and explain their thinking.
The first day of autumn descended across the northern hemisphere on 22 September, with the 2012 autumnal equinox. The day heralded the end of summer... one of the hottest on record.
The UK Met Office reports that the south-west and northern parts of England (and southern Wales) all saw a significant amount of rain and sunshine in August. However, parts of East Anglia and south-east England were drier than normal (as were western parts of Northern Ireland and parts of north-west Scotland). Meanwhile, across the Atlantic, the US National Climatic Data Center (NCDC), which began keeping records in 1895, classified the month as the 16th warmest on record. From a global perspective, NCDC's data shows it was the fourth warmest in 132 years.
The autumnal equinox will see the days get shorter and the nights longer, as the weather builds up to winter. The opposite holds true for the southern hemisphere, where the autumnal equinox heralds the spring.
A feature note on Nasa's website reads: "It is autumn, the season of change! In the north, the hottest days of summer are past. Each day is shorter than the last. Trees will begin to turn bright colours. Soon it will be time for hot cocoa and warm coats. Far to the south, across the equator, spring has arrived. The days are growing longer. The weather is warmer. Soon flowers will be blooming. They bring the promise of summer's heat and new life."
More specifically, the autumnal equinox is the time when the day and night are of equal length. Theoretically, on this day, the Earth's North Pole points neither at the sun nor away from it. This happens twice a year - September's autumnal equinox, which is the first day of fall north of the equator and the vernal equinox in March, which is the first day of spring in the northern hemisphere.
The autumnal equinox and the first day of fall started on a dry and bright note in London and the southeast UK, with "plenty of sunshine" and light winds. However, the east coast received the odd light shower, according to the Met Office. The National Weather Service also predicts a mostly sunny autumnal equinox day in New York and Washington DC, followed by mild showers and thunderstorms around midnight.
Sleep deprivation–and its effects, particularly on teens–has become a crucial area of research in the medical field. Sleep deprivation is the condition of not having enough sleep, which can be either acute or chronic. The majority of high school students are, to varying extents, affected by sleep deprivation.
On average, teens need about nine hours of sleep a night to best function, a fact that has been confirmed in studies conducted by the Mayo Clinic. While sleep deprivation may not seem like a big deal, in reality it can be quite detrimental. Sleep deprivation can lead to difficulty concentrating, learning, and staying awake in class. It can also be a contributing factor in behavioral problems. Studies have drawn direct connections between sleep deprivation and lower grades, moodiness, and depression. The consequences for driving while drowsy can be serious, especially with teen drivers. A 2004 study showed that from infancy to fifth grade the majority of students failed to get even the lowest recommended range of sleep. Another study referenced in the New York Times showed that the average eighth grader sleeps less than eight hours a night, and over a quarter of high school and college students are chronically sleep deprived. Researchers from Columbia University School of Nursing estimate that 15 million children in America are suffering from sleep deprivation.
In a survey of roughly 200 students in the Upper School at Friends Academy, 89% of students reported that they get, on average, less than 8 hours of sleep a night. Shockingly, 27% of the students admitted to getting under 6 hours of sleep. Though parents and even researchers often assume that students spend a lot of time procrastinating on their homework, this idea fails to correlate with the data at Friends. Only 3% of students admitted to spending more than two hours on social media a night. The majority of students spend less than an hour on social media per evening. A possible contributing factor to our national sleep deprivation crisis could be the following: 24% of Friends students reported spending more than four hours on homework a night. A striking 50% admitted to spending three to four hours a night on homework. And 85% of students reported that commitment to after-school activities limits their time to get work done.
Clearly, sleep deprivation is a problem in our community, and it needs to be addressed.
As an object moves through a gas, the gas molecules are deflected around the object. If the speed of the object is much less than the speed of sound of the gas, the density of the gas remains constant and the flow of gas can be described by conserving momentum and energy in the flow. As the speed of the object increases towards the speed of sound, we must consider compressibility effects on the gas. The density of the gas varies locally as the gas is compressed by the object.

Near and beyond the speed of sound (about 330 m/s or 700 mph on Earth at sea level), small disturbances in the flow are transmitted to other locations isentropically (with constant entropy) as sound waves. In supersonic and hypersonic flows, small disturbances are transmitted downstream within a cone. The edge of the cone is depicted two-dimensionally by the blue lines on the figure at the top of this page. The sound waves strike the edge of the cone at a right angle, and the speed of the sound wave is denoted by the letter a. The flow is moving at velocity v, which is greater than a. The sine of the cone angle mu is equal to the ratio of a and v:

sin(mu) = a / v

But the ratio of v to a is the Mach number of the flow:

M = v / a

With a little algebra, we can determine that the cone angle mu is equal to the inverse sine of one over the Mach number:

sin(mu) = 1 / M

mu = asin(1 / M)

where asin is the trigonometric inverse sine function. It is also written, as shown on the slide, as sin^-1. Mu is an angle which depends only on the Mach number and is therefore called the Mach angle of the flow. We are interested in determining the Mach angle because small disturbances in a supersonic flow are confined to the cone formed by the Mach angle. There is no upstream influence in a supersonic flow; disturbances are only transmitted downstream within the cone.
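The relation mu = asin(1 / M) is simple enough to compute directly. As an illustration (a minimal Python sketch of the formula above, not the NASA applet itself), the following function returns the Mach angle in degrees for a given supersonic Mach number:

```python
import math

def mach_angle_deg(mach):
    """Return the Mach angle mu (in degrees) for a supersonic Mach number.

    mu = asin(1 / M), which is only defined for M >= 1.
    """
    if mach < 1.0:
        raise ValueError("Mach angle is undefined for subsonic flow (M < 1)")
    return math.degrees(math.asin(1.0 / mach))

# For M = 2, sin(mu) = 1/2, so the disturbance cone half-angle is 30 degrees.
print(round(mach_angle_deg(2.0), 1))  # prints 30.0
```

Note how the cone narrows as the flow gets faster: at M = 1 the "cone" is a flat plane (mu = 90 degrees), while at M = 5 the angle shrinks to about 11.5 degrees.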
Here's a Java program which solves for the Mach angle. Due to IT security concerns, many users are currently experiencing problems running NASA Glenn educational applets. There are security settings that you can adjust that may correct this problem.
You select an input variable by using the choice button labeled Input Variable. Next to the selection, you then type in the value of the selected variable. When you hit the red COMPUTE button, the output values change. The default input variable is the Mach number, and by varying the Mach number you can see the effect on the Mach angle. You can also select the Mach angle as an input, and see its effect on the other flow variables.
If you are an experienced user of this calculator, you can use a version of the program which loads faster on your computer and does not include these instructions. You can also download your own copy of the program to run off-line by clicking on the yellow button. Look for the Isentropic Flow Calculator.
The Renaissance, or rebirth, was a revival movement which took place in Medieval Europe between the fourteenth and sixteenth centuries. Rome became a dominant force in this movement, partly because the Renaissance originated in Italy. All around them, Renaissance artists and intellectuals could see visible reminders of Rome's famous past: from public buildings and aqueducts to monuments and roads. Moreover, the fall of Constantinople in 1453 caused a number of great intellectuals to flock to Italy, bringing with them scores of books written by Ancient Romans, which had previously been lost to the Italians. The style and content of these great works, by men like Cicero and Julius Caesar, were like a light in the medieval darkness. This Classical Latin was also very appealing to Renaissance men, and seemed so much more refined and pure than the language spoken by the Medieval Church.
The growth of humanism in the Renaissance also contributed to boosting the importance of Rome. In essence, humanists wanted to learn from the past and looked to Ancient Rome for guidance. Humanists wanted to understand why the Roman Republic had failed, why the Romans were pagans and how these experiences could help them to understand the future. As humanists were often drawn from the upper classes of Italian society, they had the wealth needed to fund this interest in Rome and to ensure that the city's heritage would not be forgotten.
No ecosystem is entirely free of sediment. In aquatic environments, its presence can threaten the health of ecosystems. Sediment can cloud the water, which in turn can negatively impact the plants and animals of these places. Depending upon the type of sediment, additional issues can also occur. It can have both organic and inorganic sources, whether it is algae floating in the water or suspended particles of soil from an eroded shoreline.
One of the primary negative effects of sediment in the ecosystem concerns the nature of the sediment. Agricultural and urban runoff may contain toxic materials, which can damage or even kill the organisms within an ecosystem. According to the U.S. Environmental Protection Agency (EPA), runoff from farmlands is the main cause of pollution in U.S. waterways. The runoff can include sediment from pesticide and fertilizer applications as well as animal waste and bacteria.
Some animal species are especially sensitive to the effects of sediment, with contamination quickly accumulating in animal tissues. Filter feeders such as mussels and clams get food by filtering water through their bodies, making them especially vulnerable to the presence of sediment. Other species such as salmon require clear waters in order to locate their prey. High levels of suspended sediment can interfere with their ability to find food, risking the health of the ecosystem by disrupting the prey-predator relationships.
Wetlands and Water Filtering
Wetlands affect the sediment load in the ecosystem by slowing water flow, which allows suspended particles to drop down to ground level. This filtering action is an important environmental benefit because it removes the sediment from the water. In essence, the sediment, whether it contains contaminants or not, becomes locked into the sediment layer of the wetlands. The effects of the pollutants are then mitigated.
One way in which sediment enters an ecosystem is through soil erosion. Water flowing over bare soils will easily dislodge sediment, where it will later be deposited within the environment. Impervious surfaces, such as roads and parking lots, facilitate soil erosion. Without plants to slow it, water flow increases, allowing it to dig deeply into stream banks.
The best way to control the negative environmental effects of sediment is to prevent its introduction into the environment. Planting dense groundcover along stream banks and coastal areas will help keep soils intact and prevent them from washing away. Restoration of wetlands within floodplains and other areas will improve water quality by removing suspended sediment from the water.
This activity can help pupils extend their spatial understanding related to number sense. It can be used to acquaint pupils with the attributes of the Cuisenaire rods.
Time to play with the rods, if pupils are not used to using them, would be essential. If you do not have access to the rods, then pupils could spend some time with the general Cuisenaire environment to be found here.
The challenge could begin by working on the pink rod ideas all together and having some clear discussion as to why the two examples shown lower down, although using the same rods, are counted as different.
The pupils can then work individually or in groups to tackle the other questions.
Do you think there are any more to find?
Are any of yours the same? (Good to ask both when there is and is not a slip-up in their examples)
Tell me about how you found these.
Suggest other different coloured pairs of rods that could be tested to see if they can be put together to equal the largest of the rods.
Two bigger rods can be put together for a much longer length for the pupils to try to work on using pairs of different rods (see here below)
Some pupils who are using the rods and have problems with fine motor skills may need to have someone arrange the rods as they require.
Shelling the ridge, seen from a French trench, 1915. Private collection. All rights reserved.
A year before the Battle of Verdun began, bloody fighting took place some twenty kilometres to the south-east of the town at Les Éparges, a ridge less than 2 kilometres in length in the Meuse Hills that was the scene of the first French attacks. The battle was violent but limited in terms of time and space. It was, though, an indication of what was to come, as the weaponry became more effective and deadly. Among the soldiers experiencing the horrors of this implacable struggle was Second Lieutenant Maurice Genevoix, the future President and Founder of the Comité National du Souvenir de Verdun (national remembrance committee for Verdun), who described the terrible realities of war in his journals. Genevoix became a member of the Académie française and published a masterly collection of memoirs consisting of five works written between 1916 and 1923. It was called “Ceux de 14” (The Men of 1914).
The Learning Resource Centre at the Verdun Memorial Museum offers teachers and pupils a chance to visit Les Éparges and learn more about the battle. The three educational trails are an excellent, comprehensive introduction to the study of the First World War in school curricula.
The Freedom Riders were civil rights activists who rode interstate buses into the segregated southern United States to test the United States Supreme Court decisions Boynton v. Virginia (1960) and Irene Morgan v. Commonwealth of Virginia (1946). The first Freedom Ride left Washington, D.C., on May 4, 1961, and was scheduled to arrive in New Orleans on May 17. www.pbs.org/
Why is this white man relevant to Black history? He was one of the Freedom Riders. In 1961, he was on a bus full of Freedom Riders who arrived in Montgomery, Alabama, where an angry white mob was waiting for them. He volunteered to get off the bus first and take the brunt of the mob's violence, which left him beaten and bloody. His name was James Zwerg.
Her name is Winonah Myers, and she was a white student at the historically black Central State University in Wilberforce, Ohio. Arrested for being a Freedom Rider, she stayed in Parchman for her full six-month sentence, the only Freedom Rider to serve a full term. "I felt there should be a little historical footnote that for sitting next to a friend on the (bus), this was the punishment meted out," said Myers, 69. "I didn't think it would be recorded if no one had done the time."
Ideas and Content
This segment is about the ideas behind art and writing. Artists and writers share their strategies for identifying and developing ideas for their work.
Time: 45-50 minute period
- Students will create a list of ideas and topics for future pieces of writing and art.
- Students will use two techniques to develop topical ideas for writing and art.
- Students will construct answers to questions and convey their ideas about a specific work of art.
Common Core State Standards English Language Arts:
- L.CCR.6 Acquire and use accurately a range of general academic and domain-specific words and phrases sufficient for reading, writing, speaking, and listening at the college and career readiness level; demonstrate independence in gathering vocabulary knowledge when considering a word or phrase important to comprehension or expression.
- SL.CCR.2 Integrate and evaluate information presented in diverse media and formats, including visually, quantitatively, and orally.
- W.CCR.4 Produce clear and coherent writing in which the development, organization, and style are appropriate to task, purpose, and audience.
- W.CCR.6 Use technology, including the Internet, to produce and publish writing and to interact and collaborate with others.
Maine Learning Results Visual and Performing Arts Standards:
- A3 Media, Tools, Techniques, and Processes
- E2 The Arts and Other Disciplines
Prior to viewing:
Students, independently, spend no more than one minute writing their answers to, “Where do you get your ideas for art pieces, poems, songs and/or stories that you have created?”
Tell students that the segment will include writers and artists sharing answers to where they get their ideas for their work. As they view the segment, students should add to their brainstorming list additional “places” to get ideas.
Pause the segment after Fateh Azzam states, “…had to be written down in Arabic before I lost what I was trying to say.”
Tell students that they will be provided with an opportunity to try out some of the techniques used by the artists. Remind them that playwright Fateh Azzam and songwriter Judd Caswell both get ideas from what they know about or have experienced. Caswell said, “Write what you know.” Azzam said, “You have a narrative inside you that needs to come out.” Give students three to five minutes to add ideas to their personal brainstorming list that come from events in their life and topics they feel they know a lot about.
Focused Listing: Next, ask students to consider the word, “art.” Tell them to use an online dictionary and thesaurus to generate as many words as they can that they associate with the word (a strategy discussed by artist Amy Stacey Curtis in the segment). Give students three to five minutes to collect their words.
Tell students that the last part of the segment describes another way to create ideas. Show the remaining part of the segment, beginning where the Narrator states, “Some artists like to work with other artists to get their ideas or to expand ideas they have.”
Divide students into small groups and post Michael Reidy’s statement, “Collaboration is a great opportunity to discover what you don’t know.” Ask students to share their list of words associated with “art.” Tell students to add new ideas from others to their own list. Next, as a group, ask them to select some of the words on their lists to create a phrase that might work as an opening line in a poem about the subject.
Writing Through Art Activity:
Display the image(s) students used to construct paragraphs in the previous segment. Ask them to share their ideas about the image. Students could add their responses to their collection of paragraphs:
Do you like it? Why do you like/not like it?
What does it make you think about?
What emotions do you feel when you view it?
Who do you know who would enjoy the image?
Would you want to hang this image in your house or would you prefer to visit it in a museum? Why?
Note: If students created a Venn Diagram to compare and contrast art and writing in the first segment, revisit the diagram and make any necessary additions or corrections based on this segment.
Have students use the phrase they developed about art and write a poem.
Compile students’ brainstorming lists about where they get ideas and create a word cloud to display in the classroom or online for future writing assignments.
Link to teacher-inspired lessons:
With the passage of the FAIR Education Act, we would like to offer some resources, lesson plans, and ideas for incorporating these standards into the classroom.
- FAIR Education Act Website. Offers lessons based on grade level and links to additional resources.
- The Role of Gay Men and Women in the Civil Rights Movement. This series of four lessons highlights contributions to the Civil Rights Movement through the lens of important figures — James Baldwin, Lorraine Hansberry, Pauli Murray, and Bayard Rustin. Additional lessons for a variety of grade levels.
- Unheard Voices: Stories of LGBT History. This lesson easily weaves itself into a variety of units. It looks at LGBT history from multiple perspectives and eras. The lesson opens with an excerpt from Ralph Ellison’s Invisible Man and helps teachers engage with Common Core Standards. View individual aspects of the lesson here.
- History of the Disability Rights Movement. This lesson is part of the “Equal Treatment, Equal Access: Raising Awareness about People with Disabilities and Their Struggle for Equal Rights” curriculum. Also available are lesson plans for elementary school, such as Experiencing Hearing Disability Through Music. Additional lesson plans from the ADL highlight racism, gender, and a host of other issues.
- The LGBT Pride Parade. A themed collection from Calisphere that chronicles the San Francisco Pride Parade. Engaging and unique images.
- Milestones in the American Gay Rights Movement. This timeline includes links to primary resources.
- DOMA and LGBT Commemorative Month. Short discussion of the Defense of Marriage Act (DOMA) from the Library of Congress blog.
- Disability History Museum. Searchable collection of images and documents relating to disability history.
- A World of Difference Institute: Recommended Multicultural & Anti-bias Books for Children. An annotated bibliography of grade level appropriate fiction books that address issues of prejudice and discrimination. Also available are lists concerning religion, customs, traditions, and more.
Discrete Mathematics for Computer Science/Proof
A proof is a sequence of logical deductions, based on accepted assumptions and previously proven statements, that establishes that a statement is true. What constitutes a proof may vary depending on the field.
In mathematics, a formal proof of a proposition is a chain of logical deductions leading to the proposition from a base set of axioms. We'll be discussing propositions, logical deductions and axioms.
Definition: A proposition is a statement that is either true or false.
Propositions: True or false?
One must be cautious in assuming that propositions which apply to an infinite set of elements can be checked using a finite subset. For example:
- a⁴ + b⁴ + c⁴ = d⁴ has no solution when a, b, c and d are positive integers.
- This proposition was proven false, with counterexample values of a, b, c and d in the tens and hundreds of thousands.
- The four-color theorem: Every map can be colored with four colors so that no two adjacent countries are assigned the same color.
- This theorem is true, but many incorrect proofs were accepted for long periods of time. (The theorem can fail if a non-contiguous country, one with exclaves, must be colored uniformly, but that is a complicated case.)
- The Goldbach Conjecture: Every even integer greater than 2 is the sum of two primes.
- No one knows if this conjecture is true or false!
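Both cautionary examples above can be explored computationally. The sketch below (in Python; not part of the original text) uses Roger Frye's well-known 1988 counterexample to disprove the fourth-powers proposition, and a brute-force search to confirm the Goldbach conjecture for small even numbers. Note that no finite check can ever prove the conjecture.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n: int):
    """Return a pair of primes summing to the even number n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Frye's counterexample: the "no solution" proposition is false.
a, b, c, d = 95800, 217519, 414560, 422481
assert a**4 + b**4 + c**4 == d**4

# Checking every even number below 1000 supports, but cannot prove, Goldbach.
assert all(goldbach_pair(n) is not None for n in range(4, 1000, 2))
```

The asymmetry is the point: one counterexample settles the first proposition forever, while a million confirmations leave the second one open.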
The Axiomatic Method
- The axiomatic method is the method of forming proofs based on axioms and previously-proven statements.
- Axiom: a proposition that is simply accepted as true.
- Euclid pioneered the axiomatic method in his geometric proofs. Euclid's proofs were based on five fundamental axioms, such as the axiom that one and only one straight line segment can be drawn between each pair of points. See this page for a more specifically geometric concept of proofs.
- Euclid's axiomatic method has become the foundation of modern mathematics! Today, the ZFC axioms are the handful of axioms from which modern mathematics is derived.
- There are several common terms for a proposition which has been proven true:
- Theorem: A particularly important proposition.
- Lemma: A preliminary proposition useful for proving later propositions.
- Corollary: An afterthought, a proposition that follows in just a few short steps from a theorem.
- And again, the general term for proof (which you now understand somewhat better):
- A sequence of logical deductions from axioms and previously-proven statements, which concludes with the proposition in question.
Proving an Implication
Many claims in mathematics are formulated as: "P implies Q." These are called implications. This section will teach you the format of writing a proof, and walk you through some example proofs. There are a few standard methods for proving an implication, and a couple of points that apply to all proofs.
- You'll often need to do some scratchwork while you're trying to figure out the logical steps of a proof. Your scratchwork can be as disorganized as you like--full of dead ends, strange diagrams, obscene words, whatever. But keep your scratchwork separate from your final proof, which should be clear and concise.
- Proofs typically begin with the word "Proof," and end with some sort of indication like Q.E.D. This clarifies when the proofs begin and end.
Now we will go over some of the basic methods of proving an implication.
1. Write: "Assume P." Show that Q logically follows.
- Example: Prove that if 0 ≤ x ≤ 2, then -x³ + 4x + 1 > 0.
- Solution: Before we write a proof of this theorem, we need to do some scratchwork to figure out why it is true. The inequality certainly holds for x = 0; there, the left side is equal to 1 and 1 > 0. As x grows, the 4x term (which is positive) initially seems to have greater magnitude than -x³ (which is negative). For example, when x = 1, we have 4x = 4, but -x³ = -1 only. In fact, it looks like -x³ doesn't begin to dominate until x > 2. So it seems the -x³ + 4x part should be nonnegative for all x between 0 and 2, which would imply that -x³ + 4x + 1 is positive.
So far, so good. But we still have to replace all those "seems like" phrases with solid, logical arguments. We can get a better handle on the -x³ + 4x part by factoring it, without too much difficulty, into x(2 - x)(2 + x).
Aha! For x between 0 and 2, all terms on the right side are nonnegative. And a product of nonnegative terms is also nonnegative. We can organize these observations into a clean proof.
- Proof: Assume 0 ≤ x ≤ 2. Then x, 2 - x and 2 + x are all nonnegative. Therefore, the product of these terms is also nonnegative. Adding 1 to this product gives a positive number, so:
x(2 - x)(2 + x) + 1 > 0
Multiplying out the left side proves that:
-x³ + 4x + 1 > 0
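As a numeric sanity check of this argument (a spot check, not a proof), one can verify in Python that the factored form agrees with the polynomial and that the inequality holds across sampled points of the interval:

```python
def f(x: float) -> float:
    """The polynomial from the example: -x^3 + 4x + 1."""
    return -x**3 + 4*x + 1

# Sample 0 <= x <= 2 at steps of 0.01.
for k in range(201):
    x = k / 100
    factored = x * (2 - x) * (2 + x) + 1
    assert abs(factored - f(x)) < 1e-9   # the factorization is correct
    assert f(x) > 0                      # the claimed inequality holds
```

Sampling can never replace the proof above, but it is a quick way to catch an algebra slip before writing the clean argument.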
2. Prove the contrapositive.
- An implication "P implies Q" is logically equivalent to its contrapositive, "~Q implies ~P." Often, proving the contrapositive is easier than proving the original statement. In order to prove the contrapositive, one must write "We prove the contrapositive," state the contrapositive, and then proceed.
- Example: Prove that if r is irrational, then √(r) is also irrational.
- Solution: Recall that, unlike irrational numbers, rational numbers are equal to a ratio of integers. We would need to show that if r is not a ratio of integers, then √(r) is also not a ratio of integers. It is simpler to prove the contrapositive: If √(r) is rational, then r is also rational.
- Proof: We prove the contrapositive: if √(r) is rational, then r is also rational.
Assume that √(r) is rational. Then there exist integers a and b such that:
√(r) = a/b
Squaring both sides gives:
r = a²/b²
Since a² and b² are both integers, r is also rational.
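Two pieces of this section can be spot-checked in code (an illustration with hypothetical values, not part of the original text): the truth-table equivalence of an implication and its contrapositive, and the fact that squaring a ratio of integers yields another ratio of integers.

```python
from fractions import Fraction
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: P -> Q is false only when P is true and Q is false."""
    return (not p) or q

# "P implies Q" is logically equivalent to "not Q implies not P".
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)

# If sqrt(r) = a/b is rational, then r = a^2 / b^2 is rational too.
sqrt_r = Fraction(3, 2)      # hypothetical rational square root
r = sqrt_r * sqrt_r
assert r == Fraction(9, 4)   # still a ratio of integers
```

The exhaustive loop over the four truth-value combinations is exactly why proving the contrapositive is a legitimate substitute for proving the original implication.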
Proving an "If and Only If" Statement
Many mathematical theorems assert that two statements are logically equivalent--that is, one holds if and only if the other does.
The meaning of a word can often be gleaned from clues in the surrounding context. What comes before and after a new word can reveal its meaning, structure, and use. These strategies can help students meet Common Core Standards related to Vocabulary Acquisition and Use. Help struggling readers to draw upon multiple approaches, such as direct instruction, choice of reading materials, or varied ways to interact with new words, using a range of technology tools.
Click here for a version of this slideshow which can be used with a screen-reader and is 508 Compliant.
Students need to know how to find context clues embedded in text, how to use them to understand word meanings, and why they are important. Use technology tools with a variety of differentiated strategies to help students use context clues.
Our fourth grade artists had a COLOR THEORY lesson on complementary colors. COMPLEMENTARY COLORS are directly opposite each other on the color wheel. These opposite colors create CONTRAST in an artwork. If some of a color's complement is added to it, the INTENSITY of that color becomes duller.
The artists began by folding their paper, opening it, and drawing half of the outline of their creature's face on one side of the fold. Next they folded the paper again and cut the shape out to create a full face. To make the 3D nose, they either extended their drawing of the face and brought it down to tape it, or made it separately.
They continued by drawing the details of their creature's face with pencil and TRANSFERRED the features they chose to make SYMMETRICAL. The last step was to color their creatures with a complementary color scheme. We had an exciting time with this lesson!
Robert Fulton, best known for his work in steamboat technology, was born in Little Britain, Pennsylvania, in 1765. As a child, Fulton enjoyed building mechanical devices, taking on such projects as rockets and a hand-propelled paddle wheel boat. His interest turned to art as he matured, and by the age of seventeen, Fulton was supporting himself through his sales of portraits and technical drawings. In 1786, Fulton left the United States to study painting in England. Although he managed some success, the general response his work received was disappointing and convinced him to concentrate on his engineering skills.
The first project to capture his attention revealed his emerging interest in water transportation. His assignment involved designing a canal system to replace the locks that were then in use. After several years of work, Fulton came up with a double inclined plane system for which he was granted a British patent in 1794. His creative ideas continued to flow as he developed a plan for cast iron aqueducts and invented a digging machine; in 1796, he published a summary of his ideas on improving canal navigation in his Treatise on the Improvement of Canal Navigation.
In 1797 further research on canals took Fulton to Paris, France. While he was there he became fascinated with the notion of a "plunging boat," or submarine, and began designing one based on the ideas of American inventor David Bushnell. Fulton approached the French government, then at war with England, with the suggestion that his submarine could be used to place powder mines on the bottom of British warships. After some persuasion, the French agreed to fund the development of the boats and, in 1800, Fulton launched the first submarine, the Nautilus, at Rouen.
The 24 1/2 foot (7.5 m) long, oval-shaped vessel sailed above the water like a normal ship, but the mast and sail could be laid flat against the deck when the craft was submerged to a depth of twenty-five feet by filling its hollow metal keel with water. Fulton's plan was to hammer a spike from the metal conning tower into the bottom of a targeted ship. A time-released mine attached to the spike was designed to explode once the submarine was out of range. Although the system worked in the trials, British warships were much faster than the sloop used in the experiments and thus managed to elude the slower submarine. The French stopped funding the project after the failed battle attempt, but the British, who considered the technology promising, brought Fulton over to their side. Unfortunately, once again the submarine worked well in tests but proved unsatisfactory in practical situations. After its failure in the Battle of Trafalgar (1805), the British too abandoned the project.
After these experiences, the undaunted Fulton turned to a new area of exploration--steam. Correspondence indicates that he had been aware of work on the movement of ships by steam power since at least 1793. Through his contacts in Paris, Fulton met Robert Livingston (1746-1813), the American foreign minister to France, who also owned a twenty-year monopoly on steam navigation in New York State. Fulton shared some of his ideas about steam power with Livingston and, in 1802, the two decided to form a business partnership. The following year, they launched a steamboat on the Seine river that was based on the design of fellow American John Fitch. The vessel traveled at a speed of three miles per hour and, although some adjustments were necessary to make the craft sufficiently seaworthy, it was clear that the basic technology worked well.
Fulton returned to New York later in 1803 to continue developing his designs, conscious of the fact that his partner's monopoly was contingent on their development of a boat that could travel at least four miles per hour. After four years of work, Fulton launched the Clermont, a steam-powered vessel with a speed of nearly five miles per hour. The partnership between Fulton and Livingston thrived, and Fulton had at last achieved a recognized success. During the ensuing years, Fulton designed thirteen more steamboats, including the Demologus, a warship; and he established an engine works in New Jersey that produced steam engines.
Fulton died on February 24, 1815. His persistence and belief in his ideas helped steamboats become a major source of transportation on the rivers of the United States, and resulted in a significant reduction of domestic shipping costs.
The Earth, Moon, and Sun
By James Kregenow
Textbook Key Concepts:
How does Earth move in Space?
Earth moves through space in two ways: rotation and revolution. The spinning of Earth on its axis is called rotation. Earth completes one full rotation about every 24 hours. It is day on the side of Earth facing the Sun and night on the side facing away, and because Earth rotates, each side takes its turn. Revolution is the movement of one object around another. Earth revolves around the Sun, following a path called an orbit. Earth's orbit is not a perfect circle; it is slightly elongated, an ellipse. Earth takes roughly one year to complete a revolution around the Sun.
What causes the cycle of seasons on Earth?
Near the equator, the Sun's rays are focused and direct year round. Near the North and South Poles, the rays are spread over a larger area, delivering much less heat energy, so temperatures there stay cold year round.
The seasons are caused by the tilt of Earth's axis. In December, the North Pole is tilted away from the Sun. In June, the North Pole is tilted toward the Sun. In March and September, neither hemisphere is tilted toward or away from the Sun. This happens because as Earth revolves around the Sun, its axis is not straight up and down; it always leans in the same direction.
A solstice is a day when the Sun is farthest north or south of the equator. An equinox occurs when neither hemisphere is tilted toward the Sun. Each happens twice a year: the December solstice, or winter solstice (the shortest day in the Northern Hemisphere); the June solstice, or summer solstice (the longest day in the Northern Hemisphere); the March equinox, or spring equinox; and the September equinox, or autumn equinox. At the equinoxes, days and nights are each about 12 hours long.
What determines the strength of the force of gravity between two objects?
A force is a push or a pull, and gravity is the attractive force between objects. Newton's law of universal gravitation states that every object in the universe attracts every other object. Mass is the amount of matter in an object. Gravity's strength depends on the objects' masses and the distance between them: the greater the masses, the greater the force, and the greater the distance, the smaller the force. If the masses are small and the distance is large, the force of gravity is very small. If the masses are large and the distance is small, the force of gravity is very large.
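The relationship described above is Newton's formula F = G·m1·m2 / d². A short Python sketch (the Earth and Moon values are illustrative round numbers, an assumption on my part, not from the text):

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravity(m1: float, m2: float, d: float) -> float:
    """Newton's law of universal gravitation: F = G*m1*m2 / d**2."""
    return G * m1 * m2 / d**2

# Approximate Earth and Moon masses (kg) and their separation (m).
f_near = gravity(5.97e24, 7.35e22, 3.84e8)
f_far = gravity(5.97e24, 7.35e22, 2 * 3.84e8)

# Doubling the distance cuts the force to one quarter.
assert abs(f_near / f_far - 4.0) < 1e-9
```

Because distance appears squared, it matters "twice as much" as either mass: doubling a mass doubles the force, but doubling the distance divides it by four.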
What two factors combine to keep the Moon and Earth in orbit?
Gravity and inertia combine to keep the Earth and Moon in orbit. Inertia is the tendency of an object to resist a change in motion. Newton's first law of motion says an object at rest will stay at rest, and an object in motion will stay in motion with a constant speed and direction, unless acted on by a force. In an orbit, that force is gravity. An orbiting object's speed and the pull of gravity are in balance: gravity keeps the object from flying off in a straight line, and the object's inertia keeps it from falling into the body it orbits.
What causes the phases of the moon?
The different shapes of the Moon you see from Earth are called phases. Phases are caused by the changing in relative position of the Moon, Earth, and the Sun. There are eight major phases of the moon: New Moon, Waxing Crescent, First Quarter, Waxing Gibbous, Full Moon, Waning Gibbous, Third Quarter, and Waning Crescent.
Waxing means growing and waning means shrinking. The phases we see depend on our perspective of the Moon and the positions of the Earth and Moon. The lunar calendar is based on the phases of the Moon. One lunar month is how long it takes to complete one full cycle of phases, New Moon to New Moon, about 29.5 days, close to the length of a month in the calendar we use now.
What are Solar and Lunar eclipses?
An eclipse is a partial or total blocking of one object in space by another. The two types we can see from Earth are solar and lunar eclipses: "solar" comes from the Latin word for Sun, and "lunar" from the word for Moon. Like the phases of the Moon, eclipses depend on the relative positions of the Sun, Moon, and Earth, but eclipses happen far less often and are rare to see, because the three bodies must line up almost perfectly. A solar eclipse occurs when the Moon blocks the Sun and casts a giant shadow on Earth. A lunar eclipse occurs at full Moon, when Earth passes directly between the Sun and Moon and casts its shadow on the Moon.
What causes the tides?
Tides are caused by differences in how much the Moon's gravity pulls on different parts of the Earth. Tides are the rise and fall of ocean water that occurs about every 12.5 hours. A high tide occurs on the side facing the Moon because the Moon's gravity pulls the ocean water toward it, raising the water level. Another high tide occurs on the opposite side of the Earth: there the Moon's gravity pulls on the Earth itself more strongly than on the water, leaving some water behind and raising the water level. In between the two high tides there are low tides, where the water level is lower than average, because the water that feeds the high tides is drawn away from those areas.
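The "about every 12.5 hours" figure comes from the fact that the Moon moves along its orbit while Earth rotates, so a point on Earth needs slightly more than one rotation to face the Moon again. A rough Python calculation (using standard textbook periods, which are my assumed inputs, not values from the text):

```python
sidereal_day = 0.9973    # Earth's rotation period relative to the stars, in days
sidereal_month = 27.32   # Moon's orbital period relative to the stars, in days

# A point on Earth catches up with the moving Moon once per "lunar day".
lunar_day_days = 1 / (1 / sidereal_day - 1 / sidereal_month)

# Two high tides per lunar day, so the interval is half a lunar day, in hours.
high_tide_interval_hours = lunar_day_days * 24 / 2

assert 12.3 < high_tide_interval_hours < 12.6  # roughly 12.4 hours
```

If the Moon stood still, high tides would come exactly every 12 hours; the extra 25 minutes or so is the time Earth spends "catching up" to the orbiting Moon.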
What features are found on the Moon's surface?
Features on the moon's surface include maria, craters, and highlands. Maria are the dark, flat areas on the moon. Maria is the Latin word for seas; Galileo incorrectly thought that the maria were oceans, but they are actually hardened rock formed from huge lava flows billions of years ago. Craters are the large round pits on the moon; some of them can be hundreds of kilometers wide. For a long time, scientists thought that they were formed by volcanoes, but we now know that they were caused by the impacts of meteoroids, which are chunks of rock from space. Highlands are the lightly colored portions that are easiest to see from Earth. Galileo correctly inferred that the lightly colored portions of the moon are highlands or mountains, because the rims of craters, highlands, and mountains cast large shadows on the low parts of the moon, which he could see.
What are some characteristics of the Moon?
The Moon is dry and airless. Compared to Earth, the Moon is small and has large variations in its surface temperature. There is no atmosphere on the Moon, so there is no oxygen to breathe and no protection from the Sun. A visitor would need a bulky space suit that provides oxygen and shields against the Sun's radiation. The Moon has a diameter of 3,476 kilometers, a little less than the distance across the United States and about a quarter of Earth's diameter. The Moon has about one-eightieth the mass of Earth. Because there is no atmosphere, the surface temperature ranges from about 130°C in the day to -180°C at night. The Moon has no liquid water, but there is evidence of large patches of ice near the Moon's poles, and possibly in valleys shadowed by mountains. Even if there is water on the Moon, there would be barely enough to support a colony from Earth.
How did the Moon form?
Scientists theorize that a planet-sized object collided with Earth to form the Moon. About 4.5 billion years ago, the solar system was full of rocky debris, some pieces as large as planets. When one of these objects struck Earth, material from the object and from Earth's outer layers was thrown into orbit, where gravity gathered it together to form the Moon.
Moon Shoes For sale!!!!
Now you can get Moon Shoes and jump as high as you want for the low price of seven payments of $19.99!!!
Moon shoes Party!!
Bring the Moon Shoes that you just got for an exceptional price (provided they haven't broken yet) and party in them, jumping about half as high as you normally would without them on!
Competence and jurisdiction, in law, the authority of a court to deal with specific matters. Competence refers to the legal “ability” of a court to exert jurisdiction over a person or a “thing” (property) that is the subject of a suit. Jurisdiction, that which a competent court may exert, is the power to hear and determine a suit in court. Jurisdiction also may be defined as an authority conferred upon a court (thus making it competent) to hear and determine cases and causes. Jurisdictional authority is constitutionally determined.
Examples of judicial jurisdiction include appellate jurisdiction, in which a superior tribunal is invested with the legal power to correct, if it so decides, legal errors made in a lower court; concurrent jurisdiction, in which jurisdiction may be exercised by two or more courts over the same matter, within the same area, and at such time as the suit might be brought to either court for original determination; and original jurisdiction, in which the court holds the first trial in a matter.
As a court also may be vested with the authority to handle matters within a certain territory, geographic distinctions are important, especially in cases when a court must decide whether opposing parties have a sufficient relationship with the geographic area in which the court has jurisdiction (in which it is competent to hear and determine the case). For example, if a court has appellate jurisdiction, the case must have passed through the necessary preliminary stages before being eligible for consideration by that court.
In the United States, jurisdiction is largely personal. If a defendant, either a person or a corporation (a legal person), can be served with a subpoena to appear, the court may become involved in the case. In common-law countries, if personal jurisdiction is impossible to achieve, then jurisdiction may be based on the ownership of property. In such cases only a person’s property rights are involved, not his individual liberties.
In civil-law systems jurisdiction varies: in France the courts will enter a case if at least one party is a French national; in Italy some Italian link must be shown by a nonnational for jurisdiction to be exercised; and in Germany and Austria, by contrast, the location of property often determines jurisdiction.
Welcome to our final installment of The World Through Sound. Last time, we learned about linearity, non-linearity, and how linearization allows scientists to treat complicated systems like much simpler analogs through approximation. In this article, we will explore the concept of acoustic absorption and how sound can take many forms, including pressure, flow, and temperature.
Sounds don’t last forever. In an open environment (like the outdoors) a sound will spread over a growing area from its starting point, the energy spreading out, causing the sound to get quieter and quieter until it finally dies away. Similarly, in an enclosed room the sound will also spread, but only until it fills the room. In both of these cases, though, there is an effect beyond the spreading of acoustic energy that causes sound to grow quieter: acoustic absorption. Acoustic absorption describes pretty much any process that causes sound energy to be dissipated, and intelligent use of acoustic absorption is how we keep concert halls from turning into echo chambers or ensure that speech in offices and classrooms is intelligible.
What part of a room is responsible for most of the absorption though? It might be natural to assume that air absorbs a lot of the sound since that’s where the sound spends most of its time. But because air is a reasonably good acoustic medium, sound can move through it with only a little bit of loss. These losses only become noticeable at very long distances such as in outdoor acoustics.
But what about the walls and other surfaces of an enclosed space? With lots of jutting angles to break up the waves, you might expect that good solid walls and objects help to bring down the noise. Again, though, solid walls are good reflectors of sound, acting much like how a mirror would for light. While walls are good for redirecting sound, they aren’t that great for getting rid of it. With only solid walls and air for absorption, sound can reverberate for a startlingly long time.
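Just how long sound lingers in a hard-walled room can be estimated with Sabine's classic reverberation-time formula, RT60 = 0.161 · V / A, where V is the room volume and A the total absorption (area times absorption coefficient, summed over surfaces). A minimal sketch; the room dimensions and absorption coefficients below are rough textbook assumptions, not values from this article:

```python
# Sabine's formula: RT60 = 0.161 * V / A, with V in m^3 and A in m^2 "sabins".
# All figures below are illustrative assumptions.

def rt60(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A 5 m x 4 m x 3 m room (60 m^3, 94 m^2 of surface), bare vs. treated.
bare = [(94, 0.02)]                 # hard plaster/concrete, alpha ~ 0.02
treated = [(74, 0.02), (20, 0.6)]   # same room, 20 m^2 of absorptive panels

print(round(rt60(60, bare), 1))     # roughly 5 seconds of reverberation
print(round(rt60(60, treated), 1))  # under a second
```

The startling part is how little absorptive material it takes: treating about a fifth of the surface cuts the reverberation time by a factor of seven.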
Certain materials are very good for absorbing sound. Any sort of cloth or porous material, for example, will generally convert sound energy into heat. One way to understand this is to think about how sound causes air to move. While sound is generally thought of as a pressure wave, there is also air motion associated with the sound. The regions of high and low pressure are caused by sound flowing into and out of different regions, bunching up in some areas and thinning out in others. The equation a scientist would use here to quantify this relationship involves specific acoustic impedance, which gives a relationship between pressure and air flow, and the strength of that connection depends on air density and sound speed. Naturally, any porous material that restricts the flow of air is going to prevent that motion, and thus reduce the energy in the sound wave.
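The pressure-flow relationship mentioned above can be sketched numerically. For a plane wave, the specific acoustic impedance is z = ρc, and the particle velocity follows from u = p / z. The density, sound speed, and pressure amplitude below are standard textbook values for air, not numbers from this article:

```python
# Specific acoustic impedance of air: z = rho * c (rayls, i.e. Pa*s/m),
# linking acoustic pressure p to particle velocity u via p = z * u.
rho = 1.21   # density of air, kg/m^3 (approx., room temperature)
c = 343.0    # speed of sound in air, m/s
z = rho * c  # ~415 rayls

p = 0.02     # pressure amplitude of a conversational ~60 dB SPL sound, Pa
u = p / z    # associated particle velocity, m/s (~5e-5 m/s)
print(round(z, 1), u)
```

The tiny particle velocity is the point: a porous material only has to resist micrometer-per-millisecond air motions to bleed energy out of the wave.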
At least, that’s the way the explanation usually goes. There’s a bit of a wrinkle to this explanation, albeit one that requires a bit of background knowledge about how sound moves in a room. For many of these absorbing materials that impede air flow, like carpets and wall tiles, the absorbers are on or near a solid surface. It turns out that because those surfaces reflect sound, there’s not actually a lot of flow in those areas. Instead, there’s a trade-off so that the flow is low but the pressure is high. But if this is the case, then why are carpets and acoustic tiles still effective absorbers? Shouldn’t the low flow prevent them from being useful?
To find the answer, we must first consider a rather surprising connection between sound and temperature. As we have previously discussed, sound is a pressure wave. But, as you may remember from physics class, a gas under pressure will increase in temperature. The equation that should come to mind, here, is the ideal gas law that directly relates pressure and temperature. As a result, where the pressure of a sound wave increases, so does the temperature. I know that I was surprised to learn this fact, and I was even more surprised to learn of an entire branch of acoustics devoted to using sound to manipulate heat, called thermoacoustics, that has led to successful acoustic refrigerators and even sound lasers!
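The pressure-temperature link can be made concrete. For an adiabatic sound wave in an ideal gas, the fractional temperature fluctuation is (γ − 1)/γ times the fractional pressure fluctuation. A rough sketch with textbook values for air (not figures from this article):

```python
# Adiabatic sound wave in an ideal gas: dT / T = (gamma - 1)/gamma * dp / p.
gamma = 1.4      # ratio of specific heats for air
T = 293.15       # ambient temperature, K (20 C)
p0 = 101325.0    # ambient pressure, Pa
dp = 2.0         # pressure amplitude of a loud ~100 dB SPL tone, Pa

dT = T * (gamma - 1) / gamma * (dp / p0)
print(round(dT, 5))  # about 0.00165 K
```

Even a painfully loud sound only swings the air temperature by a couple of millikelvin, which is why the thermoacoustic connection is so easy to overlook.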
So what does temperature have to do with sound absorption? A lot, it turns out. Just like how materials that impede flow are good at dissipating the acoustic energy associated with flow, materials that insulate temperature are great for dissipating the acoustic energy of temperature. This is why hiding under a blanket is good for dulling sound, why fiberglass insulation helps keep out unwanted noise, and why carpeted rooms are so much quieter than rooms with tile. Even better, because reflecting surfaces cause the pressure (and therefore temperature change) to peak near a wall or floor, thermally insulating acoustic tiles and carpets are perfectly positioned for maximum effect in those locations! The topic of controlling sound in a room will be highlighted in an article by Dr. Bonnie Schnitta in the fall 2016 issue of Acoustics Today, and techniques for managing reverberations in large spaces were covered by Russ Berger in *From Sports Arena to Sanctuary: Taming a Texas-sized Reverberation Time*.
Physics is full of equations that connect seemingly different values. There are equations like the ideal gas law that relates pressure and volume to temperature. There are equations of motion and so-called laws (like Newton’s law), that tell how objects move. In all of these equations, we draw connections between dissimilar quantities, but in the process we can see how those concepts are really related. This is the power of substitution, probably the most useful mathematical tool to which we have access. With substitution, science can make the leaps of insight that further our understanding of the world. Sound is made up of waves of pressure, but it’s also made up of waves of flow and waves of temperature change and waves of density. Depending on which angle we consider, different solutions present themselves.
This has been the final installment in this series of The World Through Sound. For those of you that are new to the world of acoustics, I hope that I have taught you something and shared just a touch of the enthusiasm I have for this branch of science. For those of you that are experienced acousticians, I hope that I have done our field justice and the perspectives that I’ve given might help you to share acoustics with those around you. And if you are hungry for more acoustics reading (or you came in to this series late) you can find more of The World Through Sound and lots of other articles on acoustics at AcousticsToday.org. If you want to see more popular acoustics from me, you can check out my blog over at ListenToThisNoise.com or follow me on twitter @ListenToNoise.
In closing, I would like to thank Acoustics Today and the Acoustics Today Advisory Board for this internship and the opportunities for outreach that it has given me. I would especially like to thank my editor Arthur Popper for his feedback and help throughout this series and webmaster Daniel Farrell for converting my articles into such an attractive online format. And, of course, I would like to thank all of my readers for following this series over the last year. It really has been a pleasure sharing my love of acoustics with all of you, and I truly hope that I have passed at least a little of that love on to you.
Andrew “Pi” Pyzdek is a PhD candidate in the Penn State Graduate Program in Acoustics. Andrew’s research interests include array signal processing and underwater acoustics, with a focus on sparse sensor arrays and the coprime array geometry. Andrew also volunteers his time doing acoustics outreach and education as a panelist and moderator on the popular AskScience subreddit and by curating interesting acoustics news for a general audience at ListenToThisNoise.com.
Contact info: email@example.com |
The Croatian language has a long history: it is considered to have been born in the 9th century, with the establishment of Old Church Slavonic as the official liturgical language. The first written documents in Croatian date back to the 11th century, the most important of them being the Baška tablet, which is written in an old dialect of Croatian.
Though derived from Old Church Slavonic, the vernacular quickly took its own path through linguistic history. Vernacular Croatian literature continued to develop long after Old Church Slavonic had lost its influence.
In more recent times, in the 19th century, representatives of Croatian, Serbian, and Slovenian political and linguistic authorities decided to form a linguistic union and, later on, a united kingdom. The three peoples were, in fact, quite close in language, ethnicity, and geography. The kingdom did not survive long, but it was followed by the establishment of the Republic of Yugoslavia, which significantly shaped the development of the Croatian language. As Serbian had the largest number of native speakers, it naturally exerted the strongest influence on the evolution of the unified language.
After the declaration of Croatian independence in 1991, efforts were set in motion in favor of a purer Croatian. Teams of linguists and experts drafted reforms, and regulatory bodies were institutionalized to safeguard the independent development of the Croatian language. Today, it is a language striving to regain its own course, distinctly separating itself from Serbian and other influences. |
It is commonplace that technologies are changing the world we live in. Roughly every two decades, Earth becomes almost a brand-new place in terms of technological wonders becoming routine and revolutionary ideas settling into solid scientific theories. However, we might not be aware of the extent to which the world is changing: not just because we constantly live within the eye of the hurricane, so to speak, but also because technologies sometimes advance too fast for us to comprehend and evaluate their influence on the ways we live, think, feel, and behave. One such technology is virtual reality, or VR: a concept introduced by science-fiction writers and scientists quite a while ago that is now becoming a trending technology worldwide. And since it is obvious that VR will from now on be an inalienable part of the world, it is important to contemplate the ways in which it will affect humanity.
If we analyze the term “virtual reality,” we naturally need to understand what each of its two components means on its own. According to the online Merriam-Webster dictionary, “virtual” means “very close to being something without actually being it” (Merriam-Webster.com). Specifically, this adjective is mostly applied to environments created with the help of computers (as in video games, for example). “Reality,” in turn, broadly refers to the three-dimensional world we live in and interact with. Considering these two definitions, it can be said that virtual reality is an artificially created three-dimensional environment constructed with the help of computers, which people can interact with in the same way as they do with the real world, using their senses to navigate and explore it.
Computer generation can emulate situations and environments that are hardly possible in the present real world: the visual effects in movies such as “Transformers” or “Avatar” are a solid example of this, and emulating regular real-life situations is, needless to say, also possible for VR. Therefore, VR can be used to enable people to perform actions without affecting the real world, which is especially useful for all kinds of training and practice. For instance, a future jet pilot can safely learn how to maintain and fly their aircraft without putting their life at risk or wasting expensive fuel. An astronaut can practice outer-space repair skills with the help of a computer simulation that completely emulates the conditions they will face in actual space. A medical student can learn how to perform surgeries or autopsies on fully interactive body models, which can respond appropriately to the student's every action; needless to say, this is much safer for learning than a novice surgeon performing their first operation on a real patient. Therefore, one of the most obvious effects VR will have on the modern world is the enhancement of studying and training capabilities, especially for people involved in dangerous jobs.
VR can be great for entertainment, socialization, and art. In 2014, probably the biggest social media giant, Facebook, bought Oculus VR, integrating the possibilities the technology offers into its platform. For example, with Oculus Rift, it will soon be possible to view your friends’ photos in a 360-degree mode, which basically means you will be able to be virtually present at the events you missed, seeing them from a first-person perspective for a more immersive experience (AndroidPit). The gaming and movie industries will benefit greatly from VR technology, attracting millions of new customers annually; creating engaging and thrilling entertainment products capable of fully capturing consumers’ attention will benefit these industries and boost their further development.
Also, virtual reality can be a great help and relief for people with limited capabilities, especially for those who are fully or partially paralyzed and thus have to live their lives confined to one or a few locations. It can give such people an opportunity to explore the world around them in the same way people without disabilities can: the ability to walk, run, and perform other actions we take for granted can be a lifesaver for the majority of paralyzed patients, and this is probably the best opportunity VR can offer humanity at the moment (Engadget). This is not to mention that VR allows things impossible in real life, such as teleportation or Superman-like flying, all of which would also be available to such people.
As can be seen, virtual reality is a new technology that can affect the lives of many people worldwide in a number of ways. Being an artificial, three-dimensional depiction of the real world, it grants numerous opportunities for practicing a wide range of skills that would otherwise involve great risks; jet pilots and medical students, for example, would definitely appreciate the possibilities VR grants. VR can significantly change the way people interact online, making their digital experiences far more immersive and realistic; Facebook, for example, has integrated Oculus technology, allowing 360-degree viewing of images, and this is just the beginning. Last but not least, VR will allow disabled people to experience what they are otherwise deprived of: walking, running, exploring the world, travelling, and so on, so its value in these terms is difficult to overestimate.
- “Virtual.” Merriam-Webster. Merriam-Webster, n.d. Web. 20 Sept. 2016.
- Evans, Clare. “How Will Virtual Reality Change Our Lives?” Engadget. N.p., n.d. Web. 20 Sept. 2016.
- Schmidt, Cory. “Here’s How VR Could Change Our Social Lives.” AndroidPIT. N.p., n.d. Web. 20 Sept. 2016.
|
Celsius is a familiar name to much of the world, since it represents the most widely accepted scale of temperature. It is ironic that its inventor, Anders Celsius, was primarily an astronomer and did not conceive of his temperature scale until shortly before his death.
The son of an astronomy professor and grandson of a mathematician, Celsius chose a life within academia. He studied at the University of Uppsala, where his father taught, and in 1730 he, too, was given a professorship there. His earliest research concerned the aurora borealis (northern lights), and he was the first to suggest a connection between these lights and changes in the earth's magnetic field.
Celsius traveled for several years, including an expedition into Lapland with French astronomer Pierre-Louis Maupertuis (1698-1759) to measure a degree of longitude. Upon his return he was appointed steward to Uppsala's new observatory. He began a series of observations using colored glass plates to record the magnitude of certain stars. This constituted the first attempt to measure the intensity of starlight with a tool other than the human eye.
The work for which Celsius is best known is his creation of a hundred-point scale for temperature, although he was not the first to have done so, since several hundred-point scales existed at that time. Celsius' unique and lasting contribution was his choice of the freezing and boiling points of water as the constant temperatures at either end of the scale. When the Celsius scale debuted in 1747 it was the reverse of today's scale, with zero degrees being the boiling point of water and one hundred degrees being the freezing point. A year later the two constants were exchanged, creating the temperature scale we use today. Celsius originally called his scale centigrade (from the Latin for "hundred steps"), and for years it was simply referred to as the Swedish thermometer. In 1948 most of the world adopted the hundred-point scale, calling it the Celsius scale. |
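Because Celsius' original scale simply ran in the opposite direction, converting a reading on it to the modern scale is a one-line reflection about the scale's midpoint; a minimal sketch:

```python
# Celsius' original scale ran the other way: 0 = boiling, 100 = freezing.
# Converting to the modern scale is a reflection, t_modern = 100 - t_original
# (the function is its own inverse, so it converts in both directions).
def original_to_modern(t_original):
    return 100.0 - t_original

print(original_to_modern(100))  # water freezes: 0 on the modern scale
print(original_to_modern(0))    # water boils: 100 on the modern scale
```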
These two worksheets help to guide students through the biography writing process. The first worksheet helps to organize their questions for their partner and record their answers. A whole-class discussion about biographies and the types of questions that authors ask before writing works really well before having students work on their own questions. The second worksheet is where students can take their questions and answers and turn it into a short biography about their partner. This is a great guided writing process to do before setting students loose on a more individual writing task, such as a biography about someone famous for Hispanic Heritage Month. |
England hoped to buy raw, unrefined materials: tobacco, indigo, ores, grain, wool, cotton, and sugar. These materials were abundant in the colonies and highly sought after in England, where tobacco was made into cigars, indigo was used to dye cloth blue, ores were refined into metals for smithing, grain was made into various foods or distilled into liquors, and wool and cotton were spun into thread, dyed, and woven into cloth. Sugar, too, was put to a variety of uses.
This system is called mercantilism. Colonies produced raw materials, like those listed above, at relatively low prices owing to supply and demand. They shipped these raw materials to the home country, where factories and other processing facilities turned them into refined products that were then shipped back to the colonies. |
The CoRoT satellite has discovered a planet only twice as large as the Earth orbiting a star slightly smaller than the Sun.
It is the smallest extrasolar planet (planet outside our solar system) whose radius has ever been measured.
The planet’s composition is not yet certain, but it is probably made predominantly of rock and water. It orbits its host star in 20 hours, which is the shortest orbital period of all exoplanets found so far. Astronomers infer its temperature must be so high (over 1000 degrees C) that it should be covered in lava or superheated water vapour.
Dr Suzanne Aigrain of the University of Exeter’s School of Physics is a member of the CoRoT team and was involved with this study.
Most of the 330 or so exoplanets discovered so far are giant planets, primarily composed of gas, like Jupiter and Neptune. This new object, named CoRoT-Exo-7b, is very different. “Finding such a small planet wasn’t a complete surprise,” says Dr Daniel Rouan, from LESIA in Paris, who announced the discovery today at a conference in Paris. Dr Alain Leger from the Institut d’Astrophysique de Marseille, leader of the discovery paper, explains: “It could be an example of a so-called ocean planet, whose existence was predicted some years ago: a Neptune-like planet, made of ice around a rocky core, that drifted so close to its star that the ice melted to form a fluid envelope.”
A planet as small as this one is extremely difficult to detect. CoRoT-Exo-7b was found because it passes in front of its host star, causing the star to dim very slightly once per orbit – a so-called transit, which in this case is only 0.03% deep. “We were able to see it with CoRoT because the telescope is in space, with no atmosphere to disturb the measurements or daylight to interrupt them,” explains Dr Roi Alonso, from the Laboratoire d’Astrophysique de Marseille.
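The quoted 0.03% dip is consistent with simple geometry: a transit's depth is roughly the ratio of the planet's disc area to the star's, (R_planet / R_star)². A sketch assuming a planet of twice Earth's radius and a star of roughly the Sun's radius (the article says the host is slightly smaller than the Sun, which would make the dip marginally deeper):

```python
# Transit depth ~ (R_planet / R_star)^2. Radii are standard reference values.
R_earth = 6.371e6  # m
R_sun = 6.957e8    # m

depth = (2 * R_earth / R_sun) ** 2  # planet twice Earth's radius, Sun-like star
print(f"{depth:.2%}")  # 0.03%
```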
The team then had to make sure they were not seeing one of many other kinds of objects that can mimic planetary transits, using complementary observations from the ground. This is particularly challenging in the case of such a small planet, as Dr Aigrain from the University of Exeter explains: “We ruled out every mimic except for a very improbable, almost perfect chance alignment of three stars. All our data so far are consistent with the transits being caused by a planet of a few Earth masses, though more data are needed for a precise mass estimate.”
The discovery of CoRoT-Exo-7b is being announced today at the CoRoT Symposium 2009 in Paris and will be published in a forthcoming special issue of the journal Astronomy and Astrophysics dedicated to results from CoRoT.
The University of Exeter has one of the UK’s largest astrophysics groups working in the fields of star formation and exoplanet research. The group focuses on one of the most fundamental problems in modern astronomy – when do stars and planets form and how does it happen? They conduct observations with the world’s leading telescopes and carry out numerical simulations to study young stars, their planet-forming discs, and exoplanets. This research helps to put our Sun and the solar system into context and understand the variety of stars and planetary systems that exist in our Galaxy. Over the next three years, the University is investing £80 million in five areas of interdisciplinary scientific research, one of which is Extrasolar planets.
CoRoT – which stands for Convection, Rotation and planetary Transits – was developed by the French Space Agency CNES, with important contributions from Austria, Belgium, Brazil, the European Space Agency, Germany, and Spain. It was designed to detect tiny variations in the luminosity of stars, with two scientific goals: searching for planets orbiting stars other than the Sun, and studying the internal structure of stars through stellar seismology. |
Watch this awesome fluid dynamics lab demo, and then stick around for the science:
How fluids flow
Liquid flows are described in two regimes, laminar and turbulent. Laminar flow is a smooth, constant fluid motion, as if the fluid molecules were marching in-pace, single file. Turbulent flow is dominated by chaos, producing eddies and unstable vortexes in the fluid.
In a flow that is turbulent, eddies and vortexes form, and internal forces rise in the liquid. This aggressive flow is important to understand because the more turbulent the flow, the more friction is produced. So much friction is generated that engineers dealing with long distances, such as in the mile-long pipes that deliver water to your house, must account for it or the water would slow to a halt before it got to your faucet.
Turbulent flow is how we typically imagine the movement of fluids. Consider a bridge post in a river. I’m sure you have seen the result of the interaction: eddies and swirling waters trail behind the post.
Laminar flow is a much less chaotic case. Sometimes called “streamline” flow, fluids moving in a laminar fashion move like sheets or layers. There is no mixing between the sheets; no eddies, no vortexes. Laminar flow is like watching playing cards slide past each other.
Engineers describe the transition between laminar and turbulent flow with what is called the Reynolds number. This number is a ratio of the forces acting inside the fluid, namely inertia and viscosity. The higher the velocity, the higher the Reynolds number, and the greater the turbulence. The inertia that comes along with water speeding along a river, for example, overpowers the viscosity of the water (think “thickness” or internal friction), and characterizes what happens in chaotic river rapids. Conversely, the higher the viscosity, the lower the Reynolds number, and the lower the turbulence. For example, a thick substance like molasses tends to flow in sheets without much turbulence.
Below a Reynolds number of around 2000 the flow is laminar; any higher than that and the streamline flow gives way to turbulence.
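The ratio itself can be sketched as Re = ρvL/μ, where ρ is the fluid density, v a characteristic velocity, L a characteristic length, and μ the dynamic viscosity. The thresholds follow the article's rough figures, and the property values below are illustrative textbook numbers, not measurements from the demo:

```python
# Reynolds number: the ratio of inertial to viscous forces in a flow.
def reynolds(rho, v, L, mu):
    """rho: density (kg/m^3), v: velocity (m/s), L: length scale (m),
    mu: dynamic viscosity (Pa*s)."""
    return rho * v * L / mu

def regime(re):
    if re < 1:
        return "creeping"   # viscous forces dominate completely
    if re < 2000:
        return "laminar"
    return "turbulent"

# Water (mu ~ 0.001 Pa*s) moving at 1 m/s through a 5 cm pipe:
re_water = reynolds(1000, 1.0, 0.05, 1.0e-3)
print(re_water, regime(re_water))  # 50000.0 turbulent

# Thick corn syrup (mu ~ 5 Pa*s) creeping at 1 cm/s over a 1 cm scale:
re_syrup = reynolds(1400, 0.01, 0.01, 5.0)
print(re_syrup, regime(re_syrup))  # 0.028 creeping
```

The corn-syrup estimate lands well below one, foreshadowing the "creeping flow" special case discussed below.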
What is happening in the video?
Like molasses, the corn syrup in the video above has a very high viscosity or “thickness.” Combine this with the slow velocity imparted by the rotating handle, and you get a very low Reynolds number. So the corn syrup in the video is in laminar flow, but how do the colored drops come back together when the flow is reversed?
When a fluid flows in sheets, encountering an obstacle can still create turbulence. Here is a look at a laminar fluid moving over a sphere (note the “streamlines”):
Even with laminar flow, we might expect the rotating handle in the video to mix up the drops beyond recognition. But it turns out that the video above is demonstrating a special case.
If a fluid is very viscous and is moving slowly enough, creeping flow can occur. With a Reynolds number of less than one, creeping flow indicates that the viscous forces far outweigh the inertial forces of the fluid’s motion. Like a marching band encountering an obstacle, in creeping flow the laminar sheets of fluid move around obstructions without breaking rank.
The corn syrup in the video above is in a creeping flow; the rotating handle creates no turbulence in it.
The bottom line
So, how do the colored drops in the video reform? Because we have a creeping flow situation, the corn syrup moves in (nearly) perfect parallel sheets. The handle rotates, the friction between the syrup and the handle moves the closest sheet, and that movement causes friction between the first sheet and the next closest sheet to the handle, which moves the next sheet, and so on.
Even after five turns of the handle, the creeping flow produces no mixing or turbulence. The amount of friction that was produced by turning the handle five times moves the syrup a certain amount, in separate sheets. So turning the handle back the opposite way in theory should produce the same amount of friction, this time moving the sheets back to their original positions. The demo realigns these sheets, restoring the drops to their original shape with hardly any distortion.
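The sheet-by-sheet reversal can be illustrated with a toy model: treat each fluid "sheet" at height y as displaced horizontally in proportion to y (a simple shear). Applying the shear and then its exact inverse returns every marked point to where it started. The specific coordinates and shear amount here are arbitrary illustrative values:

```python
# Toy model of reversible laminar shear: each sheet at height y is displaced
# horizontally by gamma * y. The inverse shear undoes it exactly.
def shear(points, gamma):
    return [(x + gamma * y, y) for x, y in points]

drop = [(0.0, 0.0), (0.25, 0.5), (0.5, 1.0)]  # three marked points in a "drop"
mixed = shear(drop, 4.0)       # crank the handle: the drop smears out
restored = shear(mixed, -4.0)  # crank it back the same amount
print(restored == drop)        # True: the drop reforms
```

Real creeping flow is only approximately this ideal, which is why the restored drops in the demo show slight blurring from molecular diffusion.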
Think of it this way: each drop in the corn syrup is like a fully solved Rubik’s cube. When you begin, each side is a whole color (a whole drop). If you then turn each vertical third of the cube a certain amount, like mixing the corn syrup with the handle and moving the “sheets” around, you will end up with a jumbled cube. But if you turn the vertical thirds of the cube back the exact opposite way, you will end up where you started, with a fully solved cube. The drops in the demo above illustrate this beautifully. |
The Dead Sea is the lowest body of water in the world, lying about 400 meters below sea level. An inland salt lake located between Israel and Jordan, it is fed by the Jordan River from the north. The Dead Sea is 80 kilometers long and 18 kilometers wide, with a surface area of 1,020 square kilometers and a maximum depth of 400 meters. The Lisan Peninsula divides the lake into two basins of different sizes and depths: the northern basin covers about three-quarters of the area and reaches 400 meters deep, while the southern basin averages less than 3 meters.
The Dead Sea occupies the northern continuation of the East African Rift Valley, a block of sunken crust sandwiched between two parallel geological fault scarps. It lies in a desert where rainfall is scarce and irregular. Winters are frost-free and summers very hot, driving evaporation of about 1,400 mm of water a year and often shrouding the lake in mist. The Jordan River delivers some 540 million cubic meters of water to the sea each year, and four small but perennial streams flow in from the east. Because evaporation dominates in summer and inflow in winter, the water level varies seasonally by 30 to 60 centimeters.
The Dead Sea's salt content is extremely high, and the deeper the water, the saltier it becomes. At depth, the water is saturated and sodium chloride precipitates out of solution. This extreme salinity is the reason people float so easily in the Dead Sea. Ordinary seawater contains about 3.5% salt; the Dead Sea contains roughly 23% to 25%. Even the surface water holds 227 to 275 grams of salt per liter, making the Dead Sea a vast reservoir of salt, with a total estimated at about 13 billion tons. Fish cannot survive in the water, and the shores support no flowers or plants, which is why people call it the Dead Sea. In recent years, however, scientists have found algae and bacteria living in the sediments of the lake bed. |
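The Dead Sea's famous buoyancy follows from Archimedes' principle: a floating body sinks until it displaces its own weight of fluid, so the submerged fraction equals ρ_body / ρ_fluid. A sketch with rough illustrative densities (assumed figures, not measurements from the text above):

```python
# Archimedes: submerged fraction of a floating body = rho_body / rho_fluid.
# Densities below are rough illustrative figures.
rho_body = 985.0       # human body, kg/m^3 (slightly less than fresh water)
rho_seawater = 1025.0  # ordinary seawater
rho_dead_sea = 1240.0  # Dead Sea brine, ~25% dissolved salt

print(round(rho_body / rho_seawater, 2))  # ~0.96 submerged in the ocean
print(round(rho_body / rho_dead_sea, 2))  # ~0.79 submerged in the Dead Sea
```

In ordinary seawater a swimmer floats with almost the whole body underwater; in the dense Dead Sea brine roughly a fifth of the body rides above the surface, which is why bathers can famously recline and read.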
Indo-Europeans are representatives of a language family now widely distributed all over the world, with primary concentration in Europe, the Middle East, and northern Asia. Sir William Jones, who emphasized the similarity of Sanskrit, Greek, Latin, Celtic, and the German language, introduced the term Indo-European in 1786.
Primarily this term was used to mark the similarity detected in the languages of a major part of the population of Europe, Iran, and India. In the 18th and 19th centuries the focus of such studies shifted to the detection of similarities between German and other languages, which is why in 1823 Julius Klaproth introduced the term Indo-Germanic. This was quickly replaced by the term proposed by Max Müller, Aryan. Since the second half of the 20th century the term Indo-European has replaced the other versions.
The Indo-European family of languages in its contemporary understanding was designed in 1863 by August Schleicher as a peculiar genealogical tree, which reflects its wide distribution and its process of inner development and disintegration into dialects and new languages. This scheme is based on the assumption that common Indo-European pre-language was distributed first only in a restricted area, and in the course of time its transmitters settled all over Eurasia (in modern times also in America, Africa, and Australia) disseminating their language and culture.
Many contemporary linguists distinguish 10 branches in the Indo-European family of languages: Indo-Iranian, Slavonic and Baltic, Armenian, Anatolian, Albanian, Tokharian, Italic, Celtic, Germanic, and Hellenic. Every one of the aforementioned branches unites modern as well as “dead,” or extinct, languages used in the remote past by collectives known only by remnants of their artifacts and/or written sources, such as Sanskrit, Latin, Old Greek, Venetic, Old Persian, Lydian, and Mycenaean. Main branches of Indo-European languages are distributed unevenly in the contemporary world and sometimes could be subdivided into groups and subgroups with different numbers of languages.
Indo-European Homeland Identification
Searches for the place and time of origin of the Indo-Europeans have been based on the assumption that the linguistic and cultural similarity of the Indo-European family of languages stems from a common ancestor that lived in the remote past. Contemporary archaeology, cultural and physical anthropology, linguistics, and other neighboring sciences provide a wide variety of ideas and hypotheses, which can be divided into two groups: one tends to see the common Indo-European ancestors as early agriculturists living mainly by land cultivation, while the second searches for the earliest Indo-Europeans among nomadic populations whose economy and mode of life were based on cattle breeding and the exploitation of domestic horses and wheeled vehicles.
Since the late 1960s new insights into the Indo-Europeans’ ancient homeland imply the convergent development of a series of neighboring language transmitters, which practiced mutual borrowing of terminology connected with the main field of their livelihood and subsistence (the so-called surge model). Another contemporary tendency of Indo-European homeland research tries to integrate a genealogical tree model with a theory of regional development of Indo-European languages. The latest developments are based on archaeological data.
Indo-Europeans As Nomads
V. G. Childe put forward the North Black Sea hypothesis of an Indo-European homeland in the mid-20th century. In spite of an apparent difference of backgrounds and arguments, this hypothesis was illustrated with data from Neolithic settlements of the region, implying the identification of early Indo-Europeans as the first nomads. This may be the only reasonable explanation of the rapidity and scale of the pre-Indo-Europeans’ dissemination over the Eurasian steppe and forest-steppe region.
Valentin Danilenko also regarded nomadic impact (connected with the Seredniy Stig culture of the Lower Dnieper region) as a crucial factor in the Indo-Europeans’ spread to the inner territories of Europe and the diversification of the Indo-Europeans into several branches. In Ukraine, Yuriy Rassamakin has also studied this question, localizing a probable homeland of the Indo-Europeans in the steppe zone between the Don and Danube Rivers. He identifies the creators of the Seredniy Stig culture as early Indo-Europeans, who conducted progressive forms of a cattle-breeding economy of a pastoralist genre and lived alongside non-Indo-Europeans. Many representatives of Soviet archaeology (Alexander Bryusov, Viktor Gening, Dmitry Telegin) regarded the Caucasus and the steppe landscapes of the northern Black Sea region as the most probable homeland of early Indo-European pastoralists.
Marija Gimbutas localized the Indo-European homeland in the Ural-Don steppes and tended to associate the Indo-Europeans with the so-called mound-grave (kurgan) culture circle, which includes different peoples whose only common feature is their funeral rites. She regarded the Indo-Europeans as aggressive invaders whose attacks during the fifth millennium b.c.e. caused the destruction of prosperous agricultural centers in the Balkans, Asia Minor, central Europe, and Transcaucasia and, later, in the Aegean and Adriatic region. John Mallory recently proposed an original interpretation of the creators of the Pit-Grave (Yamnaya) culture as the earliest proto-Indo-Europeans. Drawing on a rich and extensive empirical (archaeological and linguistic) database, he has traced movements of the Pit-Grave population to Siberia; the Near East; southeastern, central, and northern Europe; and other regions of Eurasia.
Most versions of the nomadic interpretation of early Indo-Europeans are based on the assumption that the dispersal of this population was relatively rapid and covered huge territories during a restricted period of time (fifth–third millennia b.c.e.). It implies the development of effective transportation (such as horseback riding and the use of wheeled carts for heavy items and belongings) and sparse settlement, with highly developed funeral monuments that reflected complicated rites and customs. That is why traces of horse domestication, the origin of the wheel, and the construction of mound graves are usually regarded as the most reliable archaeological evidence of Indo-European nomadism.
Indo-Europeans As Early Agriculturists
Most advocates of the view of early Indo-Europeans as early agriculturists believe that the process of their formation should be viewed in a broad chronological frame, beginning with the Mesolithic Period and the transition to a productive economy. Their spread is usually connected with the dispersion of farming skills, which implies the borrowing of terminology, rites, and customs; the sharing of the “oasis,” or monocentric, theory of the transition to land cultivation and cattle breeding; and searches for the time and place of Indo-European origin at the origin of agriculture.
One of the most widespread understandings of pre-Indo-Europeans as early agriculturists in contemporary prehistory and archaeology was proposed in the late 1980s by Colin Renfrew. Localizing the Indo-Europeans in central and eastern Anatolia as early as the middle of the eighth millennium b.c.e., he distinguishes 10 diffusions of Indo-Europeans to adjacent and relatively remote territories (including the Black Sea steppe region). Such diffusions were caused by the necessity of ensuring facilities for an agricultural mode of life (first of all, land suitable for farming), which did not imply widespread human migrations: in Renfrew’s understanding it was rather a gradual movement of individuals or their small family groups (approximately 1.6 miles per year), which caused a series of local hunter-gatherer groups to adapt to an agricultural mode of life. The Soviet researcher Igor Diakonov, who localized the Indo-Europeans’ homeland in the Balkan and Carpathian regions, also indicated that their ancestors could have come from Asia Minor with their domesticated animals and plants. He dated this process to 5000–4000 b.c.e.
Russian archaeologist Gerald Matyushin believed that the only common Indo-European traits that can be traced and proved archaeologically are microlithic industry and the origin of a productive economy (land cultivation and cattle breeding). He localized the earliest displays of both of these traits in the Zagros Mountains and the southern Caspian region, suggesting that the distribution of agriculture in Europe should be connected with the expansion and migration of Middle Eastern inhabitants to the north. European hunter-gatherers adopted agriculture together with the appropriate rituals, rites, and spells, which were pronounced in the language of the pioneers of land cultivation, ensuring the linguistic similarity of Indo-European peoples. His hypothesis is based on the mapping of microlithic technology, and its temporal and spatial distribution was later supported by the linguistic studies of T. Gamkrelidze and Vyacheslav Ivanov. They suggest that the ancestral home of the Indo-Europeans was located in the region of Lake Van and Lake Urmia, from whence they moved to Middle Asia, the northern Caspian region, and the southern Urals.
One more version of the agriculturist interpretation of early Indo-Europeans is the hypothesis that their origin lay in central Europe, on the territory between the Rhine, the Vistula, and the Upper Danube. It was based on the correlation of Indo-European hydronymy with the distribution of the populations connected with the linear pottery, funnel beaker, globular amphora, and corded ware cultures. G. Kossina, E. Mayer, P. Bosch-Gimpera, and G. Devoto shared this idea, which was actively discussed during the first half of the 20th century, especially by the Nazis. This discussion resulted in the identification of pre-Germans (or pre-Indo-Germans) with Aryans, who were regarded as transmitters of the highest cultural achievements of ancient civilization. This conclusion was broadly used by fascist propagandists as a justification for the genocide of the non-Aryan population practiced in Europe during World War II.
Synthetic, Or Compromise, Ideas
One of the earliest versions of a compromise was proposed in 1969 by the Soviet archaeologist Valentin Danilenko. He assumed that the roots of the Indo-Europeans could be traced as early as 10,000–7000 b.c.e. on the border of Europe and Asia. By 5000 b.c.e., pre-Indo-Europeans (the population of the Bug-Dnister, Sursko-Dniper, and linear pottery cultures) had moved to the northwestern Black Sea region. He supposed the presence of at least two dialect zones in the pre-Indo-European homeland at that time: the western agricultural and the eastern nomadic. Higher activity of the latter during the Neolithic caused the further disintegration of this dialectic unity and the relatively rapid spread of Indo-Europeans into the inner territories of Europe under the influence of the nomadic culture of Seredniy Stig. According to Danilenko, several branches of Indo-Europeans could already be traced at that time, among them the Tokharians (pit-grave culture), Indo-Iranians (Usatovo, Kemi-Oba, Lower Mykhailivka cultures), proto-Thrakians, and proto-Daco-Mezians (representatives of the agricultural zone, including the Trypillie phenomenon).
Russian researcher Viktor Safonov proposed an original version of the history of the Indo-Europeans, which he divided into four periods, each with a particular homeland: 1) the boreal period (before 9000 b.c.e.), with no apparent traces of Indo-European separation from other languages; 2) the period of the early Indo-European language (8000–6000 b.c.e.), with the homeland in the western and central part of southern Anatolia (Çatalhöyük culture); 3) the period of the middle Indo-European language (6000–5000 b.c.e.), with the homeland in the Danubian region (Vinča culture); and 4) the period of the late Indo-European language (5000–3000 b.c.e.), during the seven stages of which the final version of the Indo-European homeland was shaped in the course of the dispersion of the Lengyel and funnel beaker cultures. During the first half of the third millennium b.c.e. he tracked the disintegration of Indo-European language unity into different language branches with relatively independent and self-sufficient histories.
Mikhail Andreev, who used “linguistic paleontology” based on the studies of F. de Saussure, proposed a similar version of Indo-European language development. In his version, three global stages of Indo-European language formation are distinguished: boreal, in the Late Paleolithic; early Indo-European, in the Mesolithic; and late Indo-European. He traces the primary homeland of the Indo-Europeans to the vast spaces of Eurasia along the 50th parallel, from the Rhine River in the west to the Altay in the east.
Other trends in the conceptualization of the Indo-Europeans’ homeland reflect a growing need to abandon the search for a narrow and strictly outlined territory in which the earliest displays of Indo-European language and culture could be traced. Many linguists (Oleg Trubachev, Lev Gindin) as well as archaeologists (Nikolay Merpert, Evgeniy Chernykh) believe in the possibility of the divergent and convergent development of languages, which does not necessarily imply the existence of any Indo-European pre-language.
Following the ideas of Nikolay Trubetskoy, Pizani, and others, the roots of contemporary Indo-European languages should be sought in an environment of deeply interconnected dialects of the Neolithic through the Bronze Age, which gave birth to the primary Indo-European languages such as Greek, Sanskrit, Latin, and Celtic. In this sense, all attempts to identify the first Indo-Europeans with any archaeological data are regarded as useless and as contradicting the basic principles of historical reconstruction.
Contemporary studies in the field of the Indo-Europeans’ homeland are concerned mainly with the Neolithic population of the European steppe region and imply that the homogeneity of the early pre-Indo-European family of languages was destroyed during the fourth millennium b.c.e.
Thalassemia is an inherited blood disorder in which the body makes an abnormal form of hemoglobin. Hemoglobin is the protein molecule in red blood cells that carries oxygen. The disorder results in excessive destruction of red blood cells, which leads to anemia. Anemia is a condition in which your body doesn’t have enough normal, healthy red blood cells. Thalassemia is inherited, meaning that at least one of your parents must be a carrier of the disease. It’s caused by either a genetic mutation or a deletion of certain key gene fragments.
There are three main types of thalassemia (and four subtypes):
- Beta thalassemia, which includes the subtypes major and intermedia
- Alpha thalassemia, which includes the subtypes hemoglobin H and hydrops fetalis
- Thalassemia minor
All of these types and subtypes vary in symptoms and severity. The onset may also vary slightly.
Beta thalassemia: Beta thalassemia occurs when your body can’t produce beta globin. Two genes, one from each parent, are inherited to make beta globin. This type of thalassemia comes in two serious subtypes: thalassemia major (Cooley’s anemia) and thalassemia intermedia.
Thalassemia major: This is the most severe form of beta thalassemia. It develops when both beta globin genes are missing or nonfunctional. The symptoms of thalassemia major generally appear before a child’s second birthday. The severe anemia related to this condition can be life-threatening. Other signs and symptoms include:
- Frequent infections
- A poor appetite
- Failure to thrive
- Jaundice, which is a yellowing of the skin or the whites of the eyes
- Enlarged organs
This form of thalassemia is usually so severe that it requires regular blood transfusions.
Thalassemia intermedia: It is a less severe form. It develops because of alterations in both beta globin genes. People with thalassemia intermedia don’t need blood transfusions.
Alpha thalassemia: Alpha thalassemia occurs when the body can’t make alpha globin. In order to make alpha globin, you need to have four genes, two from each parent.
This type of thalassemia also has two serious types: hemoglobin H disease and hydrops fetalis.
Hemoglobin H: This develops when a person is missing three alpha globin genes or experiences changes in these genes. This disease can lead to bone issues. The cheeks, forehead, and jaw may all overgrow. Additionally, hemoglobin H disease can cause:
- An extremely enlarged spleen
Hydrops fetalis: It is an extremely severe form of thalassemia that occurs before birth. Most individuals with this condition are either stillborn or die shortly after being born. This condition develops when all four alpha globin genes are altered or missing.
Thalassemia minor: People with thalassemia minor don’t usually have any symptoms. If they do, it’s likely to be minor anemia. The condition is classified as either alpha or beta thalassemia minor. In alpha minor cases, two genes are missing. In beta minor, one gene is missing. The lack of visible symptoms can make thalassemia minor difficult to detect. It’s important to get tested if one of your parents or a relative has some form of the disease.
In 1954, Frank Ficarra was a young Italian-American businessman working and living in Brooklyn when two of his young children were diagnosed with a rare blood disease, Cooley’s anemia, also known as thalassemia major.
Frank Ficarra began organizing neighbourhood blood drives to make sure that his children and others like them would have the precious blood they needed to survive. Even though these blood drives were successful, Frank Ficarra realized that more was needed.
One autumn night, Frank Ficarra and the parents of other Cooley’s anemia patients met in the back of his Brooklyn butcher shop to discuss what they could do to help their children and let the world know about this rare disease. From that meeting, the seeds of the Cooley’s Anemia Foundation were sown.
Since that night, CAF has grown into a national and international force with an extraordinary record of accomplishments. CAF established the first Fellowship Program for thalassemia research and has become a strong voice in Washington for thalassemia patients and their families.
Prevalence of thalassemia
Most children with thalassaemia are born in low-income countries. Worldwide, transfusion is available for a small fraction of those who need it, and most transfused patients will die from iron overload unless an available and potentially inexpensive oral iron chelator is licensed more widely. The patients’ predicament underlines the need for combined treatment and prevention programmes. Wherever combined programmes exist survival is steadily improving, affected births are falling, and numbers of patients are stabilizing. The policy is spreading because of its demonstrable cost-effectiveness, and thalassaemia is gradually becoming contained.
WHO recommends the use of haemoglobin concentrations to assess prevalence of iron deficiency in a lower-income setting. However, the recommended cut-off values for haemoglobin concentrations are derived from populations of northern European origin and can lead to overestimation of iron deficiency where thalassaemias are common. The high global prevalence of thalassaemia means that each population should use their own baseline normal ranges in the assessment of iron deficiency.
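The point above about population-specific baselines can be sketched in a few lines of code. This is a minimal illustration, not a clinical tool: the default values are the commonly cited WHO adult haemoglobin cutoffs (g/dL), the group names are placeholders of my own choosing, and `local_cutoff` stands in for a hypothetical baseline a population might derive from its own reference data.

```python
from typing import Optional

# Commonly cited WHO haemoglobin cutoffs for adults, in g/dL
# (real guidance also has age- and pregnancy-specific values).
WHO_CUTOFFS = {"adult_men": 13.0, "adult_women": 12.0}

def is_anemic(hb_g_dl: float, group: str,
              local_cutoff: Optional[float] = None) -> bool:
    """Flag anemia using a local baseline when available, else the WHO cutoff."""
    cutoff = local_cutoff if local_cutoff is not None else WHO_CUTOFFS[group]
    return hb_g_dl < cutoff

# The same reading is flagged under the global cutoff but not under a
# lower local baseline, as might apply where thalassaemia trait is common.
print(is_anemic(11.8, "adult_women"))                     # True
print(is_anemic(11.8, "adult_women", local_cutoff=11.5))  # False
```

The comparison shows why a single global cutoff can overestimate iron deficiency in populations where mildly lower haemoglobin is the healthy norm.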
Factors known for raising the risk are:
Family history: As mentioned earlier, thalassemia runs in families. The mutated hemoglobin genes are carried forward from the parents to their children. Therefore, having a family history raises your chance of thalassemia.
Specific ancestry: It is observed that the blood disorder commonly occurs in people of African, Asian, Middle Eastern, Greek, and Italian ancestry.
Thalassemia occurs when there’s an abnormality or mutation in one of the genes involved in hemoglobin production. You inherit this genetic defect from your parents.
If only one of your parents is a carrier for thalassemia, you may develop a form of the disease known as thalassemia minor. If this occurs, you probably won’t have symptoms, but you’ll be a carrier of the disease. Some people with thalassemia minor do develop minor symptoms.
If both of your parents are carriers of thalassemia, you have a greater chance of inheriting a more serious form of the disease.
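The carrier-inheritance pattern described above (beta thalassemia is autosomal recessive) can be sketched as a simple Punnett-square enumeration. The allele labels `N` (normal beta globin allele) and `t` (thalassemia allele) are hypothetical names chosen for this illustration.

```python
from itertools import product
from collections import Counter

# Both parents are carriers: each has one normal allele (N)
# and one thalassemia allele (t).
parent1 = ("N", "t")
parent2 = ("N", "t")

# A child inherits one allele from each parent; enumerate the
# four equally likely combinations (a Punnett square).
outcomes = Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))
total = sum(outcomes.values())

for genotype, count in sorted(outcomes.items()):
    print(f"{genotype}: {count}/{total}")
# NN: 1/4 (unaffected), Nt: 2/4 (carrier), tt: 1/4 (affected)
```

With two carrier parents, each pregnancy has a 1-in-4 chance of the affected `tt` genotype and a 1-in-2 chance of producing another carrier, which is the "greater chance" the paragraph refers to.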
The symptoms of thalassemia can vary. Some of the most common ones include:
· Bone deformities, especially in the face
· Dark urine
· Delayed growth and development
· Excessive tiredness and fatigue
· Yellow or pale skin
Diagnosis and tests
Most children with moderate to severe thalassemia receive a diagnosis by the time they are 2 years old. People with no symptoms may not realize that they are carriers until they have a child with thalassemia.
Blood tests: Blood tests can detect if a person is a carrier or if they have thalassemia.
A complete blood count (CBC): This can check levels of hemoglobin and the level and size of red blood cells.
A reticulocyte count: This measures how fast red blood cells, or reticulocytes, are produced and released by the bone marrow. Reticulocytes usually spend around 2 days in the bloodstream before developing into mature red blood cells. Between 1 and 2 percent of a healthy person’s red blood cells are reticulocytes.
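The percentage quoted above is just the reticulocyte share of the red cells counted. A minimal sketch, with made-up counts for illustration (real reports use lab-specific reference ranges):

```python
def reticulocyte_percent(reticulocytes: int, red_cells: int) -> float:
    """Reticulocytes as a percentage of the total red blood cells counted."""
    return 100.0 * reticulocytes / red_cells

# e.g. 15 reticulocytes seen per 1,000 red cells counted
pct = reticulocyte_percent(15, 1000)
status = "within 1-2% range" if 1.0 <= pct <= 2.0 else "outside range"
print(f"{pct:.1f}% ({status})")  # 1.5% (within 1-2% range)
```

A value well above this range suggests the marrow is releasing red cells quickly (as in active hemolysis), while a low value suggests underproduction.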
Iron: This will help the doctor determine the cause of anemia, whether thalassemia or iron deficiency. In thalassemia, iron deficiency is not the cause.
Genetic testing: DNA analysis will show whether a person has thalassemia or faulty genes.
Prenatal testing: This can show whether a fetus has thalassemia, and how severe it might be.
Chorionic villus sampling (CVS): a piece of placenta is removed for testing, usually around the 11th week of pregnancy.
Amniocentesis: a small sample of amniotic fluid is taken for testing, usually during the 16th week of pregnancy. Amniotic fluid is the fluid that surrounds the fetus.
Treatment and medication
Treatment depends on the type and severity of thalassemia.
Blood transfusions: These can replenish hemoglobin and red blood cell levels. Patients with thalassemia major will need between eight and twelve transfusions a year. Those with less severe thalassemia will need up to eight transfusions each year, or more in times of stress, illness, or infection.
Iron chelation: This involves removing excess iron from the bloodstream. Sometimes blood transfusions can cause iron overload. This can damage the heart and other organs. Patients may be prescribed deferoxamine, a medication that is injected under the skin, or deferasirox, taken by mouth.
Patients who receive blood transfusions and chelation may also need folic acid supplements. These help the red blood cells develop.
Bone marrow, or stem cell, transplant: Bone marrow cells produce red and white blood cells, hemoglobin, and platelets. A transplant from a compatible donor may be an effective treatment in severe cases.
Surgery: This may be necessary to correct bone abnormalities.
Gene therapy: Scientists are investigating genetic techniques to treat thalassemia. Possibilities include inserting a normal beta globin gene into the patient’s bone marrow, or using drugs to reactivate the genes that produce fetal hemoglobin.
In most cases, you can’t prevent thalassemia. If you have thalassemia, or if you carry a thalassemia gene, consider talking with a genetic counselor for guidance if you’re thinking of having children. |
Fiction or nonfiction? Real or make-believe? Fact or fantasy? These are all phrases for an important early learning concept: Could this really happen? And what better time to encourage child thought about make-believe events versus real-life occurrences than Groundhog Day!
You can introduce your child (or children) to this key idea by asking him or her to look at books you have in your home or classroom, or books found during a trip to the local library. Use books to begin the conversation: “Could this really happen?” We like to encourage children to sort books into 2 piles: “It Can Happen” and “It Can Not Happen.”
|This book can happen.|
|This book is pretend. There is a talking bear.|
After this, you can introduce your child (or children) to Groundhog Day. Tell the story of the little creature, Punxsutawney Phil. We encourage you to use the name Punxsutawney, as our experience tells us that young children often delight in saying unusual multisyllabic words. Practice this with them and watch the joy when they share this name with others!
Discuss that a real groundhog will come out of its hole. Ask children if the groundhog can really tell us what the weather might be for the next several weeks. Some children may have valid reasons for answering yes. We encourage you to accept any reasoning. The key component is for children to verbalize support for a position. It is important to remember that children should learn to give reasons for what they think.
As you look forward to the big day, have children make their own groundhog, popping out of a hole. Use a cup for the groundhog burrow. I use white cups for snowy areas and brown cups for other spots. You can cover the cup with brown paper from a recycled bag as shown below.
|Cover any disposable cup to make it brown or white.|
Give children an outline of a groundhog or have them draw one of their own.
|Children can draw their own animal or you can help them.|
Cut out the animal. Next, poke a hole in the bottom of the cup and tape the groundhog to a craft stick or pencil. Your child can move the groundhog up and down to peek out of its hole. Remember, this is a valuable opportunity for children to experience the concept of up and down or in and out.
|The groundhog is in its hole.|
|The groundhog is out of its hole. It is up.|
Encourage verbal skills by asking children to explain Groundhog Day as they show their art creation. You can add to the fun by reading and rereading the following poem:
What animal gives us the weather report,
Right there from his snowy winter fort?
He wiggles his nose up from the ground,
And looks at the scenery all around.
If his shadow he suddenly sees,
Into his burrow this fellow flees.
Then snowy, icy winter stays,
For 42 more freezing days.
But if this creature runs about,
Then “Spring is here!” we all can shout!
Yes, the groundhog is legendary,
For weather advice in February! |
1. James Buchanan served as the 15th President of the United States (1857–61), taking office in 1857, when the country was on the brink of civil war, a circumstance that has landed him on lists of the least popular presidents. He was born on April 23, 1791, in Cove Gap, Pennsylvania. His father was James Buchanan Sr., a merchant, and his mother was Elizabeth Speer Buchanan. Buchanan is the only U.S. president who never married.
2. His political career was rich: he was elected five times to the House of Representatives, acted as minister to Russia, served for over a decade in the Senate, and served under Polk as Secretary of State.
3. During his tenure as minister to Britain, Buchanan helped draft the Ostend Manifesto in 1854, which would have enabled America to acquire Cuba from Spain; the proposal was never acted upon but caused sparks in the anti-slavery territories.
4. Serving as minister to Britain under President Pierce’s administration earned him an edge among his peers, and he won the Democratic nomination for the presidency in 1856. His inefficient handling of the slavery issue later cost him his popularity.
5. After taking office, Buchanan tried to maintain peace between the pro-slavery and anti-slavery camps, but his mishandling only increased the tension, as many saw him as a firm sympathizer and supporter of the southerners.
6. Buchanan’s agenda suffered even more when he supported the Lecompton Constitution, which would have made Kansas a slave state but was later turned down. This sowed a bitter seed between Congress and the president that only intensified thereafter.
7. He did not seek reelection in 1860, but by the time he left office the attack of Confederate forces in South Carolina marked the beginning of the Civil War. He retired to Wheatland, where he thoroughly supported Lincoln’s policies. He died on June 1, 1868, at age 77.
When metamorphic rocks undergo pressure they are changed, but how are they changed? Do they become fragments of rocks? Doesn't that mean that the difference between sedimentary rocks and metamorphic rocks (when talking about fragment rocks) is that sedimentary rocks undergo weathering, which makes them fragments and metamorphic rocks undergo immense pressure which makes it into fragments?
Metamorphic rocks are changed by transformations deep underground. Being deep underground there is immense pressure and heat.
The transformations can involve just the crystal size of a particular mineral, or different minerals can in fact be formed. For a particular mineral there may also be different crystal structures, which depend on the pressure and temperature at which the crystal was formed. The different compositions and crystal structures are shown on geochemical phase diagrams. Geochemical modeling can be used to predict the various reactions based on the temperature and pressure profile to which the material is subjected.
Metamorphic rocks are formed when a rock (sedimentary, igneous, or a previous metamorphic rock) comes under high pressure and/or temperature. Pressure and temperature force the atoms to form new minerals and thereby a new kind of rock. The rock is not necessarily fragmented; rather, it morphs through recrystallization into a new state as the material reorganizes in response to the pressure and temperature.
Recrystallization doesn't remove any material; it's only a physical reorganization that compacts the rock. If some change in the chemical composition occurs, it's called metasomatism. In metasomatism, atoms are actually moved from one part of the rock or formation to another, often with water involved.
A good petrologist can usually determine what kind of parent rock (the protolith) has been metamorphosed and to what degree, but sometimes it can be difficult to recognize the difference. At lower degrees of metamorphism, features of the original rock are preserved; e.g. you can see ripples and even fossils in slates, or deformed structures from the source rock. At higher degrees of metamorphism it becomes increasingly difficult to imagine what the parent material looked like. Gneiss formed from sandstone and gneiss formed from granite can look very similar, as the lithology represents the conditions of metamorphism, not the protolith.
It can also be difficult to recognize a metamorphic rock. A weathered slate and shale or even granite and gneiss can appear very similar in the field. However, the processes to form the rocks are different and with a closer look at the minerals it can be possible to understand what kind of process that formed the rock.
(1. Blueschist facies; 2. Eclogite facies; 3. Prehnite-pumpellyite facies; 4. Greenschist facies; 5. Amphibolite facies; 6. Granulite facies; 7. Zeolite facies; 8. Albite-epidote-hornfels facies; 9. Hornblende-hornfels facies; 10. Pyroxene-hornfels facies; 11. Sanidinite facies)
The temperature and pressure determine what metamorphic rock you get, but the chemical composition is inherited from the protolith. Degrees of metamorphism are called metamorphic facies. This diagram shows at what depth and temperature a particular rock is formed.
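The idea that temperature and pressure select the facies (while composition comes from the protolith) can be sketched as a toy lookup. The numeric boundaries below are rough, simplified illustrative values of my own choosing; on a real P-T diagram the facies are curved fields, not sharp cutoffs.

```python
def rough_facies(temp_c: float, pressure_kbar: float) -> str:
    """Return an approximate facies name for a temperature (Celsius)
    and pressure (kbar). Thresholds are illustrative, not exact."""
    if pressure_kbar > 10:
        # High-pressure (subduction-zone) series
        return "eclogite" if temp_c > 500 else "blueschist"
    if temp_c < 200:
        return "zeolite"
    if temp_c < 450:
        return "greenschist"
    if temp_c < 700:
        return "amphibolite"
    return "granulite"

# Conditions loosely typical of mid-crustal regional metamorphism
print(rough_facies(550, 6))   # amphibolite
print(rough_facies(600, 15))  # eclogite
```

A real classification would consult a published facies diagram rather than fixed thresholds, but the structure of the decision (pressure series first, then temperature grade) is the same.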
Igneous rocks are formed when a melt hardens to crystals and sedimentary rocks are formed from sediments. All rocks weather if exposed to water and air, and weathered material is transported by rivers to oceans where sandstone can be formed from the sand at the beach and shale from the finer sediments further away from the coast. Igneous rocks can be formed as hot magma intrudes the crust and slowly cools down and minerals are formed or form lava at volcanoes. This is known as the rock cycle and is one of the fundamental, but complicated, concepts in geology. You can read more about the rock cycle e.g. here to learn more about rock types and how they are formed.
The existing answers are correct but I think they miss an important aspect of your question, which is the effect of pressure.
Do they become fragments of rocks?
No. Fragmentation of rocks is a relatively low temperature and pressure process. This is the kind of stuff you would see in near-surface environments. Once rocks become hot under pressure, they are no longer brittle but are ductile. This means they can bend and flow. Think of chocolate: put it in the freezer and it's hard as a rock, but if you take it out (when it's still solid) it becomes easier to bend it and shape it with your hands.
Same with rocks. A very common feature of metamorphic rocks is folding:
Doesn't that mean that the difference between sedimentary rocks and metamorphic rocks (when talking about fragment rocks) is that sedimentary rocks undergo weathering, which makes them fragments and metamorphic rocks undergo immense pressure which makes it into fragments?
No. The immense pressure actually works well to hold everything together instead of fragment them. There is some fragmentation in metamorphic rocks, but it's usually manifested as faulting rather than fragmentation into pieces (also known as brecciation). Here's an example:
Metamorphic rock can have been halfway to the state of lava for a short time; it may have been like a quite tough bread dough. You can imagine its viscosity as similar to that of ordinary glass heated near its softening point: such a pane can be bent into a U shape in anywhere from twenty minutes to twenty seconds, becoming more malleable the hotter it is. Instead of sand, the metamorphic "dough" is hot limestone, clay, pebbles, and so on.
The dough slowly moves and deforms, for example under the pressure of buckling tectonics from distant events such as rising mountain chains. The chemicals inside the dough separate out and coagulate into compounds that bond readily. Every chemical reacts differently and has stronger or weaker bonds at a given pressure of metamorphism, so different blobs inside the metamorphic rock may be more or less mobile and viscous. You get crystals of new materials in different sizes, more often roundish, sometimes flat (as in gneiss), indicating the flow and pressure in the dough; the crystals branch out not that differently from ice crystals as they recrystallize and remineralize. The time and pressure of metamorphism vary a lot and determine the degree of change that occurs.
What was the most important date in post-war European history? Some historians would say all European history as it changed Europe for ever.
Who created the real turning point, when his government presented a democratic solution for the whole Continent: its politics, economics and its defence?
Robert Schuman in 1948!
The distinguished French professor of history, Jean-Baptiste Duroselle said that the date of 20 July 1948 must be considered as the real turning point of European history. It was a new point of departure.
“For the first time a government officially presented a project aimed at the construction of Europe. While the idea of supranationality was not clearly delineated, it seems that the project implied it. Before 1914, Europe was only conceived in terms of equilibrium or balance of power.”
At the start of the 20th century, the system of alliances divided Europe into two blocs, he wrote. The European equilibrium was, as US President Wilson stated, the deep cause of the Great War. The interwar initiative of Briand tried to shape an entity called Europe within the global system of the League of Nations. Aristide Briand did not propose something new. To attribute the paternity of a governmental initiative for present-day European construction to Briand would be to commit a dangerous anachronism, warned Duroselle.
After World War II ended, Europeans started to turn their minds to rebuilding the ruins of broken cities and industries, massive debts, hyperinflation and devalued currencies. But they were immediately faced with other matters of life and death. The Soviet Union, USSR, occupied eastern and central Europe. During the war, Communist party cadres from Germany, Poland, Hungary and other countries were trained in Moscow in how to seize power at war's end. They knew where the main levers of power were in each country and how to subvert parliaments even with a small Communist party.
“From Stettin in the Baltic to Trieste in the Adriatic an iron curtain has descended across the Continent. Behind that line lie all the capitals of the ancient states of Central and Eastern Europe. Warsaw, Berlin, Prague, Vienna, Budapest, Belgrade, Bucharest and Sofia, all these famous cities and the populations around them lie in what I must call the Soviet sphere, and all are subject in one form or another, not only to Soviet influence but to a very high and, in some cases, increasing measure of control from Moscow. … The Communist parties, which were very small in all these Eastern States of Europe, have been raised to pre-eminence and power far beyond their numbers and are seeking everywhere to obtain totalitarian control. Police governments are prevailing in nearly every case, and so far, except in Czechoslovakia, there is no true democracy.”
“The safety of the world, ladies and gentlemen, requires a new unity in Europe, from which no nation should be permanently outcast. It is from the quarrels of the strong parent races in Europe that the world wars we have witnessed, or which occurred in former times, have sprung.”
A few days later, on Bastille Day that year, Churchill met with Robert Schuman in Metz, France, and delivered his first great European speech. Schuman was then Minister of Finance for France. France was in deep danger of being sucked into the Soviet sphere. The French Communist party was the country's largest. It tried to take over parliament. US diplomats warned President Truman that France too could fall. But by late 1947, Schuman had become Prime Minister. He showed iron-willed opposition to Communist threats, revolutionary strikes and sabotage.
Schuman also prevented a future war by gradually changing the nationalistic policies of the Gaullists and others who wanted a land-grab of territory up to the Rhine. De Gaulle was no longer in power but was still a powerful nationalistic influence in parliament and in mass rallies. De Gaulle’s followers tried opportunistically to bring down the Schuman government by voting and working in lock-step with the Communists’ insurrection.
In the last days of his first government, Schuman made a decisive step that has affected all Europeans ever since. First, his government, working with the UK's Foreign Minister, Ernest Bevin, created a defensive pact known as the Brussels Treaty Organization. Ostensibly, the pact of France, the UK and the Benelux countries was to guard against further German aggression. Schuman's foreign minister, Georges Bidault, was very nervous about openly declaring it was to prevent Soviet invasion; Schuman much less so.
On 19-20 July 1948, Bidault delivered Schuman's message to the foreign ministers of the Brussels Pact, meeting in The Hague. It astounded them too. Bidault described Belgium's foreign minister, Paul-Henri Spaak, as a man hard to surprise; yet when Bidault made his speech, he recalled, Spaak's eyes were extraordinarily round with shock.
“We are at a moment, perhaps unique in history, where it is possible to create Europe,” said Bidault.
He made two propositions.
The first proposition was to create a European parliamentary assembly. It would initially be made up of parliamentarians from national legislatures, and would also be open to other nations that wished to apply. The second was for an economic and customs union for the six countries, to which other nations could apply to join.
“Thus in the economic sphere the Common Market was created and from the political perspective, the Assembly of the Council of Europe in Strasbourg. In spite of all later obstacles and violent opposition to these two ideas, both of them have flourished,” Bidault wrote.
A third major institution arose out of Schuman's initiative at the Brussels Pact. Washington required a demonstration of Europeans' willingness to defend themselves before it could politically commit its forces to Europe again. With the Berlin blockade that year, and following Schuman's lead, talks began with the USA and Canada to create NATO, the North Atlantic Treaty Organization. It entered into force around the same time as the Council of Europe began its sessions in summer 1949.
The year 2018 marks the 70th anniversary of that positive turning point, #EU70.
David Heilbron Price
Students connect polynomial arithmetic to computations with whole numbers and integers. Students learn that the arithmetic of rational expressions is governed by the same rules as the arithmetic of rational numbers. This unit helps students see connections between solutions to polynomial equations, zeros of polynomials, and graphs of polynomial functions. Polynomial equations are solved over the set of complex numbers, leading to a beginning understanding of the fundamental theorem of algebra. Application and modeling problems connect multiple representations and include both real world and purely mathematical situations.
Module 2 builds on students' previous work with units and with functions from Algebra I, and with trigonometric ratios and circles from high school Geometry. The heart of the module is the study of precise definitions of sine and cosine (as well as tangent and the co-functions) using transformational geometry from high school Geometry. This precision leads to a discussion of a mathematically natural unit of rotational measure, a radian, and students begin to build fluency with the values of the trigonometric functions in terms of radians. Students graph sinusoidal and other trigonometric functions, and use the graphs to help in modeling and discovering properties of trigonometric functions. The study of the properties culminates in the proof of the Pythagorean identity and other trigonometric identities.
In this module, students synthesize and generalize what they have learned about a variety of function families. They extend the domain of exponential functions to the entire real line (N-RN.A.1) and then extend their work with these functions to include solving exponential equations with logarithms (F-LE.A.4). They explore (with appropriate tools) the effects of transformations on graphs of exponential and logarithmic functions. They notice that the transformations on a graph of a logarithmic function relate to the logarithmic properties (F-BF.B.3). Students identify appropriate types of functions to model a situation. They adjust parameters to improve the model, and they compare models by analyzing appropriateness of fit and making judgments about the domain over which a model is a good fit. The description of modeling as "the process of choosing and using mathematics and statistics to analyze empirical situations, to understand them better, and to make decisions" is at the heart of this module. In particular, through repeated opportunities in working through the modeling cycle (see page 61 of the CCLS), students acquire the insight that the same mathematical or statistical structure can sometimes model seemingly different situations.
Students build a formal understanding of probability, considering complex events such as unions, intersections, and complements as well as the concept of independence and conditional probability. The idea of using a smooth curve to model a data distribution is introduced along with using tables and technology to find areas under a normal curve. Students make inferences and justify conclusions from sample surveys, experiments, and observational studies. Data is used from random samples to estimate a population mean or proportion. Students calculate margin of error and interpret it in context. Given data from a statistical experiment, students use simulation to create a randomization distribution and use it to determine if there is a significant difference between two treatments.
In this module, students reconnect with and deepen their understanding of statistics and probability concepts first introduced in Grades 6, 7, and 8. Students develop a set of tools for understanding and interpreting variability in data, and begin to make more informed decisions from data. They work with data distributions of various shapes, centers, and spreads. Students build on their experience with bivariate quantitative data from Grade 8. This module sets the stage for more extensive work with sampling and inference in later grades.
In earlier grades, students define, evaluate, and compare functions and use them to model relationships between quantities. In this module, students extend their study of functions to include function notation and the concepts of domain and range. They explore many examples of functions and their graphs, focusing on the contrast between linear and exponential functions. They interpret functions given graphically, numerically, symbolically, and verbally; translate between representations; and understand the limitations of various representations.
In earlier modules, students analyzed the process of solving equations and developed fluency in writing, interpreting, and translating between various forms of linear equations (Module 1) and linear and exponential functions (Module 3). These experiences, combined with modeling with data (Module 2), set the stage for Module 4. Here students continue to interpret expressions, create equations, rewrite equations and functions in different but equivalent forms, and graph and interpret functions, but this time using polynomial functions, and more specifically quadratic functions, as well as square root and cube root functions.
Module 1 embodies critical changes in Geometry as outlined by the Common Core. The heart of the module is the study of transformations and the role transformations play in defining congruence. The topic of transformations is introduced in a primarily experiential manner in Grade 8 and is formalized in Grade 10 with the use of precise language. The need for clear use of language is emphasized through vocabulary, the process of writing steps to perform constructions, and ultimately as part of the proof-writing process.
Just as rigid motions are used to define congruence in Module 1, so dilations are added to define similarity in Module 2. To be able to discuss similarity, students must first have a clear understanding of how dilations behave. This is done in two parts, by studying how dilations yield scale drawings and reasoning why the properties of dilations must be true. Once dilations are clearly established, similarity transformations are defined and length and angle relationships are examined, yielding triangle similarity criteria. An in-depth look at similarity within right triangles follows, and finally the module ends with a study of right triangle trigonometry.
Module 3, Extending to Three Dimensions, builds on students' understanding of congruence in Module 1 and similarity in Module 2 to prove volume formulas for solids. The student materials consist of the student pages for each lesson in Module 3. The copy-ready materials are a collection of the module assessments, lesson exit tickets and fluency exercises from the teacher materials.
In this module, students explore and experience the utility of analyzing algebra and geometry challenges through the framework of coordinates. The module opens with a modeling challenge, one that recurs throughout the lessons, to use coordinate geometry to program the motion of a robot that is bound within a certain polygonal region of the plane (the room in which it sits). To set the stage for complex work in analytic geometry (computing coordinates of points of intersection of lines and line segments or the coordinates of points that divide given segments in specific length ratios, and so on), students will describe the region via systems of algebraic inequalities and work to constrain the robot's motion along line segments within the region.
This module brings together the ideas of similarity and congruence and the properties of length, area, and geometric constructions studied throughout the year. It also includes the specific properties of triangles, special quadrilaterals, parallel lines and transversals, and rigid motions established and built upon throughout this mathematical story. This module's focus is on the possible geometric relationships between a pair of intersecting lines and a circle drawn on the page.
In this first module of Grade 1, students make significant progress towards fluency with addition and subtraction of numbers to 10 as they are presented with opportunities intended to advance them from counting all to counting on, which then leads many students to decomposing and composing addends and total amounts.
Module 2 serves as a bridge from students' prior work with problem solving within 10 to work within 100 as students begin to solve addition and subtraction problems involving teen numbers. Students go beyond the Level 2 strategies of counting on and counting back as they learn Level 3 strategies informally called "make ten" or "take from ten."
Module 3 begins by extending students' kindergarten experiences with direct length comparison to indirect comparison, whereby the length of one object is used to compare the lengths of two other objects. "Longer than" and "shorter than" are taken to a new level of precision by introducing the idea of a length unit. Students then explore the usefulness of measuring with similar units. The module closes with students representing and interpreting data.
Module 4 builds upon Module 2's work with place value within 20, now focusing on the role of place value in the addition and subtraction of numbers to 40. Students study, organize, and manipulate numbers within 40. They compare quantities and begin using the symbols for greater than (>) and less than (<). Addition and subtraction of tens is another focus of this module as is the use of familiar strategies to add two-digit and single-digit numbers within 40. Near the end of the module, the focus moves to new ways to represent larger quantities and adding like place value units as students add two-digit numbers.
In Module 5, students consider part-whole relationships through a geometric lens. The module opens with students identifying the defining parts, or attributes, of two- and three-dimensional shapes, building on their kindergarten experiences of sorting, analyzing, comparing, and creating various two- and three-dimensional shapes and objects. Students combine shapes to create a new whole: a composite shape. They also relate geometric figures to equal parts and name the parts as halves and fourths. The module closes with students applying their understanding of halves to tell time to the hour and half hour.
In this final module of the Grade 1 curriculum, students bring together their learning from Module 1 through Module 5 to learn the most challenging Grade 1 standards and celebrate their progress. As the module opens, students grapple with comparative word problem types. Next, they extend their understanding of and skill with tens and ones to numbers to 100. Students also extend their learning from Module 4 to the numbers to 100 to add and subtract. At the start of the second half of Module 6, students are introduced to nickels and quarters, having already used pennies and dimes in the context of their work with numbers to 40 in Module 4. Students use their knowledge of tens and ones to explore decompositions of the values of coins. The module concludes with fun fluency festivities to celebrate a year's worth of learning.
Module 1 sets the foundation for students to master the sums and differences to 20 and to subsequently apply these skills to fluently add one-digit to two-digit numbers at least through 100 using place value understandings, properties of operations and the relationship between addition and subtraction.
In this 12-day Grade 2 module, students engage in activities designed to deepen their conceptual understanding of measurement and to relate addition and subtraction to length. Their work in Module 2 is exclusively with metric units in order to support place value concepts. Customary units will be introduced in Module 7.
Trillions of sensors are in our future, and they will need energy. Batteries are routinely used to power tiny devices, but there are other options. Piezoelectricity, the technology that converts mechanical energy into electricity, is gaining attention these days because it can scavenge energy from movement or vibrations.
For this reason, Carnegie Mellon University researchers are exploring the use of piezoelectricity for smart city applications. Smart cities of the future will rely on massive sensor networks, and the sensors in these systems need energy. Continually replacing sensor batteries would be extremely time consuming and would produce waste materials that are difficult to dispose of.
"It would be a lot more efficient if you could just live off of scavenged energy. You eliminate batteries and their problems, and instead you harvest energy," said Gianluca Piazza, a professor of electrical and computer engineering and the director of the John and Claire Bertucci Nanotechnology Laboratory.
While other researchers extract energy from solar, heat, and mechanical vibrations, Piazza's team focuses on powering devices with ultrasound. They launch sound waves that travel over relatively long distances and are captured by tiny piezoelectric devices co-located with sensors, thereby powering the sensors remotely.
"So you have a power source somewhere, and you have all the sensors. Whenever you need to power them or interrogate them, you just send this blast of sound waves to them. They receive it, and they turn on," Piazza said.
Because these sound waves have a frequency a bit above 40 kilohertz, just above the audible range, they do not bother humans or animals. They can efficiently transmit over 10-30 meters, which is around 30-100 feet.
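As a rough sanity check on those figures (assuming the standard speed of sound in dry air at room temperature, about 343 m/s, which is not stated in the article), the wavelength and travel time of such a 40 kHz link can be computed in a few lines:

```python
# Back-of-envelope numbers for a ~40 kHz ultrasonic power link.
# Assumes sound travels at ~343 m/s (dry air, ~20 degrees C).
SPEED_OF_SOUND_M_S = 343.0

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength of a sound wave in air at the given frequency."""
    return SPEED_OF_SOUND_M_S / frequency_hz

def travel_time_s(distance_m: float) -> float:
    """Time for a sound pulse to cover the given distance in air."""
    return distance_m / SPEED_OF_SOUND_M_S

if __name__ == "__main__":
    f = 40_000.0  # just above the ~20 kHz upper limit of human hearing
    print(f"wavelength at 40 kHz: {wavelength_m(f) * 1000:.1f} mm")    # ~8.6 mm
    print(f"pulse time over 30 m: {travel_time_s(30.0) * 1000:.1f} ms")  # ~87.5 ms
```

The millimeter-scale wavelength is consistent with the article's sand-grain-sized receivers: a vibrating membrane a few millimeters across or smaller can couple efficiently to waves of this length.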
Piazza's research currently is designed for indoor applications. Take a conference room as an example. A large speaker would send out sound waves to sensors distributed in the room. These sensors, which are about the size of a grain of sand, have membranes that vibrate and generate a charge when they receive the waves.
"It's like the same way when you're moving your foot in your shoe, you're actually stimulating the piezoelectric material and generating a charge," Piazza said.
Piazza's system generates enough electricity to power small radio devices that send and receive signals. Currently, the power source that launches the sound waves needs to be plugged in. Piazza's team would like to further develop the system so they can launch sound waves without the need for plugged-in units. To this end, researchers at Carnegie Mellon and elsewhere are exploring novel piezoelectric materials that can be used to harvest energy, which could be beneficial for indoor communications, smart infrastructure, and implantable or wearable devices.
Making Sedimentary Rocks!
Students make a model of sedimentary rock layers to understand how rocks form layers and represent ancient environments.

Materials:
For each group:
Windows to the Universe staff member Lisa Gardiner based this activity on one by teacher/naturalist Edith Sisson (author of Nature with Children of All Ages).
50 minutes or several class periods
Student Learning Outcomes:
Hands-on activity or demonstration with participation
- Review what a sedimentary rock is. Review common types of sedimentary rocks (sandstone, conglomerate, shale and limestone).
- Have students stack papers on their desk. Ask them which paper got there first (A: the one on the bottom). Sedimentary rocks form in the same way, in layers, with the older ones at the bottom.
- Tell class that during this project they will simulate (or model) what happens over hundreds of thousands to millions of years as sedimentary rocks are formed in layers in different environments.
- Discuss what a model is. (Examples of models: model airplane, dolls, dinosaur model, video games)
- Divide students into groups of about 4.
- For each of the environments in the table below (river, beach, shallow and deep ocean):
- Have students describe from their experience what the environment is like. What sorts of things do they think they would see there?
- After describing an environment, have student groups choose which of the materials they would include in their milk carton to represent that environment (these items are listed in the second and third columns of the table).
- Have students fill one of their cups about 2/3 full of the appropriate sediment and associated fossils.
- Mix plaster with water according to the manufacturer's directions. Have each student group fill the remainder of their cup with plaster and stir. Explain that this process is much faster than the way rocks are actually made. The plaster acts like the cement that holds real sedimentary rocks together.
- Have each group put sediment mixed with plaster into their milk carton and pat it down to form a flat layer.
- Start the next environment in the table by the same process. Make sure that student groups do not mix different layers or shake their milk carton. Mix plaster in small batches (one for each environment) to avoid it drying too quickly. For the limestone layer, mix plaster a little more watery than usual because chalk will absorb water. The plaster of the first layer does not need to be dry before adding the next. If it is really soupy, sprinkle a little dry plaster on the top before adding the next layer.
- After plaster has dried (about 20 minutes), take the layers of sedimentary rock out of the milk carton. (You may need to rip the milk carton off!)
- Have student groups rub it lightly with very fine sand paper and draw what the layers of "rock" look like in their notebook (noting colors, textures, and other features in the margins of their picture). Show them images of real rock layers from places like the Grand Canyon, southern Utah, or something closer to home.
- If your class has already covered types of sedimentary rocks, ask students to identify the types of sedimentary rocks present in their model, even though they are not real.
- Ask students to recall which types of environments each rock type represents. If the environment in this one spot changed over time from a river to a beach to a shallow ocean to a deep ocean what must have happened? Sea level rise!
- Extension: Have students be paleontologists and dig for fossils in the layers of rock. Where would you expect to find the most clamshell fossils? Fish fossils? Use picks, chisels and small hammers to find them.
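The layering principle the activity demonstrates (new sediment always lands on top, so the oldest layer sits at the bottom) can also be sketched as a simple stack model; the environment names below follow the activity's sequence:

```python
# A minimal model of superposition: sediment layers deposit in order,
# so the oldest layer is always at the bottom of the stack.
layers = []  # index 0 = bottom = oldest

# Deposit environments in the order the activity uses.
for environment in ["river", "beach", "shallow ocean", "deep ocean"]:
    layers.append(environment)  # each new layer lands on top

print("bottom (oldest) layer:", layers[0])   # river
print("top (youngest) layer:", layers[-1])   # deep ocean
```

This mirrors the milk-carton model exactly: asking which layer "got there first" is asking which entry sits at index 0.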
Sea level changes can be caused when the land level sinks (called subsidence), when the water level rises, or when both processes happen together. Water level can rise because glaciers melt, adding water to the oceans, or when plate tectonic movements make the ocean basins shallower, displacing water onto the edges of continents. It is a natural process that has gone on for as long as there have been oceans on Earth!
This activity works best when students have already reviewed types of sedimentary rocks (conglomerate, sandstone, shale, and limestone). Note that the same rock types can form in several different environments. This is a good topic of discussion, especially if students recognize that the soil is potting soil found on land. Shale that forms in swampy floodplain areas can look very much like shale that is from the ocean floor or even shale from a lake bottom. Fossils are a good way to tell the difference. Similarly, sand dunes formed in the desert are made out of sandstone just like the beach sand (and not all beaches are made of sand). One must be a detective to figure out what past environments were like!
For a shorter demonstration version of this activity, omit the plaster and milk cartons and tell students the story of changing environments as you add layers of sediment and "fossils" to a rectangular fish tank (or any container that you can see through). They are able to see the layers right away, although the connection to sedimentary rocks might be more of a challenge.
| Environment | Type of Sediment | Biological remains you might find there | Rock type produced |
| --- | --- | --- | --- |
| Bottom of the shallow ocean | Silt/mud | shells, fish | shale |
| Bottom of the deep ocean | Crushed white chalk | few shells, fish | limestone |
The first people to inhabit Britain, the ones responsible for such megalith monuments as Stonehenge, arrived between 35,000 and 10,000 years ago. From 1500 to 500 BC, Celtic tribes from the continent came to settle in Britain and began mixing with the indigenous groups that were already there.
In an attempt to expand their empire, the Romans began to travel to Britain beginning in the first century AD. Control over southern Britain was gained relatively smoothly and by 79 AD, what is now England and Wales were under Roman rule. Northern Britain, however, more specifically, the early Scots were much harder to manage. Despite some significant losses at the hands of the Romans, the Caledonian tribes remained fiercely resilient. It has been suggested that Hadrian’s Wall, the northernmost border of the Roman Empire, was built to protect the south from northern inhabitants.
When the Roman Empire began to collapse in the 5th century, Roman inhabitants of Britain were forced home, leaving Celtic tribes alone. Soon after, fighting broke out amongst the Celtic people and Germanic tribes (the Angles, Saxons and Jutes) arrived to take control. Unlike the Romans, the Germanic groups did not return home, instead establishing six kingdoms. These kingdoms, known as the Anglo-Saxon Heptarchy ruled Britain from 500-850 AD.
The 10th century signaled the arrival of the Danes in northeast England, where they adopted the French feudal system and language. In the 11th century, Norman King Edward nominated William, the Duke of Normandy, as his successor. Despite this nomination, upon the death of Edward, the Earl of Wessex, Harold Godwinson, declared himself to be king. The ensuing fight was the Battle of Hastings in 1066, in which William the Conqueror became William I of England.
The 12th and 13th centuries saw Britain involved in civil wars, with the throne consistently contested. One of the most notorious struggles pitted England's Edward "Longshanks" against Scotland's William Wallace and Robert the Bruce, and the Scots were the victors.
With the 16th century came one of Britain's most famous monarchs, Henry VIII. His Act of Union made him the first ruler to declare himself king of both England and Wales. In 1533, he divorced his Catholic wife, Catherine of Aragon, in favour of Anne Boleyn. As a result, the pope excommunicated Henry, who went on to name himself the head of the Church of England.
Henry's daughter, Elizabeth, was crowned queen six years after his death. Her reign has been classified as the first Golden Age of England. Her successor was James VI, a Catholic, and the son of Mary Queen of Scots, making James both king of Scotland and England. The divide between Protestants and Catholics widened under James' successor, Charles I, who sought to unite Britain and Ireland. This led to the English Civil War, after which Oliver Cromwell, who had Charles I beheaded, went on to rule as a dictator. It wasn't until 1707 that another Act of Union joined the Scottish and English parliaments to create a single kingdom.
For Britain, the 19th century was a time of great expansion. Under Queen Victoria, Britain had interests on every continent and she ruled over 40% of the globe and one-quarter of the world’s population. Britain entered the First World War in 1914, fighting until the end in 1918. A devastating time for all of Europe, Britain was not exempt. Following the war, the Labour Party came into being, assuring better rights for workers and universal suffrage in 1928.
Britain declared war on Germany after the invasion of Poland in September 1939. This was a war fought on the continent and at home as many cities in Britain were bombed heavily. Winston Churchill became Prime Minister during the war and was responsible for galvanizing Brits at home and at the front. Following the war, Britain was bankrupt, forcing their vast empire to be dismantled. Britain had no choice but to grant independence to their colonies.
Queen Elizabeth took the throne in 1952. The next decade saw an outburst of British culture through music, art and film. In the 1970s, Britain contended with an oil crisis, which greatly impacted British industry. Margaret Thatcher served as prime minister from 1979 until 1990 and remains a controversial figure in British history to this day.
Britain remains an incredibly popular tourist destination for those who love history, literature, art, sport and more.
Ever since it achieved unification in 1871, Germany craved colonies as a matter of national pride. But by the late nineteenth century, most of the ‘uncivilised world’ was already carved up by established European powers. In an eleventh-hour effort, the German Empire acquired a few scraps of Africa and Asia – mainly wild or empty lands nobody else wanted. And even this colonial empire, with the bits few and far between, was taken away after Germany’s defeat in the First World War.
The revanchist mood that swept the Nazis into power in the early nineteen thirties also revived Germany's by now totally outdated colonial ambitions. Those were turned to the last great area of the globe that was not yet colonized: Antarctica – big, cold and empty. At the beginning of 1939, a Nazi expedition explored a hitherto uncharted area of the Antarctic. By foot and plane, the Nazis surveyed an area between latitudes 69°10’ S and 76°30’ S and longitudes 11°30’ W and 20°00’ E, totaling 600,000 sq. km. They called it Neuschwabenland, or New Swabia.
At first glance, Neuschwabenland doesn’t warrant much enthusiasm. Most of it is covered in eternal snow and ice, with only a few places ice-free, mainly around a few hot springs. Yet annexation was an express purpose of the expedition, led by captain Alfred Ritscher, ordered by Hermann Göring himself. Before leaving, the expedition members received practical advice from Richard E. Byrd, an American admiral and experienced polar explorer.
The German airline Lufthansa lent one of its ships, the ‘Schwabenland’, to the expedition – hence the name that was given to the territory. The vessel was a so-called ‘catapult ship’, having previously proved itself as a transporter and postal carrier in the South Atlantic. The ‘Schwabenland’ had two Dornier aircraft on board, named Boreas and Passat. A steam catapult was used to fling the planes, each weighing 10 tonnes, off the ship.
The planes were used for reconnaissance flights over the impassable hinterland of the heretofore unexplored part of Antarctica, and were thus instrumental in the German Antarctic Expedition. Each plane could stay in the air for a maximum of nine hours and no inland airfields were constructed, so this provided the outer limit for the area to be explored.
In total, 350,000 sq. km were overflown and more than 11,000 photographs taken during 15 flights. These pictures were used in drawing up a map of the territory. During the flights and expeditions on foot, hundreds of Nazi German flags were dropped to symbolize Germany’s possession of the territory. Additionally, the expedition established a provisional base camp and reported that around the so-called Schirmacher See there existed some vegetation, due to the hot springs near the lake.
Capt. Ritscher was prevented from mounting a second, improved expedition by the outbreak of World War Two. During the war, no official activities were registered in the whole of Antarctica. After the war, Norway assumed a protectorate over the area, annexing it to Queen Maud Land. Following the 1959 Antarctic Treaty (the one ‘freezing’ all territorial claims), Norway named its new acquisition after princesses Martha, Ragnhild and Astrid.
In 1952, the government of the new Federal Republic of Germany exercised its right, based on the Nazi exploration, to name geographical features in the area. The German polar research station ‘Georg von Neumayer’ is located in what was formerly known as Neuschwabenland. Thus endeth the official version.
A plethora of rumours maintains that Neuschwabenland wasn’t abandoned by the Nazis after the first expedition. In fact, a few crew members of the ‘Schwabenland’ stated that they made several trips to the Nazis’ Antarctic colony, transporting military equipment and heavy tools for mining and tunneling. This must be the origin of the legend that several submarines filled with top-level Nazis fled Europe as the war was ending, finding refuge in a secret network of underground bunkers in Neuschwabenland.
Some stories even maintain that this little Nazi hideaway is the real origin of UFOs (or rather Reichsflugscheiben) – as they really are a German invention rather than an extraterrestrial one. |
The knee is the largest joint in the body. It is a specialised hinge joint made up of four main components: bone, cartilage, ligaments and tendons. The main movements that occur at the knee joint are flexion and extension. It does, however, allow a small amount of rotation and gliding at the joint surfaces.
The patella is the small bone located at the front of the knee joint, also known as the kneecap. The role of the patella is to protect the knee joint and to enable the quadriceps muscle to work with more strength. At the front of the knee there is a groove (the patellofemoral groove) that provides a space for the patella to sit. This groove allows the patella to glide up and down over the knee joint as the knee bends and straightens. The patella is held in position by tendons at its top and bottom surfaces. Ligaments on either side play an important role in maintaining the alignment of the patella.
Instability of the patella results from the patella sliding too far to either side of the groove, or out of the patellofemoral groove altogether, which is known as dislocation.
Patella instability can occur at any age. The most common causes of patella instability in children include:
- A sharp blow to the patella, such as a fall, may push the patella out of place. For children with normal knee structures, this is a common cause of patella instability. It is often the result of high impact sports such as football.
- Movements associated with sports or activities that involve sudden stopping, such as dancing, or twisting on the knee, such as the motion of swinging a bat.
- The femoral groove in the femur, in which the patella sits, may be uneven or shallow. This increases the risk that the patella will slip out of place.
- Ligaments in some children are loose, leading to instability and dislocation of joints. This is more common in girls than boys and can affect both knees.
- Children may be born with patella instability. This usually results in the patella dislocating without pain, however this is rare.
The most common symptoms of patella instability in children are:
- Visual appearance of the knee – often the patella spontaneously returns to the patellofemoral groove following dislocation, however, if not, the patella will appear to the side of the knee, most often the outer side.
- The knee buckles or gives way
- A popping sound when the patella dislocates
Where the patella is dislocated and has not spontaneously returned to the patellofemoral groove, it will need to be put back in place. This is known as reduction and will normally be performed by an experienced health professional.
Diagnosis of patella instability without visible signs of dislocation involves assessment of symptoms and clinical assessment of the knee’s movement.
An x-ray will assist your doctor to assess how symmetrically the patella sits in the patellofemoral groove.
A CT scan may be performed to look at how your bones are developing. This is often a useful scan for planning surgery if realignment of the bones is necessary.
An MRI may also be required to check for any injury to surrounding tissues following the initial injury.
Where patella injury or dislocation occurs as a result of a one-off event, such as a sporting injury, the knee may initially be treated conservatively. Conservative measures may include:
- Bracing to immobilize
- Crutches to reduce weight bearing
- Physiotherapy to strengthen the leg muscles to hold the patella in place to prevent future dislocations.
An unstable patella that has not responded to non-surgical treatment may require arthroscopic surgery to repair or tighten structures within the knee so that the patella is held firmly within the patellofemoral groove. Surgery may also be required if the smooth cartilage of the femur or patella was damaged at the time of the dislocation. The exact procedure will differ depending on the reason for surgery, but your surgeon will discuss the options with you. |
First grade offers an amazing year of growth and knowledge!
Students learn a great deal as they become better readers, writers, and thinkers, gaining knowledge and skills in Language Arts, Mathematics, Science, and Social Studies. They also participate daily in one of the Encore classes of Physical Education, Music, Art, Guidance, or Library. They learn social skills, organizational skills, and routines that help them meet expectations for school success.
Our academic curriculum is based on, but not limited to, the goals outlined by the Virginia Standards of Learning (SOLs). Teachers use a variety of teaching methods to accommodate each student’s learning style and developmental stage as the student works to master these goals.
As classroom teachers, we meet weekly to plan and share ideas; however, we are given the professional freedom to instruct from our individual perspectives. You will find that our classrooms are not exact copies of each other, and each offers instruction based on the best practices and research in our profession. On our individual teacher websites you will find links and additional information about each homeroom class.
Whole group, small group, and individualized lessons are employed by teachers as needed throughout the day. Children are assessed formally and informally in each subject area. Standardized tests such as PALS and Guided Reading Assessments are administered, and classwork and participation in class discussions also reveal students’ knowledge. As teachers we understand the importance of getting to know each student as an individual learner and plan for his or her continued progress. We encourage regular communication between home and school so that each student has the best opportunity for success.
The first grade curriculum strives to provide:
- an inviting atmosphere which is conducive to learning.
- an enriched environment in which learning is our priority.
- a program where each child progresses to their maximum potential in social, mental, physical, and emotional pursuits.
- a curriculum which is guided by the Standards of Learning but exceeds these standards when appropriate.
- a variety of teaching methods to meet each child’s learning style.
- essential skills and exposure to quality literature and a love of reading.
- the desire to become life-long learners. |
The term disaccharide etymologically means two saccharides. A saccharide refers to the unit structure of carbohydrates. Thus, a disaccharide is a carbohydrate comprised of two saccharides (or two monosaccharide units).
The term sugar can refer to both monosaccharides and disaccharides. Monosaccharides are also called simple sugars since they are the most fundamental type of sugar. The term table sugar or granulated sugar actually refers to sucrose, which is a disaccharide made of two monosaccharides: glucose and fructose.
Carbohydrates are organic compounds comprised of carbon, hydrogen, and oxygen, usually in the ratio of 1:2:1. They are one of the major classes of biomolecules. They are an important source of energy. They also serve as structural components. As a nutrient, they can be classified into two major groups: simple carbohydrates and complex carbohydrates. Simple carbohydrates, sometimes referred to as simply sugar, are those that are readily digested and serve as a rapid source of energy. Complex carbohydrates (such as cellulose, starch, and glycogen) are those that need more time to be digested and metabolized. They often are high in fiber and, unlike simple carbohydrates, they are less likely to cause spikes in blood sugar.
Characteristics of disaccharides
Similar to other carbohydrates, disaccharides are comprised of hydrogen, carbon, and oxygen, and the ratio of hydrogen atoms to oxygen atoms is often 2:1, which explains why they are referred to as hydrates of carbon. The general chemical formula of disaccharides is C12H22O11. Because of the presence of carbon and C-C and C-H covalent bonds, disaccharides are also organic compounds, just as the other carbohydrates.
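As a quick sanity check on the formula above, the atom counts in C12H22O11 can be verified in a few lines of Python (a minimal sketch; the counts come straight from the formula in the text):

```python
# Atom counts for a disaccharide, C12H22O11.
from fractions import Fraction

atoms = {"C": 12, "H": 22, "O": 11}

# The hydrogen-to-oxygen ratio should be 2:1, as in water --
# the reason carbohydrates were dubbed "hydrates of carbon".
ratio = Fraction(atoms["H"], atoms["O"])
print(ratio)  # 2

# Viewed as C12(H2O)11: eleven waters supply 22 H and 11 O.
waters = 11
assert atoms["H"] == 2 * waters and atoms["O"] == waters
```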
A disaccharide is a carbohydrate or a sugar comprised of two monosaccharides joined together by a glycosidic bond (or glycosidic linkage). Monosaccharides are the most fundamental type of carbohydrate. Glycosidic bonds are covalent bonds that may form between the hydroxyl groups of two monosaccharides. Thus, even disaccharides with the same chemical formula can differ in bond formation and in their monosaccharide constituents, and therefore in their properties.
Disaccharides differ from other forms of carbohydrates, oligosaccharides and polysaccharides, in the number of monosaccharide units that make them up. Disaccharides are made up of only two whereas oligosaccharides are made up of three to ten monosaccharides. Polysaccharides, as the name implies, contain several monosaccharide units.
Synthesis of disaccharides
The chemical process of joining monosaccharide units is referred to as dehydration synthesis since it results in the release of water as a byproduct. Disaccharides are formed by displacing a hydroxyl radical from one monosaccharide and a proton from the other monosaccharide, and then causing the two monosaccharides to covalently link together. The detached hydroxyl radical and proton (hydrogen ion), in turn, join and form a water molecule. Thus, one way of synthesizing a disaccharide is through the condensation of two monosaccharides.
A disaccharide may be reverted to its monomeric monosaccharide components through hydrolysis with the help of disaccharidase enzymes (e.g. sucrase, lactase, and maltase for the degradation of sucrose, lactose, and maltose, respectively). While a condensation reaction eliminates a water molecule, hydrolysis consumes one.
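The atom bookkeeping of condensation and hydrolysis can be sketched in a few lines of Python (chemical formulas represented as plain counters; the helper name `condense` is just for illustration):

```python
from collections import Counter

glucose = Counter({"C": 6, "H": 12, "O": 6})   # C6H12O6
water   = Counter({"H": 2, "O": 1})            # H2O

def condense(a, b):
    """Dehydration synthesis: join two monosaccharides, releasing one water."""
    total = a + b
    total.subtract(water)
    return +total  # unary + drops any zero counts

disaccharide = condense(glucose, glucose)
print(dict(disaccharide))  # {'C': 12, 'H': 22, 'O': 11}

# Hydrolysis reverses the reaction: adding the water back
# restores the atoms of the two original monosaccharides.
assert disaccharide + water == glucose + glucose
```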
Classifications of disaccharides
Disaccharides may be classified into reducing and non-reducing. A reducing disaccharide is a disaccharide in which the reducing sugar has a free hemiacetal unit that may serve as a reducing aldehyde group. Examples of a reducing disaccharide are maltose and cellobiose.
Non-reducing disaccharides, as their name implies, are disaccharides that do not act as a reducing agent. Both monosaccharides that make up the disaccharide do not have a free hemiacetal unit since they bond through an acetal linkage between their anomeric centers. Examples are sucrose and trehalose.
There are several forms of disaccharides but the most common ones are sucrose, lactose, and maltose. These three are made up of two monosaccharides joined by a covalent bond. The general chemical formula is C12H22O11.
Sucrose (common table sugar) is a disaccharide formed by the combination of glucose and fructose. These two monosaccharides combine through a condensation reaction. They are linked through a glycosidic linkage between C-1 (on the glycosyl unit) and C-2 (on the fructosyl unit). Sucrose is digested or broken down into its monosaccharide units through hydrolysis with the help of the enzyme, sucrase. The bond that joins the two monosaccharides is broken, converting sucrose to glucose and fructose. Sucrose is extracted from plants, e.g. sugar cane and sugar beet, and processed (refined) to be marketed as common table sugar. It is used as a sweetening agent in food and beverages.
Lactose (milk sugar) is formed by the combination of glucose and galactose. It has a chemical formula of C12H22O11. Lactose is produced naturally and is present in the milk of mammals, including humans. It is collected from cows to be used in preparing infant formulas. Cow's milk, in particular, contains about 4.7% lactose. Lactose is digested or broken down into its monosaccharide units through hydrolysis with the help of the enzyme lactase. The bond that joins the two monosaccharides is broken, converting lactose to glucose and galactose. People who are lactose intolerant cannot digest or break down lactose; instead it becomes food for gas-producing gut flora, which can lead to gastrointestinal disturbance and flatulence. Lactose can also be converted to lactic acid. Microorganisms, such as Lactobacilli, can convert lactose to lactic acid, which is used in the food industry, e.g. in the production of dairy products like yogurt and cheese.
Maltose (malt sugar) is a reducing disaccharide formed when two glucose monomers join together via an α(1→4) glycosidic bond. Thus, it may also be considered the structural unit of glycogen and starch. Maltose is digested or broken down into its monosaccharide units through hydrolysis with the help of the enzyme, maltase. The bond that joins the two glucose units is broken, converting maltose to two glucose units. Maltose is commercially used as a sweetener, a nutrient in infant feeding, and in bacteriological culture media. It is also used in pastries. It helps bread dough rise: enzymes convert starch into maltose, and fermentation of this sugar produces and releases the carbon dioxide that leavens the dough.
Other examples of disaccharides are lactulose, chitobiose, kojibiose, nigerose, isomaltose, sophorose, laminaribiose, gentiobiose, turanose, maltulose, trehalose, palatinose, gentiobiulose, mannobiose, melbiose, melibiulose, rutinose, rutinulose, and xylobiose.
Dietary disaccharides, like other carbohydrates, are a source of energy. Disaccharides are consumed and digested to obtain monosaccharides, which are important metabolites for ATP synthesis. ATP is chemical energy biologically synthesized through aerobic and anaerobic respiration. Glucose is the most common monosaccharide that the cell uses to synthesize ATP, via substrate-level phosphorylation (glycolysis) and/or oxidative phosphorylation (involving redox reactions and chemiosmosis), and one source of glucose is a disaccharide-containing diet.
Sucrose, the common table sugar, is widely used as a sweetener in beverages and in food preparation, such as cakes and cookies. When consumed, the enzyme invertase in the small intestine cleaves sucrose into glucose and fructose. Too much fructose, though, can lead to malabsorption in the small intestine. When this happens, unabsorbed fructose transported to the large intestine can be fermented by the colonic flora, which may lead to gastrointestinal pain, diarrhea, flatulence, or bloating. Too much glucose can also be a health hazard: excessive consumption of sugar is linked to diabetes, obesity, tooth decay, and cardiovascular disease. Lactose, a disaccharide found in breast milk, serves as a nutrient source for infants. Microorganisms, such as Lactobacilli, can convert lactose to lactic acid, which is used in the food industry, e.g. in the production of dairy products like yogurt and cheese. Maltose may be used as a sweetener, although it is much less sweet than sucrose.
Vascular plants form disaccharides, especially sucrose, as a nutrient to be transported to various parts of the plant via the phloem tissues. Sugarcane, especially, is harvested to make commercial sugar.
- Ancient Greek δίς (dís, meaning “twice”) + saccharide
- double sugar
More info relating to carbohydrates and their role in our diet can be found in the developmental biology tutorial investigating a balanced diet.
© Biology Online. Content provided and moderated by Biology Online Editors |
Isocrates, an ancient Greek rhetorician, was one of the ten Attic orators. In his time, he was probably the most influential rhetorician in Greece and made many contributions to rhetoric and education through his teaching and written works.
Unlike most rhetoric schools of the time, which were run by itinerant sophists, Isocrates defined his approach in his treatise Against the Sophists. This polemic was written to explain and advertise the reasoning and educational principles behind his newly opened school. He promoted his broad-based education by speaking against two types of teachers: the Eristics, who disputed about theoretical and ethical matters, and the Sophists, who taught political debate techniques.
Isocrates was born to a wealthy family in Athens and received a fine education. He was greatly influenced by his sophist teachers, Prodicus and Gorgias, and was also closely acquainted with Socrates. After the Peloponnesian War, Isocrates' family lost its wealth, and Isocrates was forced to earn a living. |
Harriet Tubman escapes slavery, fleeing to Philadelphia. Tubman made use of the network known as the Underground Railroad. This informal, but well-organized, system was composed of free and enslaved blacks, white abolitionists, and other activists.
Most prominent among the latter in Maryland at the time were members of the Religious Society of Friends, often called Quakers. The Preston area near Poplar Neck in Caroline County contained a substantial Quaker community, and was probably an important first stop during Tubman’s escape. From there, she probably took a common route for fleeing slaves – northeast along the Choptank River, through Delaware and then north into Pennsylvania. The journey of nearly 90 miles on foot would have taken her between five days and three weeks.
Tubman had to travel by night, guided by the North Star, and trying to avoid slave catchers, eager to collect rewards for fugitive slaves. The “conductors” in the Underground Railroad used a variety of deceptions for protection. At one of the earliest stops, the lady of the house ordered Tubman to sweep the yard to make it appear as though she worked for the family.
When night fell, the family hid her in a cart and took her to the next friendly house. Given her familiarity with the woods and marshes of the region, it is likely that Tubman hid in these locales during the day. Because the routes she followed were used by other fugitive slaves, Tubman did not speak about them until later in her life.
Read more about Tubman’s journey to freedom at: Daily Black History Facts |
An in-depth look at how plants respond to climate change shows mixed results for the phenomenon of "demographic compensation" as a way for plants to avoid severe population declines.
Demographic compensation has served as a possible explanation for the survival of plants that haven't shifted geographic ranges in tandem with changes in climate. It hypothesizes that decreases in some plant characteristics, like survival or growth, may be offset by increases in other plant characteristics, like flowering.
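As a rough illustration of the hypothesis, vital rates can be combined into a simple stage-structured projection model whose dominant eigenvalue λ is the long-run population growth rate; compensation then means that a drop in one rate (say, survival) is partly offset by a rise in another (say, flowering). The sketch below uses entirely hypothetical numbers, not data from the study:

```python
# Toy two-stage (juvenile, adult) projection models.
# A[i][j] = per-capita contribution of stage j to stage i next year.

def growth_rate(A, iters=500):
    """Long-run growth rate: dominant eigenvalue via power iteration."""
    v = [1.0] * len(A)
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        lam = max(w)                  # estimate of the dominant eigenvalue
        v = [x / lam for x in w]      # renormalize for the next step
    return lam

# "Northern" population: high survival, infrequent flowering.
north = [[0.0, 0.5],
         [0.5, 0.8]]

# "Southern" population: low survival, partly compensated by heavy flowering.
south = [[0.0, 2.0],
         [0.3, 0.2]]

# North grows (lambda > 1); south still declines (lambda < 1)
# despite compensation -- echoing the study's conclusion.
print(round(growth_rate(north), 2), round(growth_rate(south), 2))
```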
To test the demographic compensation theory, researchers at North Carolina State University and the University of British Columbia surveyed 11,000 plants comprising 32 populations of scarlet monkeyflower, a perennial herb that grows throughout different climate zones in central Oregon, across California and into northern Baja California in Mexico, to see how characteristics like survival, growth and flowering differed in plants at more northern and southern latitudes.
Seema Sheth, an assistant professor in the Department of Plant and Microbial Biology at NC State and lead author of a paper on the research, said the results were a mixed bag.
"We found strong evidence for demographic compensation across the scarlet monkeyflower's geographic range," Sheth said.
In the five-year study period, plant survival and growth rates were low in the southern edges of the plant's geographic range - in California near the Mexican border - but flowering rates were high. In the northern edges of the plant's geographic range - central Oregon - survival and growth rates from one year to the next were higher than in the south, but plants didn't flower every year.
Sheth added that even though flowering rates were high in the south, many of these plants flowered once and then died. "Overall, the study suggests that all southern populations declined, so demographic compensation alone may not save these populations from extinction.
"But it's not all doom and gloom," she said. "Demographic compensation may buy these endangered populations some precious time for climatic conditions to improve or to allow evolutionary processes to help the plant adapt to unfavorable conditions."
Sheth said that the 2010-2014 study, which began while she and Angert were at Colorado State University, occurred during record hot and dry years in California. Rather than skewing the results, she said, the conditions faced by plants during the study period are expected to become more common due to climate change.
Sheth plans to follow up with a study that will take a "resurrection approach." She will "resurrect" plants from seeds collected across the scarlet monkeyflower's geographic range before and after the 2010-2014 study to learn about the impact of strong climatic events on the genetic variation of important traits. If southern plants needed to flower early in order to survive, for example, Sheth may be able to see noticeable selection of genes involved in early flowering, thereby limiting the genetic variability of this important trait in those southern plants.
"This approach allows us to resurrect pre-drought ancestors from stored seeds and compare them to post-drought descendants in the same environment, essentially allowing us to travel in time," Sheth said.
The study appears in Proceedings of the National Academy of Sciences. Amy Lauren Angert of the University of British Columbia co-authored the paper. Funding from the National Science Foundation (grant DEB-0950171) awarded to Angert and John Paul (now at the University of San Francisco) supported the work.
Note to editors: An abstract of the paper follows.
"Demographic compensation does not rescue populations at a trailing range edge"
Authors: Seema Sheth, North Carolina State University; Amy Lauren Angert, University of British Columbia
Published: Feb. 20, 2018 in PNAS
Abstract: Species' geographic ranges and climatic niches are likely to be increasingly mismatched due to rapid climate change. If a species' range and niche are out of equilibrium, then population performance should decrease from high-latitude "leading" range edges, where populations are expanding into recently ameliorated habitats, to low-latitude "trailing" range edges, where populations are contracting from newly unsuitable areas. Demographic compensation is a phenomenon whereby declines in some vital rates are offset by increases in others across time or space. In theory, demographic compensation could increase the range of environments over which populations can succeed and forestall range contraction at trailing edges. An outstanding question is whether range limits and range contractions reflect inadequate demographic compensation across environmental gradients, causing population declines at range edges. We collected demographic data from 32 populations of the scarlet monkeyflower (Erythranthe cardinalis) spanning 11° latitude in western North America and used integral projection models to evaluate population dynamics and assess demographic compensation across the species' range. During the 5-year study period, which included multiple years of severe drought and warming, population growth rates decreased from north to south, consistent with leading-trailing dynamics. Southern populations at the trailing range edge declined due to reduced survival, growth, and recruitment, despite compensatory increases in reproduction and faster life history characteristics. These results suggest that demographic compensation may only delay population collapse without the return of more favorable conditions or the contribution of other buffering mechanisms such as evolutionary rescue. |
There are a variety of algae species occurring in different habitats around the world, such as fresh water, marine water, desert sand and snow. These photosynthetic organisms are vital, since aquatic animals feed on the organic matter they produce. Algae also remove excess nutrients, such as phosphate, ammonia and nitrate, from the water; in excess, these nutrients can poison marine life. Algae grow fast in warm water and when there is a lot of organic material present. However, algae blooms can kill aquatic life, since they affect the pH of the water.
Optimal Conditions for Algae Growth
Most algae thrive and multiply in water with high pH levels ranging between seven and nine. The optimum pH for most algae species is 8.2 to 8.7. Neutral or lower water pH slows the growth of algae. Algae, like other plants, utilize light to photosynthesize food for growth. Low temperatures slow algae growth, which blooms and multiplies in warm temperatures of approximately 16 to 27 degrees Celsius.
pH Effects During the Day
During the day, photosynthesis takes place, due to the presence of sunlight. Algae draw carbon dioxide from the water to utilize during photosynthesis, promoting cell growth. Removal of carbon dioxide from the water raises the pH levels, as a result of the reduction in carbonate and bicarbonate levels of water, since they are used to replenish the lost carbon dioxide. Depletion of inorganic carbon from water by algae results in high pH levels, as evidenced by the rise in pH levels of natural waters, which can go up to 10 or beyond in the presence of algae. The rise of water pH also causes ionization of ammonia which is detrimental to aquatic life.
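Since pH is the negative base-10 logarithm of the hydrogen-ion concentration, the daytime rise described above corresponds to a multiplicative drop in [H+]. A small illustrative sketch (the concentrations are round numbers chosen for illustration, not measurements):

```python
import math

def pH(h_conc):
    """pH = -log10 of hydrogen-ion concentration (mol/L)."""
    return -math.log10(h_conc)

# Each tenfold drop in [H+] raises pH by one unit, so as algae strip
# CO2 (and with it carbonic acid) from the water, pH climbs quickly.
print(pH(1e-7))   # 7.0  - neutral water
print(pH(1e-8))   # 8.0  - within the 7-9 range where algae thrive
print(pH(1e-10))  # 10.0 - the daytime extreme the text mentions
```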
pH Effects at Night
At night, no photosynthesis takes place, so algae stop taking in carbon dioxide from the water and go into a respiratory stage. During this respiratory stage, algae consume oxygen that was produced during photosynthesis and release carbon dioxide into the water. This increased production of carbon dioxide decreases the pH of the water at night. Therefore, it is essential to control algae growth, since algae compete for oxygen with other aquatic animals at night.
Implications of Algae Effects on pH
As stated above, algae cause pH fluctuations in water during the day and at night. These pH fluctuations cause stress in aquatic animals and may stunt growth or lead to death. Large numbers of algae are likely to cause larger pH fluctuations, so it is important to control algae blooms. Algae growth can be minimized by planting water plants such as water lilies, which compete with the algae for nutrients and light. |
What is a semicolon?
A semicolon, the hybrid between a colon and a comma, is often considered one of the more pompous punctuation marks. In reality, it gets a bad rap just because few people know how and when to use a semicolon. The semicolon is used to indicate a pause, usually between two main clauses, that needs to be more pronounced than the pause of a comma.
So what are the practical ways to implement this little grammatical workhorse? Read on to see when to use a semicolon and how it can help you merge connected thoughts, separate listed items clearly, and form a bridge to another sentence.
Why use a semicolon?
In the classic grammar and style manual The Elements of Style by William Strunk and E.B. White (first published in 1919), the case for the semicolon is laid out clearly: “If two or more clauses, grammatically complete and not joined by a conjunction, are to form a single compound sentence, the proper mark of punctuation is a semicolon.” In simpler terms, that means you can use a semicolon to separate two complete sentences that are related but not directly linked by a connecting word like “but” or “so.” For example: “She didn’t show up to work today; she said she had a headache.”
Who uses semicolons?
The short answer: copy editors, professional writers, and you—if you’re savvy. “If words are the flesh and muscle of writing, then punctuation is the breath, and a good writer will make good use of it,” says Benjamin Dreyer of Penguin Random House, author of the forthcoming book Dreyer’s English. The semicolon is one of his favorite pieces of punctuation, and it was one of America’s great authors, Shirley Jackson, who inspired the admiration. “Shirley Jackson loved her semicolons,” says Dreyer. “I think that’s all the defense they need. The first paragraph of The Haunting of Hill House—one of the great opening paragraphs I can think of—includes three of them.” Here is Jackson’s sublime first paragraph:
“No live organism can continue for long to exist sanely under conditions of absolute reality; even larks and katydids are supposed, by some, to dream. Hill House, not sane, stood by itself against the hills, holding darkness within; it had stood so for eighty years and might stand for eighty more. Within, walls continued upright, bricks met neatly, floors were firm, and doors were sensibly shut; silence lay steadily against the wood and stone of Hill House, and whatever walked there, walked alone.”
Why and when to use a semicolon instead of a comma?
According to Dreyer, independent sentences “don’t hang together well with commas unless they’re as terse as ‘He came, he saw, he conquered.’ For anything of greater length, a semicolon is simply better, stronger glue than a comma, while a period is too divisive.” It’s also grammatically incorrect to link two complete sentences using a comma; a semicolon acknowledges that they’re two complete sentences, even if they are related.
When to use a semicolon
It helps to think of a semicolon as sort of a soft period. “Semicolons provide the right link between two essentially independent thoughts that one wants to present as just shy of independence,” explains Dreyer. According to yourdictionary.com, “[The semicolon] shows a closer relationship between the clauses than a period would show.” Here’s an example: David was getting hungry; he suddenly regretted skipping breakfast.
When to use a semicolon in a list
In lists, we generally use commas to separate the items. For example: at the market, I'll be picking up yogurt, blueberries, and coffee. However, some lists contain items that themselves include commas, and these get confusing unless you separate the items with semicolons. For example: at the market, I'll be picking up yogurt, which I know needs to be organic; blueberries, because they're in season and on sale; and coffee, so Daddy will actually be able to wake up in the morning. Semicolons keep the items in the list neatly contained, so your meaning is always clear.
When to use a semicolon before a transition
Use a semicolon to merge two sentences after a transitional phrase such as “however” and “as a result.” You probably already know to use a comma after the transitional phrase (“However, I still got the discount”), but you may not know that you can use a semicolon before the transitional phrase to form a bridge to the previous sentence (“The sale was officially starting on Saturday; however, I still got the discount on Friday because I had a special code”). You could technically use a period in that instance, but a semicolon signals that the thoughts are connected. Other examples: Everyone knows he deserves a raise; of course, he won’t get one with the current budget cuts. Her email is blowing up; for example, she got 50 messages in the last 10 minutes alone.
When not to use a semicolon
When you have a conjunction—a connecting word such as "but," "and," or "so"—a semicolon is unnecessary. In those cases, the correct punctuation mark is a comma. It would be incorrect to write "Judy jogged on the pavement; but it wasn't good for her knees." The correct version, using a comma, is "Judy jogged on the pavement, but it wasn't good for her knees." Of course, if you dropped the "but," a semicolon would be appropriate: "Judy jogged on the pavement; it wasn't good for her knees."
While analyzing samples from carbon-rich meteorites whose minerals indicated they had experienced high temperatures, scientists found amino acids, lending support to the theory that meteorites and comets assisted the origin of life.
Creating some of life’s building blocks in space may be a bit like making a sandwich – you can make them cold or hot, according to new NASA research. This evidence that there is more than one way to make crucial components of life increases the likelihood that life emerged elsewhere in the Universe, according to the research team, and gives support to the theory that a “kit” of ready-made parts created in space and delivered to Earth by impacts from meteorites and comets assisted the origin of life.
In the study, scientists with the Astrobiology Analytical Laboratory at NASA’s Goddard Space Flight Center in Greenbelt, Md., analyzed samples from fourteen carbon-rich meteorites with minerals that indicated they had experienced high temperatures – in some cases, over 2,000 degrees Fahrenheit. They found amino acids, which are the building blocks of proteins, used by life to speed up chemical reactions and build structures like hair, skin, and nails.
Previously, the Goddard team and other researchers have found amino acids in carbon-rich meteorites with mineralogy that revealed the amino acids were created by a relatively low-temperature process involving water, aldehyde and ketone compounds, ammonia, and cyanide called “Strecker-cyanohydrin synthesis.”
“Although we’ve found amino acids in carbon-rich meteorites before, we weren’t expecting to find them in these specific groups, since the high temperatures they experienced tend to destroy amino acids,” said Dr. Aaron Burton, a researcher in NASA’s Postdoctoral Program stationed at NASA Goddard. “However, the kind of amino acids we discovered in these meteorites indicates that they were produced by a different, high-temperature process as their parent asteroids gradually cooled down.” Burton is lead author of a paper on this discovery appearing March 9 in Meteoritics and Planetary Science.
In the new research, the team hypothesizes the amino acids were made by a high-temperature process involving gas containing hydrogen, carbon monoxide, and nitrogen, called "Fischer-Tropsch-type" reactions. These occur at temperatures ranging from about 200 to 1,000 degrees Fahrenheit in the presence of minerals that facilitate the reaction. Such reactions are used to make synthetic lubricating oil and other hydrocarbons; during World War II, they were used to make gasoline from coal in an attempt to overcome a severe fuel shortage.
Researchers believe the parent asteroids of these meteorites were heated to high temperatures by collisions or the decay of radioactive elements. As the asteroid cooled, Fischer-Tropsch-type (FTT) reactions could have happened on mineral surfaces utilizing gas trapped inside small pores in the asteroid.
FTT reactions may even have created amino acids on dust grains in the solar nebula, the cloud of gas and dust that collapsed under its gravity to form the solar system. “Water, which is two hydrogen atoms bound to an oxygen atom, in liquid form is considered a critical ingredient for life. However, with FTT reactions, all that’s needed is hydrogen, carbon monoxide, and nitrogen as gases, which are all very common in space. With FTT reactions, you can begin making some prebiotic components of life very early, before you have asteroids or planets with liquid water,” said Burton.
In the laboratory, FTT reactions produce amino acids, and can show a preference for making straight-chain molecules. “In almost all of the 14 meteorites we analyzed, we found that most of the amino acids had these straight chains, suggesting FTT reactions could have made them,” said Burton.
It’s possible that both Strecker and FTT processes contributed to the supply of amino acids in other meteorites. However, evidence for the FTT reactions would tend to get lost, because they create amino acids in much lower abundances than Strecker synthesis does. If an asteroid with an initial amino acid supply from FTT reactions was later altered by water and Strecker synthesis, the Strecker products would overwrite the small contribution from the FTT reactions, according to the team.
The team believes the majority of the amino acids they found in the 14 meteorites were truly created in space, and not the result of contamination from terrestrial life, for a few reasons. First, the amino acids in life (and in contamination from industrial products) are frequently linked together in long chains, either as proteins in biology or polymers in industrial products. Most of the amino acids discovered in the new research were not bound up in proteins or polymers. In addition, the most abundant amino acids found in biology are those found in proteins, but such "proteinogenic" amino acids represent only a small percentage of the amino acids found in the meteorites. Finally, the team analyzed a sample of ice taken from underneath one of the meteorites. This ice had only trace levels of amino acids, suggesting the meteorites are relatively pristine.
The experiments showing FTT reactions produce amino acids were performed over 40 years ago. The products have not been analyzed with modern techniques, so the exact distributions of amino acid products have not been determined. The team wants to test FTT reactions in the laboratory using a variety of ingredients and conditions to see if any produce the types of amino acids with the abundances they found in the 14 meteorites.
The team also wants to expand their search for amino acids to all known groups of carbon-rich meteorites. There are eight different groups of carbon-rich meteorites, called “carbonaceous chondrites.” The new work adds two additional groups to the three previously known to have produced amino acids, leaving three groups to be tested. These three remaining groups have a high metal content as well as evidence for high temperatures. “We’ll see if they have amino acids also, and hopefully gain some insight into how they were made,” says Burton. When the team began looking for amino acids in carbon-rich meteorites, it was considered somewhat of a long shot, but now: “We would be surprised if we didn’t discover amino acids in a carbon-rich meteorite,” says Burton.
The research was funded by the NASA Astrobiology Institute (NAI), the Goddard Center for Astrobiology, and the NASA Cosmochemistry Program. NAI is managed by NASA Ames Research Center in Mountain View, Calif. Dr. Burton was supported by the NASA Postdoctoral Program, administered by Oak Ridge Associated Universities through a contract with NASA. Meteorite samples were provided by Dr. Kevin Righter of NASA’s Johnson Space Center, Houston, Texas.
Image: Antarctic Search for Meteorites program, Case Western Reserve University
Sea level rise — resulting in the flooding and eventual disappearance of land — is one of the most well-known and serious problems facing humans as a result of climate change. Low-lying areas around the world are all at risk. By the end of this century, land currently home to 200 million people will likely be permanently below the high-tide line.
Because of the economic benefits associated with access to water — for example, shipping routes, fisheries, tourism, and recreation — dense urban areas have long been concentrated along coastal regions. Today, about 40% of the world’s population lives within 60 miles of the coast.
As the climate crisis intensifies, however, coastal living has become a major liability. Global sea levels are projected to rise between 2 and 7 feet by 2050, and possibly more. By the end of the century, sea levels could rise as much as 20 feet. In this scenario, the United States alone would lose nearly 50,000 square miles, which today are home to 23.4 million people.
Frequent flooding is one of the first problems associated with sea level rise. Coastal areas of Bangladesh and Vietnam, for example, which are home to 43 million and 31 million people, respectively, are likely to experience saltwater flooding several times a year by 2050.
Two of the cities selected for our list are in the United States.
24/7 Wall St. used data from Climate Central — an independent organization of scientists and journalists researching and reporting on climate change — to identify 25 major metropolitan areas and urban agglomerations projected to be at risk of substantial flooding by 2050 as a result of sea level rise due to climate change. The data assembled by Climate Central is documented in the report "New elevation data triple estimates of global vulnerability to sea-level rise and coastal flooding," published in October 2019 in Nature.
These projections are near the high end of the range of sea level futures anticipated by the scientific community as of 2019. We elected to concentrate on the high-risk scenario because, while there are still steps the world can take to address the problems of climate change, we are not on track to meet the Paris Agreement's goals.
Literacy is both an academic and a life skill. The main elements of literacy are: reading, writing and oracy (speaking and listening). In order to encourage our pupils to achieve in literacy we deliver a curriculum which is bespoke to the individual needs, abilities and potential of all our students.
In order to ensure that we have an accurate picture of each child's needs, we use the following summative assessments:
Literacy Numeracy Framework
Child Development Profile
Salford Reading Test
Salford Comprehension Test
At Ysgol Ty Coch we believe that a curriculum can only be truly effective if it is not only bespoke, but also inclusive for all pupils. With this in mind we have been carrying out our own action research into the reading abilities of pupils with Autism Spectrum Disorder, with a particular focus on nonverbal pupils.
Delivering reading tests to nonverbal or minimally verbal pupils can be a challenge, as the majority of reading tests currently available rely on a person's ability to vocalise as they read. Seventy schools took part in a survey we published (Arnold and Reed 2016) in the British Journal of Special Education. None of the participating schools was happy with the reading assessments available for nonverbal pupils, with 100% agreeing that these tests do not provide an accurate measure of reading ability for these students. One result of this discontent was that 30 of the 70 schools were not using any form of summative assessment with their nonverbal students.
With this in mind, we designed a novel digital form of reading test comparable to the Salford Reading Test, which does not rely on a student's ability to verbalise. Results showed that some of our nonverbal students with ASD are also some of our best readers! Having this information has enabled us to adapt the curriculum for these students in such a way as to further foster progress and enhance their enjoyment of reading.
Arnold, S. & Reed, P., 2016. Reading assessments for students with ASD: a survey of summative reading assessments used in special educational schools in the UK. British Journal of Special Education, 43(2), pp. 122–141. Available at: http://doi.wiley.com/10.1111/1467-8578.12127.
Grade Level: Elementary school
Time Required: 45 minutes
Expendable Cost/Group: US $0.40
Group Size: 2
Subject Areas: Chemistry, Physical Science, Problem Solving, Reasoning and Proof, Science and Technology
Bolded words are vocabulary and concepts to highlight with students during the activity.
Who has heard of chromatography? Chromatography is a way to look at complex mixtures by separating them into their components. Criminal investigators use this technology to identify substances such as chemicals, blood, ink and other fluids. And, environmental engineers use chromatography to prepare solutions to monitor and test groundwater for contaminants. Different inks have different properties, such as how well they dissolve in certain types of solvents. When you dip a portion of the chromatography paper into the solvent, the solvent begins to move up the paper. As the solvent rises up the paper, it dissolves the ink on the paper into its components. The ink components travel up the paper with the solvent, and the distance they travel is based on how readily each component dissolves in the solvent. What are these components? We are about to find out!
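How far each component travels is often quantified as a retention factor, Rf: the distance a component moves divided by the distance the solvent front moves. The activity below does not require it, but for instructors who want to put numbers on the results, here is a minimal sketch (the function name and sample distances are illustrative, not from this lesson):

```python
def retention_factor(component_distance_cm, solvent_front_cm):
    """Rf = distance moved by an ink component / distance moved by the solvent front."""
    return component_distance_cm / solvent_front_cm

# A dye spot that moved 3.2 cm while the alcohol front moved 8.0 cm:
print(retention_factor(3.2, 8.0))  # 0.4
```

Components that dissolve readily in the solvent have Rf values close to 1; those that barely move have values close to 0.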
Each group needs:
- 1 coffee filter
- isopropyl alcohol
- 2 popsicle sticks
- 2 clear plastic cups
- black and colored permanent markers
- paper and pencils
Students use isopropyl alcohol to separate the components of black and colored permanent ink on coffee filters—the outcome of which is a surprise.
- Divide the class into groups of two students each. Hand out the supplies, excluding the isopropyl alcohol.
- Explain to students that their goal today is to use isopropyl alcohol to separate the different colors contained within various permanent marker inks.
- Have groups cut their coffee filters into half-inch-wide strips, making sure the ends are square (not rounded).
- Students then take a black marker and draw a line across one strip of coffee filter half an inch from one end. (Note: The line placement needs to be fairly precise, so help students, as necessary.)
- Then, have groups tape the strip to a popsicle stick so that the end with the marker line hangs down from the stick and the bottom of the filter strip just barely rests on the bottom of the cup (see the image).
- When groups are ready, an instructor pours isopropyl alcohol into their cups, ensuring the marker line is above the top of the liquid. (Note: Push the popsicle stick to the edge of the cup to avoid accidentally pouring alcohol on the strip.)
- Have students watch as the liquid moves up the filter and pulls the colored inks out of the ink line.
- When students have finished, ask them to write a description of the colors they found in their black ink.
- Have students experiment with colored markers and also with water rather than alcohol to see if they obtain different results. Have them report their results on their papers.
- Have students compare results and discuss their experiment outcomes.
Wrap Up - Thought Questions
- Were you surprised by the colors that you found in any of the inks?
- Why did you use isopropyl alcohol when doing this experiment?
- Why can't the alcohol touch the line of ink?
More Curriculum Like This
To increase students' awareness of possible invisible pollutants in drinking water sources, students perform an exciting lab requiring them to think about how solutions and mixtures exist even in unsuspecting places such as ink. They use alcohol and chromatography paper to separate the components of...
This lesson plan introduces students to the properties of mixtures and solutions. It includes teacher instructions for a class demonstration that gives students the chance to compare and contrast the physical characteristics of some simple mixtures and solutions.
Students learn how to classify materials as mixtures, elements or compounds and identify the properties of each type. The concept of separation of mixtures is also introduced since nearly every element or compound is found naturally in an impure state such as a mixture of two or more substances, and...
Student teams are challenged to evaluate the design of several liquid soaps to answer the question, “Which soap is the best?” Through two simple teacher class demonstrations and the activity investigation, students learn about surface tension and how it is measured, the properties of surfactants (so...
Copyright© 2013 by Regents of the University of Colorado
Last modified: September 22, 2017
The “digital divide” is a term originally coined in the early 2000s to describe the “haves” and “have-nots” of computers and mobile technology. There was great concern that low-income children would be left behind because of the lack of technology in their homes. In the United States, the predominantly white, middle-class families who could afford computers (and later mobile technology) were able to let their children experience (and learn) so much more through the internet accessed on these devices. A number of things addressed these fears, including the decreasing cost of computers. That helped bridge the digital divide, but nothing had quite the effect of the one-to-one programs we now see in so many school districts, including those in low-income areas. All children could have access to the internet. Digital divide closed. The problem is that there is little evidence to support the idea that technology in schools improves learning outcomes.
Language skills are a necessity and a right for EVERYONE – that is one of the main messages of the European Day of Languages.
The overall objectives are to raise awareness of:
- Europe’s rich linguistic diversity, which must be preserved and enhanced;
- the need to diversify the range of languages people learn (to include less widely used languages), which results in plurilingualism;
- the need for people to develop some degree of proficiency in two languages or more to be able to play their full part in democratic citizenship in Europe.
Year 7 pupils completed a language challenge competition for the European Day of Languages. Pupils raced to speak to teachers to work out a list of phrases in different languages.
Once upon a time, there was a man named Joseph who liked to make maps. Joseph was born and educated in France. He was a talented young man who was good at math and science. When he finished his schooling, he became an astronomer in Paris. Joseph’s life was a fine one, full of prestige and friends.
All did not remain well, however. Joseph tried using mathematical probability to play the stock market. This cost him and his friends a lot of money. Joseph’s friends shunned him. With his life in ruins, he decided to move to a new land for a fresh start. This land was the United States of America and he arrived on its shores in 1832.
Joseph needed employment. As an astronomer, he thought he could get work with the government, so he paid a visit to Washington, D.C. There he made contact with officials who wanted maps and surveys of America’s little-known lands. Using his astronomical and scientific background, Joseph could become a mapmaker. But with a land the size of America, what area should he map? He decided to make a map of the Mississippi River valley. The government wasn’t much help to Joseph. With no money or authorization, he was sent on his way with a letter of introduction.
Undaunted, Joseph studied the notes and maps made by previous explorers. While he was unable to get notes from Lt. Zebulon Pike’s 1805-06 expedition up the Mississippi River, he did have access to information on Gov. Lewis Cass’ 1820 expedition, as well as Lt. James Allen’s maps made on Henry Rowe Schoolcraft’s 1832 expedition.
Joseph set out to survey the Mississippi River in December 1832. Traveling lightly, he made his home with whoever would take him in. Throughout his journey, Joseph used various instruments, such as a sextant, a chronometer, and a barometer, to map his location on the earth. The accuracy of his map depended upon his astronomical readings.
Joseph wandered the southern regions of the Mississippi River valley for years. The life of the mapmaker was not an easy one. Joseph was frequently ill due to his weak constitution and exposure to the elements. He was nothing if not determined and, after studying the southern portion of the river, he turned his attention to the location of the source of the river.
In the summer of 1836, he arrived at Fort Snelling, where he was taken in by Indian Agent Lawrence Taliaferro’s family. Taliaferro gave Joseph all the supplies he needed for his visit to the source of the river.
As Joseph traveled north on the river with a few companions, he continued making notes on his geographical position and drawing the landscape. He also met with Indian inhabitants and carefully recorded their names for geographical features. When he reached Lake Itasca at the end of August 1836, he spent several days investigating this source of the great Mississippi River. He then made the return trip, which brought him back to Fort Snelling at the end of September of that year. Joseph spent the winter at the fort creating his map of the Upper Mississippi River valley. Mrs. Elizabeth Taliaferro was particularly kind to Joseph during his stay, making him a special porridge to ease his upset stomach.
Within the next seven years, Joseph’s fortunes improved somewhat. The government was impressed with his map and paid him for it. He was also appointed to lead an expedition by the Bureau of Topographical Engineers. Eventually, Joseph wound up back in Washington, D.C., where he compiled his information. With the assistance of John C. Fremont, Joseph created a version of his map for engraving called the “Hydrographical Basin of the Upper Mississippi River from Astronomical and Barometrical Observations, Surveys and Information”. This map was the most accurate and complete map made to date of the Upper Mississippi River valley. It corrected Lt. Zebulon Pike’s map and introduced map-making procedures that continue to the current day.
Joseph, the careful mapmaker, is none other than Joseph N. Nicollet. He died on July 17, 1843, at 57 years of age. He never completely recovered from the reputation he had ruined in France, but, through his map, he gained a prestige far greater.
By Mary Warner
Copyright 2004, Morrison County Historical Society
Article sources: The Journals of Joseph N. Nicollet, translated by Andre Fertey, edited by Martha Coleman Bray, 1970, Minnesota Historical Society.
David Rumsey Map Collection website at http://www.davidrumsey.com/maps1840.html.
Tritium is the only naturally occurring radioisotope of hydrogen. Its atomic number is 1, which means there is 1 proton and 1 electron in the atomic structure. Unlike the nuclei of ordinary hydrogen and deuterium, however, the tritium nucleus contains 2 neutrons. Tritium occurs naturally, but it is extremely rare: it is produced in the atmosphere when cosmic rays collide with air molecules, and it is also a byproduct of the production of electricity by nuclear power plants. The name of this isotope is formed from the Greek word τρίτος (trítos), meaning “third”.
Decay of Tritium
Tritium is a radioactive isotope, but it emits a very weak form of radiation: a low-energy beta particle, i.e. an electron. It is a pure beta emitter (a beta emitter without accompanying gamma radiation). The electron’s kinetic energy varies, with an average of 5.7 keV, while the remaining energy is carried off by the nearly undetectable electron antineutrino. Electrons of such low energy cannot penetrate the skin and do not travel far in air; beta particles from tritium penetrate only about 6.0 mm of air.
Tritium decays via negative beta decay into helium-3, with a half-life of 12.3 years.
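The 12.3-year half-life translates directly into a decay estimate. A minimal sketch (the function name is illustrative):

```python
TRITIUM_HALF_LIFE_YEARS = 12.3  # half-life quoted above

def fraction_remaining(years, half_life=TRITIUM_HALF_LIFE_YEARS):
    """Fraction of an initial tritium inventory left after `years` of decay."""
    return 0.5 ** (years / half_life)

print(fraction_remaining(12.3))  # 0.5  (one half-life)
print(fraction_remaining(24.6))  # 0.25 (two half-lives)
```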
Tritium in nuclear reactors
Tritium is a byproduct in nuclear reactors. In terms of releases of tritiated water, the most important source of tritium in nuclear power plants is the boric acid commonly used as a chemical shim to compensate for the excess initial reactivity. The main reactions generating tritium from boron are listed below:
10B + n → T + 2 alpha (often written 10B(n,T)2alpha)
This threshold reaction of fast neutrons with the isotope 10B is the main source of radioactive tritium in the primary circuit of all PWRs that use boric acid as a chemical shim. Note that this reaction occurs very rarely in comparison with the most common reaction of 10B with thermal neutrons, the (n,alpha) reaction.
Other neutron reactions can also, more rarely, lead to the formation of radioactive tritium, for example:
10B(n,alpha)7Li followed by 7Li(n,n+alpha)3H – a threshold reaction (~3 MeV).
[Figure: Boron-10 – comparison of the total cross-section with the cross-section for (n,alpha) reactions. Source: JANIS (Java-based Nuclear Data Information Software); the JEFF-3.1.1 Nuclear Data Library.]
Tritium is also a fission product of the ternary fission of fissionable materials. In fact, fission probably produces more tritium than all other sources in light water reactors, with a yield of about one atom per 10,000 fissions. On the other hand, only a very small fraction of the fission-product tritium diffuses out of the fuel matrix and fuel cladding into the primary coolant.
Tritium is also produced in the reaction of neutrons with 6Li, i.e. 6Li(n,alpha)T. This reaction is best known for neutron detection, but it also matters in reactors because LiOH is added to control the pH of the primary coolant in some LWRs. The reaction cross-section for thermal neutrons is σ = 925 barns, and natural lithium has a 6Li abundance of 7.4%.
Tritium occurs in nuclear power plants in the form of tritiated water. Tritiated water is like normal water but very weakly radioactive, and at the concentrations involved it does not pose a hazard to human health. Releases of tritiated water are closely monitored by plant operators and state supervisors.
Reference: Jacobs D.G. Sources of Tritium and Its Behaviour Upon Release to the Environment. US Atomic Energy Commission, 1968.
Tritium in Nature
Tritium is produced in the atmosphere when cosmic rays collide with air molecules. In the most important reaction for natural production, a fast neutron (which must have energy greater than 4.0 MeV) interacts with atmospheric nitrogen: 14N + n → 12C + 3H.
Worldwide, the production of tritium from natural sources is 148 petabecquerels per year. The tritiated water produced participates in the water cycle; typical concentrations are:
- about 400 Bq/m3 in continental water
- about 100 Bq/m3 in oceans
Tritium poses a risk to health as a result of internal exposure only following ingestion in drinking water or food, or inhalation or absorption through the skin. The tritium taken into the body is uniformly distributed among all soft tissues. An average annual dose from natural tritium intake is 0.01 μSv.
In the case of artificial tritium ingestion or inhalation, the biological half-life of tritium is 10 days for HTO (tritiated water) and 40 days for OBT (organically bound tritium) formed from HTO in the body of adults. It has also been shown that the biological half-life of HTO depends strongly on many variables, varying from about 4 to 18 days. During the warmer months the average is lower, which is attributed to increased water intake; likewise, drinking larger amounts of alcohol reduces the biological half-life of water in the body.
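The physical and biological half-lives combine into an effective half-life via the standard relation 1/T_eff = 1/T_phys + 1/T_bio. That formula is not stated in the text above; here is a sketch using the values it quotes:

```python
def effective_half_life(physical_days, biological_days):
    """Effective half-life from 1/T_eff = 1/T_phys + 1/T_bio."""
    return (physical_days * biological_days) / (physical_days + biological_days)

PHYSICAL_DAYS = 12.3 * 365.25  # tritium's 12.3-year physical half-life, in days

# Biology dominates: the effective half-life sits barely below the biological one.
print(round(effective_half_life(PHYSICAL_DAYS, 10), 2))  # 9.98 (HTO)
print(round(effective_half_life(PHYSICAL_DAYS, 40), 1))  # 39.6 (OBT)
```

Because tritium's physical half-life is so much longer than its residence time in the body, elimination, not decay, governs the dose from an intake.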
See also: Tritium in Nature
Facts and Information for Educators About Ebola
As we continue to hear the recent news about Ebola — a third case in Texas and the continuation of the outbreak in West Africa — we all are becoming increasingly concerned. And, as the situation continues to unfold, educators have many questions about their own safety and that of their students. The CDC and other groups have produced some excellent resources that help explain how the disease is spread, what the symptoms are, and the steps educators should take in the event of a suspected case in their schools.
The NEA Health Information Network (HIN) has gathered the following resources from healthcare officials to ensure that educators are well-informed about Ebola:
- Ebola facts from the CDC: Provides the latest information on the progress to stem the outbreak in West Africa, clinical guidance, and recommendations for personal protective practices.
- The American Federation of Teachers (AFT) has prepared a two page document on preparing for Ebola in schools.
- The New York City Department of Health has created guidance for daycares and schools: receiving students and staff from areas affected by Ebola.
- How to discuss Ebola with your children from the American Academy of Pediatrics.
- Safety and Health Information on Ebola from OSHA.
- Ebola and Fear — a blog post by NEA HIN Executive Director, Jim Bender.
Fortunately, the risk of Ebola infection in the U.S. is still low. As the go-to source for NEA members about health and safety issues, NEA HIN will continue to monitor the situation and provide timely updates as well as relevant information for educators.
Centrifuges are widely used in chemical, petroleum, food, pharmaceutical, mineral processing, coal, water treatment and shipbuilding sectors.
In ancient China, people tied one end of a rope to a clay pot, held the other end, and swung the pot in circles, generating centrifugal force that squeezed the honey out of the pot. This is an early application of the centrifugal separation principle.
Industrial centrifuges were born in Europe. For example, in the middle of the 19th century, there were three-legged centrifuges for textile dehydration and top-suspension centrifuges for separating crystal sugar from sugar factories. These earliest centrifuges were batch operated and manually drained.
Thanks to improvements in the slag-discharge mechanism, continuously operating centrifuges appeared in the 1930s, and batch centrifuges were also improved by the introduction of automatic control.
Industrial centrifuges can be divided into three types: filtration centrifuge, sedimentation centrifuge and separator according to structure and separation requirements.
The centrifuge has a cylinder, called a drum, that rotates at high speed about its own axis, usually driven by an electric motor. After the suspension (or emulsion) is fed into the drum, it is rapidly accelerated to the drum's rotational speed, and the components are separated under centrifugal force and discharged separately. Generally, the higher the drum speed, the better the separation.
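The relationship between drum speed and separating force can be sketched numerically. A common figure of merit is the relative centrifugal force (RCF), the centripetal acceleration at the drum wall expressed in multiples of gravity. The speed and radius values below are assumptions chosen for illustration, not figures from the article.

```python
import math

G = 9.80665  # standard gravity, m/s^2

def rcf(rpm, radius_m):
    """Relative centrifugal force (in multiples of g) at the drum wall.

    RCF = omega^2 * r / g, where omega = 2*pi*rpm/60 is the angular
    speed in rad/s and r is the drum radius in metres.
    """
    omega = 2 * math.pi * rpm / 60.0
    return omega ** 2 * radius_m / G

# Hypothetical 0.4 m diameter drum; doubling the speed quadruples the force.
print(f"{rcf(3000, 0.2):.0f} g")
print(f"{rcf(6000, 0.2):.0f} g")
```

Because the force grows with the square of the rotational speed, even modest speed increases improve separation markedly, which is why high-speed drums dominate industrial designs.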
What is a constitution?
This film was made by International IDEA, to explain the importance of constitutions, and the role they play in underpinning the state. A constitution provides rules about how the country is run. It ensures that the state’s power is dispersed between different bodies and individuals, and that citizens’ rights are upheld.
A constitution provides the basis for governance in a country, which is essential to making sure that everyone’s interests and needs are addressed. It determines how laws are made, and details the process by which the government rules.
In principle, any substance whose properties visibly change with temperature can be used to measure it. We could use a material that changes color when heated: one could discern that the temperature is high when the material radiates hues of blue and that it is low when the material radiates hues of red.
Similarly, water in a narrow tube rises or falls when its temperature rises or falls. This is the working principle of every sealed, liquid-containing thermometer. However, if something as abundant and cheap as water works so well, why do we insist on snubbing it for something as extremely rare and expensive as mercury?
Mankind learned about thermodynamic temperature centuries after the first thermometer was invented. Therefore, initially, thermometers defined temperatures. The first thermometers, however, weren’t thermo-meters, or heat measurers, but thermoscopes: devices that merely signaled whether the temperature was high or low. These devices weren’t calibrated to a standard scale; they could only make crude, vague assessments.
The invention of the earliest such thermoscope is credited to Hero of Alexandria, a curious engineer considered the greatest experimenter of antiquity. His device consisted of a tube filled with air, one end of which was submerged in a tiny bowl of water. When the thermoscope touched a hot or cold surface, the air would expand or contract, causing the air-water interface to rise or fall.
Even Galileo’s invention worked on the same principle. However, not only were these developments bereft of any scale, but they were also sensitive to the air’s pressure. The urge to develop a device that solely responded to heat led Joseph Delmedigo, a student of Galileo’s, to invent the first sealed liquid-in-glass thermometer. This was the first thermometer because it was marked with a scale. However, the liquid he sealed wasn’t water, but alcohol.
Coefficient of Expansion
Materials under constant pressure expand when subjected to heat because the heat elevates the kinetic energy of their atoms, causing them to move more violently and therefore move apart from each other. The increase in volume is evident in solids, such as metallic railway tracks and rubber tires, and in fluids like water, alcohol, mercury and halogens. However, the amount of expansion per degree increase in temperature differs for every material. This material constant is called the coefficient of expansion.
Alcohol is favored over water for the simple reason that it boasts a higher coefficient of expansion. Even a small change in temperature causes a drastic change in its volume along the tube. However, alcohol is so sensitive that these changes cause the alcohol in the tube to behave almost turbulently. The levels constantly waver with even minor changes in temperature. This capriciousness is disconcerting, as the reading would immediately change when, say, the thermometer is removed from a pot of boiling water whose temperature we wished to measure. It would then immediately reflect the temperature of its new environment.
To avoid this unreliability, Dutch inventor Daniel Fahrenheit replaced alcohol with mercury. Mercury has a higher coefficient of expansion than water, which means that changes in its volume with temperature are more noticeable. However, its coefficient is almost six times smaller than alcohol’s: the rise or fall in alcohol’s volume per degree change in temperature is roughly six times greater than mercury’s.
This means that mercury in a sealed tube would rise at a much slower pace than alcohol, but it also means that mercury would fall equally slowly when the thermometer is removed from a pot filled with boiling water. The reading would be effectively undisturbed, making the thermometer superiorly reliable.
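The comparison above can be made concrete with the volumetric expansion law ΔV = β·V₀·ΔT. The coefficients below are approximate room-temperature values chosen as illustrative assumptions, not figures quoted by the article.

```python
# Volumetric thermal expansion: dV = beta * V0 * dT.
# Coefficients (per degree C) are approximate room-temperature values,
# included as assumptions for illustration.
BETA_PER_C = {
    "ethanol": 1.09e-3,
    "mercury": 1.81e-4,
}

def volume_change_ml(liquid, v0_ml, delta_t_c):
    """Change in volume (mL) of `liquid` for a given temperature change."""
    return BETA_PER_C[liquid] * v0_ml * delta_t_c

# A 1 mL bulb warmed by 10 degrees C:
for liquid in BETA_PER_C:
    print(f"{liquid}: {volume_change_ml(liquid, 1.0, 10.0):.5f} mL")

# Alcohol expands roughly six times more per degree than mercury:
print(round(BETA_PER_C["ethanol"] / BETA_PER_C["mercury"], 1))  # ~6.0
```

The six-fold ratio is why an alcohol column races up and down the tube while a mercury column creeps, which is precisely the trade-off between sensitivity and reliability the article describes.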
Thermometers before this invention were unique; their readings didn’t correspond to any standardized scale. However, Fahrenheit proposed a scale that was adopted by every manufacturer of mercury and eventually, every type of thermometer. This transition wasn’t exactly cumbersome, since almost every mercury thermometer was manufactured by Fahrenheit himself. The scale, which was slightly altered at a later date, now bears his name.
Instances When Alcohol Fares Better
Alcohol’s hypersensitivity is compensated by its virtues. Unlike mercury, alcohol is much cheaper and nowhere near as rare. It is also not toxic. A lab might have to be sealed off for hours if a mercury thermometer breaks, since inhaling mercury vapor can cause serious health problems. Alcohol poses no such threat.
What’s more, alcohol’s freezing point is an astonishing -115°C, compared to mercury’s -40°C. This means that mercury thermometers cannot measure temperatures below -40°C, a range not at all rare in science laboratories or in superconductor manufacturing.
However, unlike alcohol, mercury isn’t colorless; alcohol’s transparency forces manufacturers to add artificial dyes to make it clearly visible. Also, while alcohol can measure shockingly low temperatures, it cannot measure temperatures above 78.37°C, its boiling point. Compare that meager figure to mercury’s boiling point of 356.7°C!
While nothing can be done about mercury’s rarity, expense and toxicity, one can still overcome its thermal limitations. To further increase its boiling point, mercury is often sealed with an inert gas, such as nitrogen. The inert gas increases the pressure on the liquid mercury, thereby raising its boiling point even further.
One can also extend its freezing point by alloying it with thallium. These mercury-thallium thermometers can measure temperatures as low as -62°C. Still, despite their small flaws, mercury thermometers are regarded as among the most accurate thermometers available.
In 1846 – as the Mexican-American War brought U.S. troops into conflict with Santa Anna’s forces – the Mexican province of Alta California found itself drawn into the political storm. Caught between Mexico’s mercantilism and the United States’ “Manifest Destiny,” Californians questioned their future. Would it be better to stay a neglected satellite, become a territory of a new country, or seize complete independence?
California’s past relations with Spain and Mexico had alternated between laissez-faire and pushy colonization. Americans were not “unknown persons” to the Californios in the mid-19th century. Both the neglect from Mexico and the welcoming hands from America set the stage for a momentous chapter in Western, California, and American history.

Discovered in 1542 by Juan Rodríguez Cabrillo and claimed for Spain, California remained a pristine wilderness – inhabited only by native peoples – for about 200 years. Other Spanish explorers sailed the coastline, making maps and searching for the elusive (and non-existent) Northwest Passage.
In 1769, an imaginary threat from Russian fur-trappers prompted the Spanish governor to organize colonization of California. The result was the Sacred Expedition, led by Captain de Portola and the Catholic missionary Junipero Serra. Establishing military presidios (forts) and religious missions, this expedition was the first known European group to reach Alta California by land. Eventually, a chain of 21 religious missions were built in an attempt to “Christianize” the native peoples and teach them “civilized” skills – including farming, cattle ranching, brick making, and leather tanning.
Mexico declared independence from Spain in 1810, but it took time for them to establish that freedom and their new government. During that period, Alta California continued under the control of governors without much direction from the southern country. More settlers continued to arrive; the pueblos expanded and large ranchos divided the best open land for livestock herds. Cattle became a foundation of the territory’s economy, and tanned leather was a valuable product.
From the North American east coast, the United States’ sailing ships circled the globe, bringing home foreign treasures and practical trade items. California had a market of leather and beef, and the Americans were willing to do business. Perhaps one of the most famous primary sources in California’s saga and American maritime history is Two Years Before The Mast by Richard Henry Dana, Jr. While detailing the hardships, injustices, and adventures of a common seaman on merchant ships, the account highlights trading for leather hides along the California coast. The book was published in 1840, and records voyages and trading accounts from the late 1830’s.
The Americans didn’t just come by sea. In 1826, a group of fur trappers led by explorer Jedediah Smith stumbled into California, seeking refuge at Mission San Gabriel. They had crossed the deserts between the Great Salt Lake in Utah and the Pacific Ocean. The governor of California frowned suspiciously and sent the trappers packing…or so he thought. In reality, they traveled north, exploring and hunting. And they returned the following year to retrieve their cache of furs and the few members who had been forced to stay behind.
In 1833, the golden age of the California Ranchos began while the Mission bells tolled. Mexico passed the Secularization Act of 1833, removing all missions and their extensive land grants from the control of the Catholic church. The lands were divided into large ranchos and the adobe churches and courtyards were partially dismantled as the rancheros took the materials to build their own dwellings.
During the 1830’s, some American businessmen and adventurers received permission to settle in California and became naturalized citizens. Thomas Larkin built the first known two-story house in Alta California and took his place in society as an influential business owner and U.S. consul to Mexico. A few American men were granted ranchos, took positions in California’s local government, or married into prominent families. Others just came, settled, and prospered without permission from the Mexican government.
Observing American prosperity and the good character of their new neighbors from the east, some families in California began wondering if they would have a stronger economy, more responsive government, and more opportunity if they broke away from Mexico. Certainly, Alta California was not a favored province of the mother country.
Meanwhile, some of the American immigrants started whispering about the possibility of somehow seizing California and making it a new state in the American union. The land was mostly undeveloped and held great opportunity for agriculture, logging, and other 19th Century economic pursuits. (Gold had not been discovered yet.)
Could California secede from Mexico? Should it remain an independent republic like Texas? Or petition immediately for statehood in the United States? If the latter, should it be a slave or free state? The questions swirled. The Californios grumbled about the neglect from Mexico, but continued to maintain their idyllic rancho or pueblo lifestyles.
The American settlers watched the local situation and followed reports of the war with Mexico. If the opportunity came and they could have military support, they would seize California for the United States. In the late spring of 1846, settlers’ frustration increased when the Alta California government refused to sell or rent land and issued threats of expulsion since the pioneers had arrived without special permission.
While U.S. troops fought along the Texas border and invaded Mexico, the Americans in California prepared to create a revolt and a new state. June 14, 1846 – a small group of Americans crept toward the town of Sonoma, ready to raise a new flag. The Bear Flag Revolt was underway…and it would alter the course of history, eventually bringing California’s land, opportunity, and still-hidden wealth to the United States as a free state. A decade and a half later, California gold would be financing the Union cause during the Civil War.
However, that was still in the future on the shadowy morning as the Americans made the first move to conquer California, persuading their neighbors to leave Mexico’s neglect and join the United States.